WO2017006615A1 - Aging prediction system, aging prediction method, and aging prediction program - Google Patents

Aging prediction system, aging prediction method, and aging prediction program

Info

Publication number
WO2017006615A1
Authority
WO
WIPO (PCT)
Prior art keywords
aging
model
dimensional
prediction
texture
Prior art date
Application number
PCT/JP2016/063227
Other languages
French (fr)
Japanese (ja)
Inventor
永田 毅
和敏 松崎
秀正 前川
和彦 今泉
Original Assignee
Mizuho Information & Research Institute, Inc.
Japan, as represented by the Director of the National Research Institute of Police Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2015137942A (JP5950486B1)
Application filed by Mizuho Information & Research Institute, Inc. and Japan, as represented by the Director of the National Research Institute of Police Science
Priority to CN201680016809.0A (CN107408290A)
Priority to KR1020177025018A (KR101968437B1)
Publication of WO2017006615A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Definitions

  • the present invention relates to an aging prediction system, an aging prediction method, and an aging prediction program for performing an aging simulation of a face image.
  • In the field of beauty care, systems for predicting changes in the shape of a face or body after aging have been studied (for example, see Patent Document 1).
  • a polygon mesh is constructed from an image acquired by scanning a face, the polygon mesh is re-parameterized, and a base mesh and a displacement image are calculated.
  • This displacement image is divided into a plurality of tiles, and the statistical value of each tile is measured.
  • the displacement image is deformed by changing the statistical value, and the deformed displacement image is combined with the base mesh to synthesize a new face.
  • An object of the present invention is to provide an aging prediction system, an aging prediction method, and an aging prediction program for efficiently and accurately performing an aging simulation of a face image.
  • To solve the above problems, an aging prediction system is provided that includes a model storage unit storing a shape aging model for predicting aging-related changes in face shape, a texture aging model for predicting aging-related changes in face texture, and a three-dimensional prediction model for predicting three-dimensional data from a two-dimensional image, and a control unit that is configured to be connected to an input unit and an output unit and that performs aging prediction.
  • The control unit acquires a prediction target image from the input unit, extracts feature points of the prediction target image, estimates the face orientation in the prediction target image using the extracted feature points, generates first three-dimensional data based on the three-dimensional prediction model and the estimated face orientation, generates second three-dimensional data from the first three-dimensional data using the shape aging model, applies the texture aging model to a two-dimensional image generated based on the first three-dimensional data to generate an aging texture, synthesizes the aging texture with the second three-dimensional data to generate an aging face model, and executes a prediction process of outputting the generated aging face model to the output unit.
  • Thereby, an aging face model is generated.
  • The control unit may be further configured to obtain a face orientation angle designated for output from the input unit, generate a two-dimensional face image at that face orientation angle using the generated aging face model, and output the generated two-dimensional face image to the output unit. As a result, an image of the aged face at the designated face orientation can be output.
  • In the aging prediction system, the two-dimensional image may be a first two-dimensional image, and the control unit may be further configured to generate a second two-dimensional image based on acquired three-dimensional face sample data, specify feature points in the second two-dimensional image, use the feature points to generate normalized sample data obtained by normalizing the three-dimensional face sample data, generate the shape aging model and the texture aging model, and execute a learning process for storing the generated shape aging model and the generated texture aging model in the model storage unit. Thereby, aging models can be generated based on actual sample data.
  • the learning process may include generating the three-dimensional prediction model using the normalized sample data and storing it in the model storage unit. Thereby, a three-dimensional prediction model can be generated based on actual sample data.
  • the model storage unit stores a first texture aging model calculated using principal component analysis and a second texture aging model calculated using wavelet transform.
  • The control unit may be further configured to specify the texture aging model to be applied according to a result of comparing a first wavelet coefficient, obtained by applying a wavelet transform to an image produced by applying the first texture aging model to the first two-dimensional image, with a second wavelet coefficient, obtained by applying the second texture aging model to the first two-dimensional image. This makes it possible to use the second texture aging model, which performs prediction based on existing spots, wrinkles, and the like, so that a more appropriate aging model can be generated.
  • In another aspect, there is provided an aging prediction system including a model storage unit that stores a shape aging model for predicting aging-related changes in face shape, a texture aging model for predicting aging-related changes in face texture, and a three-dimensional prediction model for predicting three-dimensional data from a two-dimensional image, and a control unit configured to be connected to an input unit and an output unit.
  • In this aspect as well, the control unit acquires a prediction target image from the input unit, extracts feature points of the prediction target image, estimates the face orientation in the prediction target image using the extracted feature points, generates first three-dimensional data based on the three-dimensional prediction model and the estimated face orientation, generates second three-dimensional data from the first three-dimensional data using the shape aging model, applies the texture aging model to a two-dimensional image generated based on the first three-dimensional data to generate an aging texture, synthesizes the aging texture with the second three-dimensional data to generate an aging face model, and executes a prediction process of outputting the generated aging face model to the output unit.
  • In yet another aspect, a non-transitory computer-readable storage medium is provided that stores an aging prediction program for use with a model storage unit storing a shape aging model for predicting aging-related changes in face shape, a texture aging model for predicting aging-related changes in face texture, and a three-dimensional prediction model for predicting three-dimensional data from a two-dimensional image.
  • When the program is executed, the control unit acquires a prediction target image from the input unit, extracts feature points of the prediction target image, estimates the face orientation in the prediction target image using the extracted feature points, generates first three-dimensional data based on the three-dimensional prediction model and the estimated face orientation, generates second three-dimensional data from the first three-dimensional data using the shape aging model, applies the texture aging model to the two-dimensional image generated based on the first three-dimensional data to generate an aging texture, synthesizes the aging texture with the second three-dimensional data to generate an aging face model, and executes a prediction process of outputting the generated aging face model to the output unit.
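  • The overall flow of this prediction process can be summarized in the following minimal sketch. All of the callables are hypothetical placeholders standing in for the models and processing units described in this embodiment; only the ordering of the steps follows the text.

        def predict_aged_face(image, extract_features, estimate_angle, predict_3d,
                              shape_aging_model, texture_aging_model, render_2d):
            # Hedged sketch of the prediction process, not the actual implementation.
            feature_points = extract_features(image)            # extract feature points
            angle = estimate_angle(feature_points)               # estimate face orientation
            mesh_3d = predict_3d(feature_points, image, angle)   # first three-dimensional data
            aged_mesh_3d = shape_aging_model(mesh_3d)            # second three-dimensional data
            texture_2d = render_2d(mesh_3d, image)               # 2D image from the first 3D data
            aged_texture = texture_aging_model(texture_2d)       # aging texture
            return aged_mesh_3d, aged_texture                    # synthesized into the aging face model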
  • Brief description of the drawings (excerpt): a flowchart of the processing procedure of the learning process for three-dimensional conversion of this embodiment; a flowchart of the processing procedure of the prediction verification process for three-dimensional conversion of this embodiment; an explanatory drawing of the verification results of the three-dimensional conversion of this embodiment, in which (a) is the input data, (b) is the correct answer, (c) is the face image predicted from the two-dimensional feature points only, and (d) is the face image predicted from the two-dimensional feature points and the image; a flowchart of the processing procedure of the prediction verification process for texture aging of this embodiment; and an explanatory drawing of each principal component in the prediction verification of texture aging of this embodiment.
  • In this embodiment, changes of the face due to aging are learned using three-dimensional face data acquired before and after aging.
  • An aging simulation that predicts an aged face image is then performed using a photographed two-dimensional face image.
  • an aging prediction system 20 is used.
  • An input unit 10 and an output unit 15 are connected to the aging prediction system 20.
  • the input unit 10 is a means for inputting various types of information, and includes a keyboard, a pointing device, an input interface for acquiring data from a recording medium, and the like.
  • the output unit 15 is a means for outputting various types of information, and includes a display or the like.
  • the aging prediction system 20 is a computer system for performing aging prediction processing.
  • the aging prediction system 20 includes a control unit 21, an aging data storage unit 22, a snapshot data storage unit 23, and a model storage unit 25.
  • The control unit 21 includes control means (CPU, RAM, ROM, etc.) and executes the processes described later (a learning management stage, a first learning stage, a second learning stage, a third learning stage, a principal component analysis stage, a machine learning stage, an aging management stage, a first processing stage, a second processing stage, a third processing stage, etc.).
  • For this purpose, the control unit 21 includes a learning management unit 210, a first learning unit 211, a second learning unit 212, a third learning unit 213, a principal component analysis unit 214a, a machine learning unit 214b, an aging management unit 215, a first processing unit 216, a second processing unit 217, and a third processing unit 218.
  • The learning management unit 210 executes a process of learning the age-related changes of the face used in the aging simulation. As will be described later, the learning management unit 210 includes centroid calculation means for calculating the centroid position of both eyes using face feature points. Furthermore, the learning management unit 210 holds data on a generic model (basic model), created in advance and stored in memory, that is used in the shape homology modeling process described later.
  • This generic model is a model related to the face that represents the general characteristics of Japanese people. In the present embodiment, a generic model having 10741 mesh points and 21256 polygons (triangles) is used. This generic model includes mesh points corresponding to face feature points, and identification information for specifying each face feature point is set.
  • The learning management unit 210 also stores a predetermined texture average calculation rule for calculating the average of the coordinates of each vertex of the normalized mesh model from the coordinates of the face feature points in the texture homology modeling process described later.
  • the learning management unit 210 includes a learning memory for recording a cylindrical coordinate system image, a cylindrical coordinate system coordinate, and a homologous model used for learning.
  • the first learning unit 211 executes a first learning process for creating a model for predicting three-dimensional face data represented by a three-dimensional cylindrical coordinate system from two-dimensional face data (face image).
  • The second learning unit 212 executes a second learning process for creating a model that predicts aging-related changes in the texture of a face image.
  • The third learning unit 213 executes a third learning process for creating a model that predicts aging-related changes in face shape.
  • the principal component analysis unit 214a performs principal component analysis processing in response to instructions from the learning management unit 210 and the learning units 211 to 213.
  • The machine learning unit 214b, in response to instructions from the learning units 211 to 213, executes a process of learning a model that predicts a dependent variable (the feature amount to be predicted) from explanatory variables (feature amounts used for prediction).
  • the aging management unit 215 executes processing for generating a face image after aging using a two-dimensional face image.
  • the aging management unit 215 acquires a two-dimensional face image to be predicted, and uses the first to third processing units 216 to 218 to perform an aging simulation of the texture and shape.
  • the first processing unit 216 executes processing for generating three-dimensional face data represented by a three-dimensional cylindrical coordinate system from the processing target two-dimensional face image.
  • the second processing unit 217 executes a process of predicting a change due to aging for the texture (texture) of the face image.
  • The texture aging process is executed using a texture aging model based on principal component analysis and a texture aging model based on wavelet transform.
  • the second processing unit 217 stores the weighting coefficient w used for this processing in the memory.
  • the weight coefficient w is a value for determining which model is to be emphasized when using a model using principal component analysis and a model using wavelet transform. In the present embodiment, “1” is used as the weighting coefficient w.
  • The third processing unit 218 executes a process of predicting aging-related changes in face shape using three-dimensional face data. Further, the aging data storage unit 22 records, as secular change data, three-dimensional face data captured before and after aging (an interval of 10 years in this embodiment) for a predetermined number of learning subjects (samples used for learning). By using this secular change data, changes before and after aging can be grasped. In this embodiment, data for about 170 people is used as the secular change data.
  • In the snapshot data storage unit 23, three-dimensional face data (snapshot data) obtained by photographing a larger number of learning subjects is recorded. This snapshot data is captured only once per subject, so no secular change data for these subjects is recorded in the snapshot data storage unit 23. In this embodiment, data for about 870 people is used as snapshot data.
  • the model storage unit 25 stores a model (an algorithm for calculating a result variable using an explanatory variable) used when performing various predictions in an aging simulation.
  • data related to an angle prediction model, a three-dimensional prediction model, a texture aging model, and a shape aging model are stored.
  • As the angle prediction model, model data for predicting the direction from which the face in the two-dimensional face image to be processed was photographed (the angle relative to the frontal face) is stored.
  • the angle prediction model data is calculated and recorded by the first learning unit 211.
  • As the three-dimensional prediction model, model data for converting a front-facing two-dimensional face image into three-dimensional face data is stored.
  • the three-dimensional prediction model data is calculated and recorded by the first learning unit 211.
  • As the texture aging model, model data for predicting the post-aging texture of the face is stored.
  • the texture aging model data is calculated and recorded by the second learning unit 212.
  • texture aging model data using principal component analysis and texture aging model data using wavelet transform are stored.
  • As the shape aging model, model data for predicting the post-aging shape of the face is stored. This shape aging model data is calculated and recorded by the third learning unit 213.
  • the control unit 21 of the aging prediction system 20 generates the cylindrical coordinate system image D2 and the cylindrical coordinate system coordinate data D3 using the three-dimensional face data D1 stored in the aging data storage unit 22.
  • the cylindrical coordinate system image D2 is two-dimensional image data created by projecting three-dimensional face data onto a cylindrical coordinate system and interpolating into a “900 ⁇ 900” equidistant mesh.
  • the cylindrical coordinate system coordinate data D3 is data relating to the three-dimensional coordinates of each point of the “900 ⁇ 900” image generated by projecting the three-dimensional face data onto the cylindrical coordinate system.
  • the control unit 21 generates face feature point data D4 using the cylindrical coordinate system image D2 and the cylindrical coordinate system coordinate data D3.
  • the face feature point data D4 is data relating to the coordinates of the face feature points in the cylindrical coordinate system image D2. Details of the facial feature points will be described later.
  • the control unit 21 uses the cylindrical coordinate system coordinate data D3 to normalize the face feature point data D4 and the cylindrical coordinate system image D2.
  • a normalized cylindrical coordinate system image D5, normalized face feature point data D6, and three-dimensional mesh data D7 (homology model) are generated. Details of this normalization processing will be described later.
  • The homology model is three-dimensional coordinate data in which the three-dimensional data of a face is expressed by a mesh and converted so that the vertices of corresponding meshes in different data sets lie at anatomically identical positions.
  • control unit 21 generates a two-dimensional face image D8 photographed from an arbitrary angle using the normalized cylindrical coordinate system image D5 and the three-dimensional mesh data D7 (homology model).
  • The control unit 21 then executes the first learning process for converting a two-dimensional face image into three-dimensional face data, using the two-dimensional face image D8 photographed from an arbitrary angle, the normalized face feature point data D6, and the three-dimensional mesh data D7 (homology model). Details of the first learning process will be described later.
  • The control unit 21 executes the second learning process for texture aging using the normalized cylindrical coordinate system image D5. Details of the second learning process will be described later. Moreover, the control unit 21 executes the third learning process for three-dimensional shape aging using the three-dimensional mesh data D7 (homology model). Details of the third learning process will be described later.
  • control unit 21 executes conversion processing into a cylindrical coordinate system (step S1-1). Details of this processing will be described later with reference to FIGS.
  • control unit 21 executes face feature point extraction processing (step S1-2). Details of this processing will be described later with reference to FIG.
  • control unit 21 executes normalization processing of face feature points (step S1-3). Details of this processing will be described later with reference to FIGS.
  • control unit 21 executes homology modeling processing (step S1-4). Here, a face shape homology model and a texture homology model are generated. Details of these processes will be described later with reference to FIGS. 9 and 10.
  • The control unit 21 executes a process for generating a normalized cylindrical coordinate system image (step S1-5). Details of this processing will be described later with reference to FIG. [Conversion processing to the cylindrical coordinate system] Next, the conversion processing to the cylindrical coordinate system (step S1-1) will be described with reference to FIGS.
  • the learning management unit 210 of the control unit 21 performs a missing portion interpolation process (step S2-1). Specifically, the learning management unit 210 checks whether there is a missing part in the three-dimensional face data. When the missing part is detected, the learning management unit 210 performs interpolation of the missing part using the peripheral information of the missing part. Specifically, the missing portion is compensated using a known interpolation method based on a predetermined range of data adjacent to the periphery of the missing portion.
  • The three-dimensional face data shown in FIG. 5B is represented in a cylindrical coordinate system, with the radius and the angle around the cylinder axis as the two axes.
  • In this three-dimensional face data, some data around the chin and the ears are missing.
  • The images of these portions are generated by interpolation processing using the surrounding image information, and the missing parts are thereby compensated.
  • the learning management unit 210 of the control unit 21 executes a cylindrical coordinate system image generation process (step S2-2). Specifically, the learning management unit 210 projects the three-dimensional face data that has undergone the missing portion interpolation processing onto a cylindrical coordinate system (two-dimensional mapping). The learning management unit 210 interpolates the projected face image data into a “900 ⁇ 900” equally spaced mesh to generate a two-dimensional face image in a cylindrical coordinate system. The learning management unit 210 records the generated two-dimensional face image in the learning memory as a cylindrical coordinate system image D2.
  • FIG. 5C is a two-dimensional face image in which the three-dimensional face data projected onto the cylindrical coordinate system is represented by two axes (cylinder height and circumferential angle).
  • Next, the learning management unit 210 of the control unit 21 executes the cylindrical coordinate system coordinate generation process (step S2-3). Specifically, the learning management unit 210 projects each coordinate (X, Y, Z) of the three-dimensional face data onto the cylindrical coordinate system and generates, for each point of the "900 × 900" image described above, its coordinate data (radial direction, angle, height). The learning management unit 210 records the generated coordinate data in the learning memory as cylindrical coordinate system coordinate data D3.
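  • As a concrete illustration of steps S2-2 and S2-3, the following is a minimal numpy/scipy sketch of projecting three-dimensional face points onto a cylindrical coordinate system and resampling them onto an equally spaced 900 × 900 grid. It assumes the cylinder axis is aligned with the vertical (Y) axis of the head; the function name, the hole filling by nearest-neighbour interpolation, and the choice of per-point values are illustrative assumptions, not part of the embodiment.

        import numpy as np
        from scipy.interpolate import griddata

        def to_cylindrical_image(points_xyz, values, size=900):
            # points_xyz: (N, 3) array of X, Y, Z face coordinates (Y assumed vertical)
            # values:     (N,) per-point values to rasterize (e.g. gray level)
            x, y, z = points_xyz.T
            theta = np.arctan2(x, z)           # angle around the cylinder axis
            height = y                         # height along the cylinder axis
            radius = np.sqrt(x**2 + z**2)      # kept as cylindrical coordinate data (D3)

            # Equally spaced (angle, height) grid, as in the 900 x 900 mesh of the text.
            ti = np.linspace(theta.min(), theta.max(), size)
            hi = np.linspace(height.min(), height.max(), size)
            T, H = np.meshgrid(ti, hi)

            img = griddata((theta, height), values, (T, H), method="linear")
            holes = np.isnan(img)              # fill remaining gaps (cf. step S2-1)
            img[holes] = griddata((theta, height), values,
                                  (T[holes], H[holes]), method="nearest")
            coords = griddata((theta, height), radius, (T, H), method="nearest")
            return img, coords                 # cylindrical image (D2) and coordinates (D3)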
  • A face feature point is a characteristic position in the face parts constituting the face (eyebrows, eyes, nose, mouth, ears, cheeks, lower jaw, etc.), such as the outer end of an eyebrow, the inner end of an eyebrow, or a mouth corner point.
  • 33 face feature points are used.
  • the face feature points may be arbitrarily added / deleted by the user. In this case, processing described later is performed using the added / deleted face feature points.
  • FIG. 6 shows the facial feature points (32) used in this embodiment with numbers.
  • the feature point number “33” is calculated as “the midpoint of a straight line connecting the center of gravity of the feature points of both eyes and the nose vertex” using the other feature points.
  • the learning management unit 210 of the control unit 21 specifies face feature points from the generated cylindrical coordinate system coordinates, and calculates the position (coordinates).
  • automatic extraction is performed using a well-known AAM (Active Appearance Models) method used for facial expression tracking, facial recognition, and the like.
  • In the AAM (Active Appearance Models) method, the shape and appearance of a target object (here, a face) are modeled statistically, and this model is fitted to an input image to extract the feature points of the target object.
  • the learning management unit 210 displays a face image in which the extracted face feature points are associated with the extraction position on the output unit 15.
  • the position of each face feature point is movably arranged.
  • the person in charge confirms the position of the facial feature point on the facial image displayed on the output unit 15 and corrects it if necessary.
  • When an input indicating confirmation or completion of correction of the face feature points is made on the face image, the learning management unit 210 associates the cylindrical coordinate system coordinates of each face feature point with the number of that face feature point, generates the face feature point data D4, and stores it in the learning memory.
  • Next, the face feature point normalization process (step S1-3) will be described.
  • the learning management unit 210 of the control unit 21 executes normalization processing by multiple regression analysis using the extracted face feature point data (step S3-1).
  • Here, rotations about the X, Y, and Z axes are determined from the face feature points by multiple regression analysis, and the face orientation is thereby aligned.
  • the size of the face is normalized so that the distance between the centers of the eyes becomes a predetermined value (64 mm in this embodiment). Details of this processing will be described later with reference to FIG.
  • the learning management unit 210 of the control unit 21 executes an average feature point calculation process (step S3-2). Specifically, the learning management unit 210 calculates the average coordinates of each feature point using the coordinates of 33 face feature point data for each learning target person. Thereby, the coordinates of each feature point in the average face of the learning subject are calculated.
  • Next, the learning management unit 210 of the control unit 21 executes normalization processing by Procrustes analysis (step S3-3). Specifically, the learning management unit 210 translates, rotates, and scales each face so as to minimize, by the least squares method, the sum of squared distances between the average coordinates calculated in step S3-2 and the corresponding face feature points. In this case, 25 points excluding the tragus points (feature point numbers 7 and 13), the mandibular angle points (feature point numbers 8 and 12), and the like are used (feature point numbers 1 to 6, 10, 14 to 22, 24 to 27, and 28 to 32 in FIG. 6). As a result, the face feature points are adjusted to be close to those of the average face. A minimal sketch of such an alignment is shown below.
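  • The following numpy sketch assumes ordinary (similarity) Procrustes analysis: a translation, rotation, and uniform scale that minimize the squared distances to the average feature points. The function name and the reflection handling are illustrative, not taken from the embodiment.

        import numpy as np

        def procrustes_align(points, mean_points):
            # points, mean_points: (K, 3) arrays of corresponding feature points
            # (here K would be the 25 points excluding tragus and mandibular angle points)
            mu_p, mu_m = points.mean(axis=0), mean_points.mean(axis=0)
            P, M = points - mu_p, mean_points - mu_m

            # Optimal rotation from the SVD of the cross-covariance matrix.
            U, S, Vt = np.linalg.svd(P.T @ M)
            if np.linalg.det(U @ Vt) < 0:      # avoid a reflection
                U[:, -1] *= -1
                S[-1] *= -1
            R = U @ Vt

            scale = S.sum() / (P ** 2).sum()   # least-squares scale factor
            return scale * (P @ R) + mu_m      # aligned feature points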
  • Next, the normalization processing by multiple regression analysis (step S3-1) will be described.
  • the learning management unit 210 of the control unit 21 executes the process of specifying the center of gravity of the feature points of both eyes (step S4-1). Specifically, the learning management unit 210 identifies facial feature points related to eyes in the facial feature point data. Next, the learning management unit 210 calculates the position of the center of gravity of both eyes using the center of gravity calculating means for the coordinates of the extracted face feature points. The learning management unit 210 specifies the calculated position of the center of gravity of both eyes as the origin.
  • Next, the learning management unit 210 of the control unit 21 executes tilt correction around the X axis and the Y axis (step S4-2). Specifically, with the centroid position of both eyes as the origin, the learning management unit 210 performs multiple regression analysis on the set of face feature points excluding the face outline and the nose vertex, using the Z coordinate as the objective variable and the X and Y coordinates as the explanatory variables.
  • a regression plane RPS is calculated by multiple regression analysis.
  • the learning management unit 210 rotates the regression plane RPS about the X and Y axes so that the calculated normal line NV of the regression plane RPS is parallel to the Z axis.
  • the learning management unit 210 of the control unit 21 executes an inclination correction process around the Z axis (step S4-3). Specifically, the learning management unit 210 uses a set of face feature points for calculating a face centerline. In this embodiment, a set of the center of gravity of the eyes, the apex of the nose, the lower end of the nose, the upper end of the upper lip, the lower end of the lower lip, and the chin tip coordinates are used as the facial feature points.
  • a regression line RL is calculated for this set using the Y coordinate as an objective variable and the X coordinate as an explanatory variable.
  • the learning management unit 210 rotates around the Z axis so that the calculated slope of the regression line RL is parallel to the Y axis.
  • the learning management unit 210 of the control unit 21 executes scaling processing (step S4-4). Specifically, the learning management unit 210 performs enlargement or reduction so that the distance between the centers of the eyes is 64 mm.
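  • Steps S4-1 to S4-4 can be sketched with numpy as follows. The index arguments, the regression of X on Y for the midline points (a numerically stable equivalent of making the regression line parallel to the Y axis), and the use of two eye-centre points for the 64 mm scaling are assumptions made for illustration.

        import numpy as np

        def normalize_pose(pts, eye_idx, midline_idx, plane_idx, eye_dist=64.0):
            pts = pts.copy()

            # S4-1: move the centroid of the eye feature points to the origin.
            pts -= pts[eye_idx].mean(axis=0)

            # S4-2: regression plane z = a*x + b*y + c; rotate so that its
            # normal becomes parallel to the Z axis (tilt around X and Y).
            A = np.c_[pts[plane_idx, 0], pts[plane_idx, 1], np.ones(len(plane_idx))]
            a, b, _ = np.linalg.lstsq(A, pts[plane_idx, 2], rcond=None)[0]
            n = np.array([-a, -b, 1.0]); n /= np.linalg.norm(n)
            v = np.cross(n, [0.0, 0.0, 1.0]); c = n[2]
            K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
            R = np.eye(3) + K + K @ K / (1.0 + c)       # Rodrigues' rotation formula
            pts = pts @ R.T

            # S4-3: regression line through the midline feature points;
            # rotate about Z so the face centreline is parallel to the Y axis.
            B = np.c_[pts[midline_idx, 1], np.ones(len(midline_idx))]
            m, _ = np.linalg.lstsq(B, pts[midline_idx, 0], rcond=None)[0]
            ang = -np.arctan(m)
            Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                           [np.sin(ang),  np.cos(ang), 0.0],
                           [0.0, 0.0, 1.0]])
            pts = pts @ Rz.T

            # S4-4: scale so the distance between the two eye centres is 64 mm.
            d = np.linalg.norm(pts[eye_idx[0]] - pts[eye_idx[1]])
            return pts * (eye_dist / d)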
  • Next, the learning management unit 210 of the control unit 21 executes a process of matching the face feature point coordinates (step S5-1). Specifically, the learning management unit 210 uses the mesh point identification information of the generic model stored in memory to match the coordinates of each normalized face feature point to the mesh point identified as that face feature point.
  • the learning management unit 210 of the control unit 21 executes a shape fitting process (step S5-2). Specifically, the learning management unit 210 matches the shape of each facial part in the generic model in which each facial feature point is matched with the normalized shape of each facial part.
  • the learning management unit 210 of the control unit 21 executes a triangulation process (step S5-3). Specifically, the learning management unit 210 calculates the coordinates of each mesh point corresponding to each polygon (triangle) of the generic model in the normalized shape of each face part matched with the shape of the generic model ( The shape homology model) is stored in the learning memory.
  • Here, the number of mesh points can be reduced by using a mesh model in which the mesh around face parts such as the eyes, nose, and mouth is made fine while the mesh in the other areas is made coarser. Note that the forehead portion is deleted because the presence of bangs adversely affects the statistical processing.
  • the learning management unit 210 of the control unit 21 executes an average calculation process for the coordinates of each vertex of the normalized mesh model (step S6-1). Specifically, the learning management unit 210 calculates the average coordinates of each mesh point (vertex) from the normalized coordinates of each face feature point using a texture average calculation rule stored in advance.
  • Next, the learning management unit 210 of the control unit 21 executes a process of warping the texture on the two-dimensional polygons in the cylindrical coordinate system onto the averaged two-dimensional polygons (step S6-2). Specifically, the learning management unit 210 warps the texture (pixel information) on each polygon of the two-dimensional face data in the cylindrical coordinate system to the average coordinates calculated in step S6-1 and stores the texture at the warped average coordinates in the learning memory.
  • The learning management unit 210 of the control unit 21 then averages the textures at each set of average coordinates, thereby calculating a texture homology model, and stores it in the learning memory.
  • FIG. 10B shows an average face obtained by averaging textures transformed into average coordinates of each mesh model.
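  • The per-subject warp of step S6-2 can be sketched with a piecewise affine transform, for example using scikit-image as below; the library choice and function name are assumptions, not part of the embodiment.

        import numpy as np
        from skimage.transform import PiecewiseAffineTransform, warp

        def warp_texture_to_average(image, src_mesh, avg_mesh):
            # image:    cylindrical coordinate system image of one subject
            # src_mesh: (K, 2) mesh vertex coordinates (x, y) of this subject
            # avg_mesh: (K, 2) averaged mesh vertex coordinates from step S6-1
            tform = PiecewiseAffineTransform()
            # warp() pulls pixels through the inverse map, so estimate average -> source.
            tform.estimate(avg_mesh, src_mesh)
            return warp(image, tform, output_shape=image.shape[:2])

        # Averaging the warped textures of all subjects then yields the texture
        # homology model (the average face of FIG. 10B):
        # texture_model = np.mean([warp_texture_to_average(img, m, avg_mesh)
        #                          for img, m in zip(images, meshes)], axis=0)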
  • Next, the process for generating a normalized cylindrical coordinate system image (step S1-5) will be described.
  • the cylindrical coordinate system image generated in step S2-2 cannot be analyzed as it is because the position of the face parts (eyes, nose, mouth, etc.) differs for each data. Therefore, the cylindrical coordinate system image is normalized so that the positions of the face parts of each data are aligned.
  • This image normalization mesh model uses 33 face feature points and pastes the mesh in a grid pattern on a cylindrical coordinate system.
  • a mesh model having 5588 mesh points and 10862 polygons (triangles) is used.
  • the learning management unit 210 of the control unit 21 calculates the average value of the image normalized mesh model and the average value of the texture of each polygon for all data.
  • An average face is generated from the calculated average texture.
  • In each normalized image, the mesh constituting the face matches the mesh of this average face; the cylindrical coordinate system image is thereby normalized.
  • the first learning unit 211 of the control unit 21 repeatedly executes the following processing for each predetermined processing target angle.
  • the first learning unit 211 of the control unit 21 performs a rotation process to a specified angle (step S7-1). Specifically, the first learning unit 211 rotates the three-dimensional homologous model to a predetermined target angle.
  • the first learning unit 211 stores the rotation angle when rotated to the predetermined target angle in the learning memory.
  • the first learning unit 211 of the control unit 21 executes a conversion process from the three-dimensional homologous model to the two-dimensional homologous model (step S7-2). Specifically, the learning management unit 210 generates a two-dimensional homology model by projecting the rotated three-dimensional homology model onto the XY plane.
  • the first learning unit 211 of the control unit 21 executes a two-dimensional feature point specifying process (step S7-3). Specifically, the first learning unit 211 of the control unit 21 specifies coordinates corresponding to face feature points in the three-dimensional homology model in the calculated two-dimensional homology model. The first learning unit 211 stores the identified face feature points in the learning memory as two-dimensional feature points.
  • the first learning unit 211 of the control unit 21 performs a process of excluding feature points hidden behind the face (step S7-4).
  • Specifically, among the feature points of the two-dimensional homology model, the first learning unit 211 identifies the face feature points that lie on the photographing side (viewpoint side) of the three-dimensional homology model and those that lie on its far side. The first learning unit 211 deletes the far-side face feature points from the learning memory and stores only the two-dimensional feature points on the photographing side.
  • the above processing is executed by repeating a loop for each angle to be processed.
  • Next, the machine learning unit 214b of the control unit 21 executes the machine learning process (step S7-6). Specifically, the machine learning unit 214b uses the rotation angles as the dependent variables (the features to be predicted) and, as the explanatory variables (the features used for prediction), the principal component scores of the two-dimensional feature points of all data divided by their standard deviations. The machine learning unit 214b executes the machine learning process using these dependent and explanatory variables. The first learning unit 211 records the angle prediction model generated by the machine learning unit 214b in the model storage unit 25.
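  • A minimal sketch of such an angle prediction model, using PCA scores of the two-dimensional feature points divided by their standard deviation as explanatory variables, is shown below. Plain least-squares regression stands in for the stepwise variable selection described later, and the number of components is an illustrative assumption.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        def learn_angle_model(feature_points_2d, rotation_angles, n_components=20):
            # feature_points_2d: (N, 2K) flattened 2D feature points of all data
            # rotation_angles:   (N, 2) rotation angles used to render each sample
            pca = PCA(n_components=n_components)
            scores = pca.fit_transform(feature_points_2d)
            std = scores.std(axis=0)
            reg = LinearRegression().fit(scores / std, rotation_angles)
            return pca, std, reg

        def predict_angle(model, feature_points_2d):
            pca, std, reg = model
            return reg.predict(pca.transform(feature_points_2d) / std)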
  • [Machine learning process] Next, the machine learning process will be described with reference to FIGS. 13 and 14.
  • In the machine learning process, a feature vector y (the prediction target, that is, the dependent variable) is predicted from another feature vector x (the features used for prediction, that is, the explanatory variables).
  • Here, a model for predicting y_(i,j) from x_(s(n),j) is obtained by learning the relationship between y and x using multiple regression analysis: y_(i,j) = sum_n a_(i,s(n)) * x_(s(n),j) + b_i, where j is the data index, i is the index of the dependent variable, and s(n) is the index of the n-th selected explanatory variable. Specifically, the coefficients a_(i,s(n)) and the intercepts b_i in this equation are calculated.
  • For the variable selection in this multiple regression analysis, a variable addition/removal method (the "stepwise method") is used.
  • First, the machine learning unit 214b of the control unit 21 executes an initial value setting process (step S8-1). Specifically, the machine learning unit 214b initializes the minimum value (bic_min) of the Bayesian information criterion stored in memory to a very large value and resets the variable set (select_id) to an empty set.
  • Next, the machine learning unit 214b of the control unit 21 executes a process of setting the current Bayesian information criterion (step S8-2). Specifically, the machine learning unit 214b assigns the minimum value (bic_min) of the Bayesian information criterion to the current minimum value (bic_min_pre).
  • The following iteration is then performed for each principal component number i selected as a processing target. In this iteration, it is determined whether the principal component number i being processed is a component to be added to the variable set (an addition candidate component).
  • First, the machine learning unit 214b of the control unit 21 determines whether the correlation between the principal component number i and the variable set (select_id) is smaller than the maximum correlation coefficient Cmax (step S8-3). Specifically, the machine learning unit 214b of the control unit 21 calculates the correlation coefficients between the principal component number i to be processed and the variables in the variable set (select_id) stored in memory, and compares them with the maximum correlation coefficient Cmax.
  • When this condition is satisfied ("YES" in step S8-3), the machine learning unit 214b of the control unit 21 adds the addition candidate component to the variable set (select_id) on a trial basis, performs multiple regression analysis using the resulting variables, and calculates the Bayesian information criterion and the t value of the added variable (step S8-4).
  • Specifically, the machine learning unit 214b of the control unit 21 calculates the Bayesian information criterion by multiple regression analysis using the variable set stored in memory with the principal component number i to be processed added to it. The t value is calculated by a known t test.
  • Next, the machine learning unit 214b of the control unit 21 determines whether the Bayesian information criterion and the t value satisfy the conditions (step S8-5).
  • Here, the conditions are satisfied when the calculated Bayesian information criterion improves on (is smaller than) the minimum value of the Bayesian information criterion and the t value is "2" or more.
  • When the conditions are satisfied, the machine learning unit 214b of the control unit 21 executes a process of updating the minimum value of the Bayesian information criterion and the addition candidate principal component number (step S8-6). Specifically, the machine learning unit 214b of the control unit 21 assigns the calculated Bayesian information criterion (bic) to the minimum value (bic_min) of the Bayesian information criterion, and stores the principal component number i being processed as the addition candidate component (add_id).
  • When the correlation between the principal component number i and the variable set (select_id) is greater than or equal to the maximum correlation coefficient Cmax ("NO" in step S8-3), the machine learning unit 214b of the control unit 21 skips steps S8-4 to S8-6.
  • If either the Bayesian information criterion or the t value does not satisfy its condition ("NO" in step S8-5), the machine learning unit 214b of the control unit 21 skips step S8-6.
  • After the iteration over the candidate principal components, the machine learning unit 214b of the control unit 21 determines whether the minimum value of the Bayesian information criterion has been updated (step S8-7). Specifically, the machine learning unit 214b determines whether the minimum value (bic_min) of the Bayesian information criterion matches the current minimum value (bic_min_pre) set in step S8-2. If they match, the machine learning unit 214b determines that the minimum value has not been updated; if they do not match, it determines that the minimum value has been updated.
  • When the minimum value of the Bayesian information criterion has been updated ("YES" in step S8-7), the machine learning unit 214b of the control unit 21 executes the variable update process (step S8-8). Details of this variable update process will be described with reference to FIG.
  • the machine learning unit 214b of the control unit 21 executes a process for determining whether or not the variable update is successful (step S8-9). Specifically, the machine learning unit 214b determines the variable update success based on the flags (variable update success flag, variable update failure flag) recorded in the memory in the variable update process described later.
  • When the variable update success flag is recorded in memory and it is determined that the variable update has succeeded ("YES" in step S8-9), the machine learning unit 214b of the control unit 21 repeats the processing from step S8-2 onward.
  • Otherwise, the stepwise selection ends, and the variable set (select_id) at that point is specified as the explanatory variables.
  • Next, the variable update process (step S8-8) will be described with reference to FIG.
  • the machine learning unit 214b executes a new variable set setting process (step S9-1). Specifically, the machine learning unit 214b adds a component to be added (add_id) to the variable set stored in the memory to generate a new variable set (select_id_new).
  • the machine learning unit 214b repeats the following steps S9-2 to S9-4 in an infinite loop.
  • the machine learning unit 214b performs a multiple regression analysis using a new variable set (select_id_new), and executes a Bayes information criterion (bic) and t value calculation processing for all variables (step S9-2).
  • the machine learning unit 214b calculates a Bayes information criterion by multiple regression analysis using a new variable set. Further, the t value is calculated by a known t test.
  • The machine learning unit 214b determines whether the minimum t value among the t values of the variables in the new variable set is smaller than 2 (step S9-3).
  • If it is, the machine learning unit 214b deletes the variable having the minimum t value from the new variable set (step S9-4). Specifically, the machine learning unit 214b removes from the new variable set the variable for which the minimum t value was calculated.
  • The processing from step S9-2 onward is then repeated. When the minimum t value is 2 or more, the loop ends.
  • the machine learning unit 214b executes a process of determining whether or not the Bayes information amount criterion is smaller than the current minimum value of the Bayes information amount criterion (step S9-5). Specifically, the machine learning unit 214b determines whether or not the Bayes information amount criterion (bic) is smaller than the current minimum value (bic_min_pre) of the Bayes information amount criterion set in step S8-2.
  • If it is smaller, the machine learning unit 214b executes a variable update success process (step S9-6). Specifically, the machine learning unit 214b substitutes the new variable set (select_id_new) for the variable set (select_id) and records a variable update success flag in memory.
  • Otherwise, the machine learning unit 214b executes a variable update failure process (step S9-7). Specifically, the machine learning unit 214b records a variable update failure flag in memory.
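  • A compact sketch of this stepwise selection, using ordinary least squares from statsmodels for the BIC and t values, is given below. Details such as applying the Cmax test to the maximum absolute correlation with the already selected variables are assumptions; the function name is illustrative, while the thresholds 0.15 and 2 follow the text.

        import numpy as np
        import statsmodels.api as sm

        def stepwise_select(X, y, c_max=0.15, t_min=2.0):
            # X: (N, P) candidate explanatory variables (e.g. PCA scores / std)
            # y: (N,) dependent variable
            selected, bic_min = [], np.inf
            while True:
                bic_min_pre, add_id = bic_min, None
                for i in range(X.shape[1]):                      # loop of steps S8-3 to S8-6
                    if i in selected:
                        continue
                    if selected and max(abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
                                        for j in selected) >= c_max:
                        continue                                 # too correlated (step S8-3)
                    res = sm.OLS(y, sm.add_constant(X[:, selected + [i]])).fit()
                    if res.bic < bic_min and abs(res.tvalues[-1]) >= t_min:
                        bic_min, add_id = res.bic, i             # step S8-6
                if add_id is None:                               # no update (step S8-7 "NO")
                    return selected
                # Variable update (steps S9-1 to S9-7): add the candidate,
                # then prune variables whose t value falls below 2.
                new_set = selected + [add_id]
                while True:
                    res = sm.OLS(y, sm.add_constant(X[:, new_set])).fit()
                    tvals = np.abs(res.tvalues[1:])              # skip the intercept
                    if tvals.size == 0 or tvals.min() >= t_min:
                        break
                    new_set.pop(int(tvals.argmin()))             # step S9-4
                if res.bic < bic_min_pre:
                    selected, bic_min = new_set, res.bic         # update success
                else:
                    return selected                              # update failure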
  • The principal component analysis unit 214a of the control unit 21 performs the three-dimensional shape principal component analysis process in advance using the three-dimensional mesh (homology model) (step S10-1). Specifically, the principal component analysis unit 214a performs principal component analysis on the three-dimensional mesh points of each data item. As a result, each set of three-dimensional mesh points is expressed by an average value, principal component scores, and principal component vectors (that is, as the average mesh plus a weighted sum of the principal component vectors, with the principal component scores as the weights).
  • the first learning unit 211 of the control unit 21 executes the rotation process to the specified angle (step S10-2). Specifically, the first learning unit 211 displays a screen for designating the rotation angle on the output unit 15. Here, for example, the front (0 degree) and the side (90 degrees) are designated.
  • the rotation angle is input, the first learning unit 211 rotates the three-dimensional homologous model according to the input rotation angle.
  • the first learning unit 211 of the control unit 21 executes a process for generating a two-dimensional homologous model from the three-dimensional homologous model (step S10-3). Specifically, the first learning unit 211 generates a two-dimensional homology model by projecting the rotated three-dimensional homology model onto a two-dimensional plane (XY plane).
  • the first learning unit 211 of the control unit 21 executes a two-dimensional image generation process (step S10-4).
  • Here, a grayscale image of the two-dimensional homology model is used.
  • the first learning unit 211 generates a grayed image based on the luminance in each mesh of the generated two-dimensional homologous model.
  • Next, the principal component analysis unit 214a of the control unit 21 executes the principal component analysis process of the two-dimensional image (step S10-5). Specifically, the principal component analysis unit 214a performs principal component analysis on the two-dimensional image generated in step S10-4 and expresses it in the same form (an average value, principal component scores, and principal component vectors).
  • the first learning unit 211 of the control unit 21 executes a two-dimensional feature point specifying process (step S10-6). Specifically, the first learning unit 211 specifies the coordinates of the facial feature points in the calculated two-dimensional homology model, as in step S7-3. The first learning unit 211 stores the specified coordinates in the memory.
  • the first learning unit 211 of the control unit 21 executes a feature point exclusion process that hides behind the face (step S10-7). Specifically, the first learning unit 211 deletes the face feature points hidden behind the face from the memory, as in step S7-4.
  • the principal component analysis unit 214a of the control unit 21 executes a principal component analysis process of the two-dimensional feature points (step S10-8). Specifically, the principal component analysis unit 214a executes principal component analysis processing using the facial feature points stored in the memory, as in step S7-5.
  • the machine learning unit 214b of the control unit 21 executes machine learning processing in the same manner as in step S7-6 (step S10-9). Specifically, the machine learning unit 214b executes machine learning processing using the dependent variable and the explanatory variable.
  • Here, the principal component scores of the three-dimensional mesh points divided by their standard deviations are used as the dependent variables.
  • The principal component scores of the two-dimensional feature points of all data and of the two-dimensional images of all data, each divided by its standard deviation, are used as the explanatory variables.
  • the first learning unit 211 records the three-dimensional prediction model generated by the machine learning unit 214b in the model storage unit 25.
  • For the two-dimensional feature points, the maximum correlation coefficient remains approximately 0.2 or more up to about the 100th principal component, but drops to values showing little correlation beyond about the 200th principal component.
  • For the two-dimensional images, the correlation tends to be smaller than for the two-dimensional feature points, but a maximum correlation coefficient of about 0.1 is still maintained for the 200th and subsequent principal components.
  • the F value is a parameter indicating the validity of the model.
  • The t value is a parameter indicating the validity of each variable. It is considered desirable that the F value and the t value each be "2" or more, and it was found that values of "2" or more are secured for every component.
  • the coefficient of determination is a parameter indicating the explanatory power of the model, and the value indicates the rate at which the model explains the prediction target data. Specifically, when the value is “1”, it is “all predictable”, and when the value is “0”, it indicates “not predicted at all”.
  • When only the two-dimensional feature points are used, the coefficient of determination is approximately 50% or more for the principal components up to about the 40th, but falls below 20% for the principal components around the 100th.
  • the coefficient of determination was ensured to be approximately 50% or more for the principal components up to the 50th, and exceeded 20% even for the principal component near the 100th. As a result, it can be seen that the accuracy in the case of using the two-dimensional feature point and the image is improved as compared with the case of only the two-dimensional feature point.
  • the maximum correlation coefficient Cmax is set to “0.15” as a variable selection criterion.
  • Next, validity verification processing is executed for the prediction model data used for conversion from two dimensions to three dimensions.
  • the first learning unit 211 of the control unit 21 executes a process of creating a prediction model machine-learned with the remaining [n-1] human data excluding the processing target data j (step S11-1). Specifically, the first learning unit 211 generates the three-dimensional conversion model by executing the first learning process described above using the data of [n ⁇ 1] people.
  • the first learning unit 211 of the control unit 21 performs an estimation process using a prediction model for the three-dimensional mesh point of the processing target data j (step S11-2). Specifically, the first learning unit 211 uses the processing target data j as input data and applies the generated three-dimensional conversion model to calculate a three-dimensional mesh point.
  • Next, the first learning unit 211 of the control unit 21 performs a comparison process between the three-dimensional data (correct answer) of the processing target data j and the estimated result (step S11-3). Specifically, the first learning unit 211 compares the three-dimensional face data estimated in step S11-2 with the three-dimensional face data of the processing target data j, and records the displacement of each point of the three-dimensional mesh in memory. In this case, the average prediction error of the principal component scores was less than "0.22". Since the variance of the principal component scores is normalized to "1", it can be seen that the estimation is performed with high accuracy.
  • The average prediction error of the three-dimensional mesh points is "1.546391 mm" when only the two-dimensional feature points are used, and "1.477514 mm" when the two-dimensional feature points and the two-dimensional image are used together.
  • Thus, prediction using the two-dimensional feature points together with the two-dimensional image improves accuracy compared with prediction using only the two-dimensional feature points.
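  • The leave-one-out verification of steps S11-1 to S11-3 can be sketched as follows; learn and predict are placeholders for the first learning process and the three-dimensional prediction described in the text.

        import numpy as np

        def leave_one_out_error(samples_2d, samples_3d, learn, predict):
            # samples_2d[j]: 2D data of subject j; samples_3d[j]: (K, 3) ground-truth mesh
            errors = []
            for j in range(len(samples_2d)):
                train_2d = [s for k, s in enumerate(samples_2d) if k != j]
                train_3d = [s for k, s in enumerate(samples_3d) if k != j]
                model = learn(train_2d, train_3d)            # step S11-1: learn on n-1 subjects
                estimate = predict(model, samples_2d[j])     # step S11-2: estimate subject j
                # step S11-3: mean displacement of the 3D mesh points (in mm)
                errors.append(np.linalg.norm(estimate - samples_3d[j], axis=1).mean())
            return float(np.mean(errors))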
  • FIG. 17A shows a two-dimensional face image (input data) before aging.
  • FIG. 17B is a face image (correct answer) after 10 years of the face image shown in FIG.
  • FIG. 17C shows an aging face image predicted using only the two-dimensional feature points of the face image shown in FIG.
  • FIG. 17D is an aging face image predicted using the two-dimensional feature points and images of the face image shown in FIG.
  • Next, the second learning process for texture aging will be described with reference to FIG.
  • In this process, a texture aging process using principal component analysis and a texture aging process using wavelet transform are executed.
  • The texture aging process using principal component analysis is described first, followed by the texture aging process using wavelet transform.
  • [Texture aging process using principal component analysis] Using the normalized cylindrical coordinate system image generated in step S1-5, a model for predicting texture changes due to aging in the three-dimensional face data is calculated by machine learning.
  • the control unit 21 executes principal component analysis processing of the normalized cylindrical coordinate system image (step S12-1). Specifically, the second learning unit 212 of the control unit 21 acquires aging data from the aging data storage unit 22 and snapshot data from the snapshot data storage unit 23.
  • The principal component analysis unit 214a of the control unit 21 performs principal component analysis on the cylindrical coordinate system images (the cylindrical coordinate system images of the acquired secular change data and snapshot data). In this case, the principal component analysis unit 214a determines the directions of the principal component vectors using the snapshot data together with the pre-aging (or post-aging) secular change data, and calculates the principal component scores for the secular change data.
  • Each data item is thereby expressed by an average value, principal component scores, and principal component vectors, in the same form as described above, where j denotes the index of the secular change data.
  • control unit 21 executes machine learning processing as in step S7-6 (step S12-2).
  • Specifically, the machine learning unit 214b of the control unit 21 uses the difference vector of the texture principal component scores due to aging, normalized per unit year, as the dependent variable, and the pre-aging texture principal component scores as the explanatory variables.
  • the second learning unit 212 records the texture aging model generated by the machine learning unit 214b in the model storage unit 25.
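  • A minimal sketch of this learning step and of applying the resulting model is shown below. The use of 35 principal components follows the cumulative-contribution discussion in the text; the PCA basis fitted on the combined image sets, the plain least-squares regression in place of the stepwise selection, and the function names are assumptions.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        def learn_texture_aging_model(pre_imgs, post_imgs, years, snapshot_imgs,
                                      n_components=35):
            # pre_imgs, post_imgs: (M, H*W) flattened normalized cylindrical images
            #                      of the secular change data before and after aging
            # years:               elapsed years between the two acquisitions (10 here)
            # snapshot_imgs:       (S, H*W) additional images used for the PCA basis
            pca = PCA(n_components=n_components)
            pca.fit(np.vstack([snapshot_imgs, pre_imgs]))

            pre_scores = pca.transform(pre_imgs)
            post_scores = pca.transform(post_imgs)
            delta_per_year = (post_scores - pre_scores) / years   # normalized per unit year

            reg = LinearRegression().fit(pre_scores, delta_per_year)
            return pca, reg

        def age_texture(pca, reg, image, years=10):
            # Advance the principal component scores by the predicted yearly change
            # and reconstruct the aged texture.
            scores = pca.transform(image.reshape(1, -1))
            aged = scores + years * reg.predict(scores)
            return pca.inverse_transform(aged).reshape(image.shape)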
  • The cumulative contribution ratio up to the 35th principal component calculated in the texture aging using principal component analysis shown in FIG. 18 exceeds 95%, and the contribution ratio of each of the 25th and subsequent principal components is less than 0.1%.
  • Each principal component is shown in FIG. It can be seen that the principal components with lower contribution ratios correspond to higher frequency components.
  • FIG. 21 shows an image reconstructed by changing the upper limit of the principal component of two images for confirming the contribution of each principal component with the image.
  • the validity verification process of the texture aging prediction model data using this principal component analysis is executed.
  • The control unit 21 executes a process of creating a prediction model machine-learned from the data of the remaining [n - 1] people excluding the data j (step S13-1). Specifically, the second learning unit 212 of the control unit 21 executes the second learning process of steps S12-1 to S12-2 using the [n - 1] data items, thereby generating a texture aging conversion model.
  • control unit 21 executes an aging process using the prediction model using the pre-aging data of the data j (step S13-2).
  • Specifically, the second learning unit 212 of the control unit 21 uses the pre-aging data of the data j as input, applies the generated texture aging conversion model, and calculates the post-aging texture.
  • Next, the control unit 21 executes a comparison process between the post-aging data (correct answer) of the data j and the aging result (step S13-3). Specifically, the second learning unit 212 of the control unit 21 compares the post-aging data generated in step S13-2 with the post-aging data of the data j stored in the aging data storage unit 22, and calculates the error at each point. In this case, the calculated error was found to be about 60% or less.
  • Next, texture aging processing using the wavelet transform will be described with reference to FIG.
  • In the aging based on principal component analysis, aging difference data is estimated, but this does not increase the spots and wrinkles that already exist in the face. Therefore, in order to perform aging that makes use of the existing spots and wrinkles, aging change estimation using the wavelet transform is also performed.
  • First, the control unit 21 performs a process of calculating the increase rate due to aging of the wavelet components (wavelet coefficient Ri) (step S14-1).
  • Specifically, the second learning unit 212 of the control unit 21 extracts the aging data stored in the aging data storage unit 22.
  • The second learning unit 212 calculates all the wavelet coefficients of each two-dimensional image of data number j, for each wavelet coefficient number i (that is, for each pixel).
  • The second learning unit 212 sums the calculated wavelet coefficients (the values for each pixel) of the image data before aging.
  • Similarly, the second learning unit 212 sums the calculated wavelet coefficients (the values for each pixel) of the image data after aging.
  • The second learning unit 212 then divides, for each pixel, the summed wavelet coefficient of the post-aging images by the summed wavelet coefficient of the pre-aging images to calculate the rate of change Ri. In this case, when the calculated rate of change Ri is less than 1, the second learning unit 212 sets it to "1".
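A rough sketch of this rate calculation, assuming PyWavelets for the transform; the wavelet family, decomposition level, and the use of coefficient magnitudes are assumptions made for illustration.

```python
import numpy as np
import pywt

def wavelet_coeff_array(image, wavelet="haar", level=3):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    arr, _ = pywt.coeffs_to_array(coeffs)   # low-frequency block ends up in the upper left
    return arr

def aging_rate(pre_images, post_images):
    sum_pre  = sum(np.abs(wavelet_coeff_array(img)) for img in pre_images)
    sum_post = sum(np.abs(wavelet_coeff_array(img)) for img in post_images)
    rate = sum_post / np.maximum(sum_pre, 1e-12)   # per-pixel rate of change Ri
    return np.maximum(rate, 1.0)                   # rates below 1 are replaced by 1

pre  = [np.random.rand(64, 64) for _ in range(4)]
post = [np.random.rand(64, 64) for _ in range(4)]
Ri = aging_rate(pre, post)
```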
  • FIG. 22(b) shows an image obtained by visualizing the wavelet coefficients Ri.
  • In this image, black indicates a wavelet coefficient Ri at the minimum value of "1", and whiter pixels indicate larger values.
  • The low-frequency components appear toward the upper left of the image. Specifically, a one-dimensional wavelet transform in the horizontal direction is performed on each row to separate the low-frequency and high-frequency components, a one-dimensional wavelet transform in the vertical direction is then performed on each column of the transformed signal, and the image shown is obtained by repeating this process.
  • <3D shape aging learning process> Next, a third learning process for aging the three-dimensional shape is executed, described with reference to FIG.
  • Here, a model for predicting the shape change due to aging in a three-dimensional face image is calculated by machine learning, using the homology models generated in the above-described shape homology modeling process.
  • In this case, the maximum correlation coefficient Cmax between the selected variables is set to "0.15".
  • The control unit 21 executes the calculation process of the principal component scores of the three-dimensional meshes (step S15-1). Specifically, the third learning unit 213 of the control unit 21 extracts the aging data stored in the aging data storage unit 22. Here, 144 items of aging data are extracted. The third learning unit 213 then calculates the three-dimensional mesh principal component scores for the extracted secular change data, using the principal component vectors of the three-dimensional mesh generated in the principal component analysis of the three-dimensional mesh points in step S10-1 described above.
  • The control unit 21 executes machine learning processing as in step S7-6 (step S15-2). Specifically, the machine learning unit 214b of the control unit 21 executes the machine learning processing using the following dependent variable and explanatory variables:
  • the aging change difference vector of the three-dimensional mesh principal component scores, normalized per unit year, is used as the dependent variable;
  • the principal component scores of the three-dimensional mesh before aging are used as the explanatory variables.
  • The third learning unit 213 records the shape aging model generated by the machine learning unit 214b in the model storage unit 25.
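A compact sketch of the shape aging model under the same simplifying assumptions as the texture sketch above (a fixed principal component basis standing in for step S10-1, plain least squares standing in for the embodiment's regression); shapes and names are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

n_vertices = 10741
meshes_pre  = np.random.rand(144, n_vertices * 3)   # 144 secular change samples, flattened meshes
meshes_post = np.random.rand(144, n_vertices * 3)
years = 10.0

mesh_pca = PCA(n_components=30).fit(meshes_pre)      # stands in for the basis from step S10-1
s_pre  = mesh_pca.transform(meshes_pre)              # 3D mesh principal component scores
s_post = mesh_pca.transform(meshes_post)

# Dependent variable: per-year difference of mesh scores; explanatory: pre-aging scores
shape_aging_model = LinearRegression().fit(s_pre, (s_post - s_pre) / years)
```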
  • The maximum value of the correlation coefficient between the aging change difference vector calculated in this way and the principal component scores is about "0.3", so there is a certain degree of correlation between the aging change and the principal component scores, and it is therefore appropriate to use them for regression analysis.
  • The number of selected variables is around 30, which is reasonable compared with the number of secular change data items used in the calculation. Further, the F value is 10 or more, the t value is 2 or more, and the coefficient of determination is roughly 70% or more for every principal component, which shows that the accuracy of the model is high.
  • In the aging prediction process, the control unit 21 first executes feature point extraction processing (step S16-1). Specifically, the aging management unit 215 of the control unit 21 executes face feature point extraction processing on the processing target two-dimensional face image data in the same manner as in step S1-2.
  • The control unit 21 executes face orientation estimation processing (step S16-2). Specifically, the first processing unit 216 of the control unit 21 uses the angle prediction model stored in the model storage unit 25 to identify, from the coordinates of the extracted face feature points, the direction in which the face was photographed, and converts the two-dimensional face image to the frontal direction.
  • The control unit 21 executes three-dimensional meshing processing (step S16-3). Specifically, the aging management unit 215 of the control unit 21 generates a three-dimensional mesh for the front-facing two-dimensional face image using the three-dimensional prediction model stored in the model storage unit 25.
  • The control unit 21 executes a process for generating a normalized cylindrical coordinate system image (step S16-4). Specifically, the aging management unit 215 of the control unit 21 uses the prediction result calculated in step S16-3 to create a two-dimensional mesh of the processing target two-dimensional face image. The aging management unit 215 creates the image in each polygon by affine transformation into the corresponding polygon in cylindrical coordinate system coordinates. Here, the image information in some polygons may be insufficient depending on the face orientation of the two-dimensional face image to be processed.
  • In that case, the aging management unit 215 assumes that the left and right sides of the face are symmetric and, using the center line of the face as the line of symmetry, complements the polygons with insufficient image information using the image information of the corresponding polygons on the opposite side.
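One possible way to implement this left-right completion, assuming the cylindrical image is stored with the face's center line at the middle column and that a boolean validity mask is available; both assumptions are illustrative.

```python
import numpy as np

def complete_by_symmetry(image, valid_mask):
    """image: (H, W) or (H, W, C) cylindrical image, valid_mask: (H, W) bool array."""
    mirrored = image[:, ::-1]                 # reflection about the vertical center line
    mirrored_valid = valid_mask[:, ::-1]
    fill = ~valid_mask & mirrored_valid       # holes that the mirrored side can fill
    completed = image.copy()
    completed[fill] = mirrored[fill]
    return completed
```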
  • The control unit 21 executes an aging process for the three-dimensional mesh (step S16-5). Specifically, the third processing unit 218 of the control unit 21 inputs the three-dimensional mesh generated in step S16-3 into the shape aging model stored in the model storage unit 25, and ages the three-dimensional mesh. In this case, the third processing unit 218 applies aging only to regions other than the shape prediction mask region.
  • Here, the white portion shown in FIG. 26(a) is used as the shape prediction mask region. These portions include the cheeks and the bridge of the nose, and are regions in which there is little change in shape due to aging.
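A minimal sketch of restricting the aging result to areas outside such a mask; the patent only states that masked regions are not aged, so the simple blend below is one possible realization.

```python
import numpy as np

def apply_outside_mask(original, aged, mask):
    """mask: 1 inside the protected (non-aged) region, 0 where aging is applied."""
    m = mask.astype(float)
    if original.ndim == 3 and m.ndim == 2:    # broadcast over color channels if present
        m = m[..., None]
    return m * original + (1.0 - m) * aged
```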
  • The control unit 21 executes a texture aging process (step S16-6). Details of this processing will be described later with reference to FIG.
  • The control unit 21 then executes an aged three-dimensional face image generation process (step S16-7). Specifically, the third processing unit 218 of the control unit 21 combines the aged results generated in steps S16-5 and S16-6, and generates an image in which both the shape and the texture are aged.
  • FIG. 27(a) shows face image data of a 30-year-old person.
  • FIGS. 27(b) and 27(c) show the images calculated by the control unit 21 when this face image data is used as input data and aging prediction is performed for 10 years and 15 years later, respectively. Here, the conversion from two dimensions to three dimensions is not performed, but it can be seen that the face is aged reasonably.
  • The control unit 21 executes an aged two-dimensional face image generation process (step S16-8). Specifically, the aging management unit 215 of the control unit 21 rotates the generated aged three-dimensional face image so that it faces in the direction identified in step S16-2, and generates the two-dimensional face image at that orientation. The aging management unit 215 displays the generated two-dimensional face image on the display.
  • Next, the texture aging process (step S16-6) described above will be described with reference to FIGS.
  • First, the second processing unit 217 of the control unit 21 executes a wavelet transform process (step S17-1). Specifically, the second processing unit 217 calculates the wavelet coefficients R1i by wavelet transforming the image Ii that has been aged using the principal component analysis model.
  • Next, the second processing unit 217 of the control unit 21 compares the absolute values of the two sets of wavelet coefficients and executes a magnitude relation determination process (step S17-2).
  • Specifically, the absolute value of the wavelet coefficient R1i based on the principal component analysis is compared with the value obtained by multiplying the absolute value of the wavelet coefficient R2i, calculated by the texture aging process using the wavelet transform, by the weighting coefficient w, and it is determined whether or not the absolute value of the wavelet coefficient R1i is the larger of the two.
  • When the absolute value of the wavelet coefficient R1i is larger than the value obtained by multiplying the absolute value of the wavelet coefficient R2i by the weighting coefficient w ("YES" in step S17-2), the second processing unit 217 of the control unit 21 executes a process of substituting the wavelet coefficient R1i into the wavelet coefficient R3i to be used (step S17-3).
  • When the absolute value of the wavelet coefficient R1i is equal to or smaller than the value obtained by multiplying the absolute value of the wavelet coefficient R2i by the weighting coefficient w ("NO" in step S17-2), the second processing unit 217 of the control unit 21 executes a process of substituting the wavelet coefficient R2i into the wavelet coefficient R3i to be used (step S17-4).
  • The above processing is repeated for each wavelet coefficient number i.
  • Then, the second processing unit 217 of the control unit 21 executes an inverse wavelet transform process on the wavelet coefficients R3 to be used (step S17-5).
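The coefficient-wise selection of steps S17-1 to S17-5 could look roughly like this, assuming PyWavelets and treating the wavelet-model coefficients R2 as already computed; the wavelet family, decomposition level, and helper names are assumptions.

```python
import numpy as np
import pywt

def combine_textures(image_pca_aged, R2, w=1.0, wavelet="haar", level=3):
    coeffs1 = pywt.wavedec2(image_pca_aged, wavelet, level=level)
    R1, slices = pywt.coeffs_to_array(coeffs1)           # R1i: coefficients of the PCA-aged image
    R3 = np.where(np.abs(R1) > w * np.abs(R2), R1, R2)   # keep R1i or R2i per coefficient
    coeffs3 = pywt.array_to_coeffs(R3, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs3, wavelet)                # step S17-5: inverse wavelet transform

img = np.random.rand(64, 64)       # stands in for the image aged with the PCA model
R2 = np.random.rand(64, 64)        # stands in for the wavelet-model coefficients R2i
aged_texture = combine_textures(img, R2, w=1.0)
```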
  • Next, the second processing unit 217 of the control unit 21 performs a mask area reflection process (step S17-6). Specifically, the second processing unit 217 applies the aging change only to regions other than the texture prediction mask region.
  • Here, the white portion shown in FIG. 26(b) is used as the texture prediction mask region. These portions include the eyes, nose, mouth, and the like, and are regions in which there is little change in texture due to aging.
  • As described above, the control unit 21 of the present embodiment includes the aging management unit 215 that generates a face image after aging, the second processing unit 217 that predicts changes due to aging in the texture of the face image, and the third processing unit 218 that predicts changes due to aging in the face shape.
  • Thereby, aging prediction is performed in consideration of both aging of the face shape and changes due to aging of the texture of the face image, so that an aged face image can be generated more accurately.
  • The learning management unit 210 of the aging prediction system 20 of the present embodiment uses the data recorded in the secular change data storage unit 22 and the snapshot data storage unit 23 to execute face feature point extraction, face feature point normalization processing, and homology modeling processing (steps S1-2 to S1-4). This makes it possible to generate the texture aging model and the shape aging model using a plurality of actual sample data items that are brought into anatomical correspondence with one another.
  • The control unit 21 of the present embodiment includes the first processing unit 216, which executes processing for generating three-dimensional face data from a processing target two-dimensional face image. Thereby, even if the orientation of the two-dimensional face image to be processed is not frontal, an aged face image can be generated with the face oriented at the specified angle.
  • The first processing unit 216 of the aging prediction system 20 executes the two-dimensional face image angle learning process using the three-dimensional meshes (homology models). Thereby, a three-dimensional prediction model for converting from two dimensions to three dimensions can be generated using actual sample data.
  • The second processing unit 217 of the aging prediction system 20 of the present embodiment executes the texture aging process using both a texture aging model based on principal component analysis and a texture aging model based on the wavelet transform. This makes it possible to generate an aged face image more accurately, because the wavelet-based aging makes use of the spots and wrinkles that already exist.
  • The second processing unit 217 of the aging prediction system 20 of the present embodiment stores the weighting coefficient w, a value that determines the relative importance of the model based on principal component analysis and the model based on the wavelet transform. Thereby, the weighting between the texture aging model based on principal component analysis and the texture aging model based on the wavelet transform can be adjusted by changing the weighting coefficient w.
  • The control unit 21 of the above embodiment generated the texture aging model and the shape aging model using the aging data stored in the aging data storage unit 22 and the snapshot data stored in the snapshot data storage unit 23.
  • Alternatively, a texture aging model and a shape aging model may be generated for each attribute (for example, sex or age group) of the secular change data or snapshot data.
  • In this case, the control unit 21 uses the secular change data and the snapshot data having the same attribute to generate the normalized cylindrical coordinate system image D5, the normalized face feature point data D6, and the three-dimensional mesh data D7 (homology model).
  • Using these, the control unit 21 generates a texture aging model and a shape aging model for each attribute, and stores them in the model storage unit 25 in association with the corresponding attribute information.
  • At prediction time, the control unit 21 acquires the attribute information of the face included in the image together with the processing target two-dimensional face image data.
  • The control unit 21 then performs the aging prediction process using the texture aging model and the shape aging model of the attribute corresponding to the acquired attribute information. Accordingly, more accurate face image data can be generated in consideration of the influence on texture and shape of attributes such as sex and age group.
  • The control unit 21 of the above embodiment used the wavelet coefficients R1i, obtained by wavelet transforming the image Ii aged using the principal component analysis, and the wavelet coefficients R2i of the texture aging model based on the wavelet transform.
  • However, the texture aging process is not limited to selecting one of these two wavelet coefficients R1i and R2i; a statistical value of the two coefficients (for example, their average value, or a value combined at a ratio depending on the attribute) may be used instead.
  • The control unit 21 of the above embodiment used the wavelet transform in the texture aging process.
  • However, the aging process is not limited to the wavelet transform; any method that deepens (emphasizes) the spots and wrinkles in the face texture can be used.
  • For example, the face texture may be aged using a known sharpening filter or a Fourier transform.
  • The control unit 21 of the above embodiment generated the aging models using principal component analysis processing.
  • However, the analysis process used to generate the aging models is not limited to principal component analysis. Any analysis process that can identify variables expressing individual differences can be used.
  • For example, an aging model may be generated using independent component analysis (ICA) or multidimensional scaling (MDS).
  • The control unit 21 of the above embodiment converts the two-dimensional face image to the frontal direction using the angle prediction model in the face orientation estimation process (step S16-2) of the aging prediction process.
  • However, the face orientation estimation process (step S16-2) of the aging prediction process is not limited to the method using the angle prediction model.
  • For example, the direction in which the face in the image was photographed may be identified using a known Procrustes analysis.
  • In the above embodiment, the control unit 21 executed machine learning processing in the first learning process for the conversion from a two-dimensional face image to three-dimensional face data, the second learning process for texture aging, and the third learning process for three-dimensional shape aging.
  • In this case, the machine learning unit 214b of the control unit 21 calculates the explanatory variables (the feature amounts used at the time of prediction) using the dependent variable (the prediction target feature amount) by multiple regression analysis.
  • However, the machine learning processing executed by the machine learning unit 214b of the control unit 21 is not limited to learning processing using multiple regression analysis, and other analysis methods may be used. For example, a combined learning process, a learning process based on PLS regression (Partial Least Squares Regression), or a learning process based on support vector regression (SVR) may be performed.
  • In the combined learning process, the machine learning unit 214b generates a single row vector (one-dimensional vector) by concatenating the "prediction target feature amount" and the "feature amount used at the time of prediction" of each sample data item.
  • In this case, the principal component coefficients of the "rotation angle" are used as the "prediction target feature amount" in the first learning process, and the principal component scores of the two-dimensional feature points of all data, divided by their standard deviations, are used as the "feature amount used at the time of prediction".
  • The machine learning unit 214b generates a data matrix of combined patch vectors in which the generated one-dimensional vectors are arranged in the column direction, one per sample data item.
  • Then, a principal component analysis of this data matrix is performed to generate a principal component vector matrix.
  • This principal component vector matrix is a matrix in which the row vectors are arranged in the order of the principal components that change most strongly across the "prediction target feature amount" and the "feature amount used at the time of prediction".
  • Next, the machine learning unit 214b executes an orthogonalization process on the principal component vector matrix.
  • Specifically, the "feature amount used at the time of prediction" part of the matrix is orthogonalized by the Gram-Schmidt method.
  • The "prediction target feature amount" part is converted by applying the same orthogonalization coefficients used for the "feature amount used at the time of prediction".
  • The machine learning unit 214b generates a prediction model using the orthogonalized "feature amounts used at the time of prediction" (matrix Di,j) and the correspondingly converted "prediction target feature amounts" (matrix Ei,j), and records the prediction model in the model storage unit 25.
  • At the time of prediction, the control unit 21 calculates the coefficients bi representing the weights of the principal components by taking the inner products of the input data Xj with the matrix Di,j stored in the model storage unit 25.
  • The control unit 21 then reconstructs the prediction data Yj, which is the prediction vector, using the coefficients bi and the matrix Ei,j. Thereby, the control unit 21 can calculate the prediction data Yj from the input data Xj.
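A sketch of this coupled-vector scheme under simplifying assumptions (centering handled by subtracting the mean, normalization folded into the Gram-Schmidt step); it is meant to show the structure of the learning and prediction steps, not the patent's exact implementation.

```python
import numpy as np

def learn_coupled_model(Y, X, n_components):
    """Y: (n, p) prediction targets, X: (n, q) features used at prediction time."""
    coupled = np.hstack([Y, X])                       # one combined row vector per sample
    mean = coupled.mean(axis=0)
    _, _, Vt = np.linalg.svd(coupled - mean, full_matrices=False)
    V = Vt[:n_components]                             # principal component vectors
    E, D = V[:, :Y.shape[1]], V[:, Y.shape[1]:]       # target part / feature part
    D_orth, E_conv = np.zeros_like(D), np.zeros_like(E)
    for i in range(n_components):                     # Gram-Schmidt on the feature part,
        d, e = D[i].copy(), E[i].copy()               # mirroring each step on the target part
        for k in range(i):
            c = d @ D_orth[k]
            d -= c * D_orth[k]
            e -= c * E_conv[k]
        norm = np.linalg.norm(d) + 1e-12
        D_orth[i], E_conv[i] = d / norm, e / norm
    return mean, D_orth, E_conv

def predict_coupled(x, mean, D_orth, E_conv):
    p = E_conv.shape[1]
    b = D_orth @ (x - mean[p:])                       # weights bi from inner products with Di
    return mean[:p] + b @ E_conv                      # reconstructed prediction Yj

Y = np.random.rand(80, 5)                             # e.g. rotation-angle coefficients
X = np.random.rand(80, 40)                            # e.g. normalized feature-point scores
mean, D, E = learn_coupled_model(Y, X, n_components=10)
y_hat = predict_coupled(X[0], mean, D, E)
```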
  • The PLS regression mentioned above uses the covariances wi of the independent variables (prediction target feature amounts) and the explanatory variables (feature amounts used at the time of prediction), and calculates a regression coefficient matrix by adding variables to the multiple regression analysis in descending order of their correlation. Specifically, the following processes [1] to [4] are repeated until the cross-validation error is minimized.
  • Here, the cross-validation error is the error between the prediction result and the prediction target when the sample data is divided into a prediction target part and an input data part and the prediction target is predicted using the input data.
  • [1] The machine learning unit 214b calculates the covariance matrix (correlation matrix) Wi of the independent variables (prediction target feature amounts) and the explanatory variables (feature amounts used at the time of prediction).
  • The covariance matrix Wi of the independent variables and the explanatory variables is calculated by the following equation.
  • Here, T denotes the transpose of a matrix.
  • [2] The machine learning unit 214b projects the independent variable Xi onto the space of the covariance wi and calculates the score matrix ti.
  • The machine learning unit 214b then executes the independent variable update process. Specifically, in the same manner as the update process for the explanatory variables, the machine learning unit 214b calculates a regression coefficient matrix that predicts the independent variable from the score matrix, removes the information already used for the regression, and calculates the remaining independent variables.
  • Next, the machine learning unit 214b determines whether or not the cross-validation error is at a minimum. Specifically, the machine learning unit 214b first treats a part of the learning data (for example, one quarter of the entire learning data) as the prediction target and uses the data excluding these prediction targets as input data. Then, using the explanatory variable Yi+1 and the independent variable Xi+1 calculated in the update processes, the error with respect to the prediction target is calculated.
  • When the control unit 21 determines that the cross-validation error is not yet at the minimum, the machine learning unit 214b sets the explanatory variable Yi+1 as Yi and the independent variable Xi+1 as Xi, and repeatedly executes the processing from [1] onward.
  • When the cross-validation error is at the minimum, the machine learning unit 214b generates the covariance matrix W by arranging the covariances wi calculated up to the preceding process [1] in the horizontal direction.
  • Similarly, the control unit 21 generates a matrix C by arranging the regression coefficients ci calculated in process [3] in the horizontal direction, and generates a matrix P by arranging the regression coefficients pi calculated in process [4] in the horizontal direction.
  • The regression coefficient matrix B is calculated using these matrices, and a prediction model generated using the regression coefficient matrix B is recorded in the model storage unit 25.
  • At the time of prediction, the control unit 21 performs prediction using the input data Xj and the recorded regression coefficient matrix B.
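The iteration described above is essentially a NIPALS-style construction of partial least squares regression. As a stand-in, a comparable regression coefficient matrix B can be obtained with scikit-learn's PLSRegression, choosing the number of components by cross-validation in the spirit of the cross-validation error above; the data and parameter grid are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import GridSearchCV

X = np.random.rand(144, 60)    # feature amounts used at the time of prediction
Y = np.random.rand(144, 30)    # prediction target feature amounts

# cv=4 holds out one quarter of the data at a time, echoing the 1/4 split mentioned above
search = GridSearchCV(PLSRegression(), {"n_components": list(range(1, 21))}, cv=4)
search.fit(X, Y)

B = search.best_estimator_.coef_          # regression coefficient matrix B
Y_pred = search.best_estimator_.predict(X[:1])
```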
  • The support vector regression analysis mentioned above is a nonlinear analysis that calculates a regression curve. Specifically, in the support vector regression analysis, the regression curve is calculated so that the sample data falls within the range (tube) of the regression curve ± a predetermined distance w. Data that fall outside the tube are treated as penalty data ξ, and a curve and a predetermined distance w that minimize the following expression are calculated.
  • Here, the adjustment constant C is a parameter for adjusting the allowable range of outliers; the larger the adjustment constant C, the smaller the allowable range. "ξ+i" is "0" if data item i is inside the tube, and is the distance from the tube if it is above the tube. "ξ−i" is "0" if data item i is inside the tube, and is the distance from the tube if it is below the tube.
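A small illustration of this kind of support vector regression using scikit-learn, where epsilon plays the role of the tube half-width w and C penalizes the slack variables for points outside the tube; the kernel, parameter values, and toy data are illustrative.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel() + 0.1 * rng.standard_normal(200)

# epsilon: half-width of the tube, C: penalty on slack (larger C = smaller tolerance for outliers)
model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(x, y)
y_fit = model.predict(x)
```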

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Provided is an aging prediction system, comprising: a model storage unit which stores shape aging models for predicting age-related changes in face shape, texture aging models for predicting age-related changes in facial surface texture, and three-dimensional prediction models for predicting three-dimensional data from two-dimensional images; and a control unit which predicts aging. The control unit is configured to execute a prediction process of generating an aged face model and outputting the generated aged face model to an output unit.

Description

Aging prediction system, aging prediction method, and aging prediction program

The present invention relates to an aging prediction system, an aging prediction method, and an aging prediction program for performing an aging simulation of a face image.

In beauty, a system for predicting changes in the shape of a face or body after aging has been studied (for example, see Patent Document 1). In the technique described in Patent Document 1, a polygon mesh is constructed from an image acquired by scanning a face, the polygon mesh is re-parameterized, and a base mesh and a displacement image are calculated. This displacement image is divided into a plurality of tiles, and the statistical value of each tile is measured. The displacement image is deformed by changing the statistical value, and the deformed displacement image is combined with the base mesh to synthesize a new face.

Patent Document 1: JP 2007-265396 A

However, when performing an aging simulation on a face image, it is not always possible to obtain three-dimensional face data, and the aging simulation may have to be performed using a two-dimensional face image of a face photographed from various angles. In this case, it is necessary to consider the direction (angle) in which the face in the two-dimensional face image was photographed. Also, a face has feature points that are at anatomically identical positions. Therefore, if the face image is handled as a simple texture image without considering the anatomical feature points, it is difficult to efficiently execute an accurate aging simulation.

An object of the present invention is to provide an aging prediction system, an aging prediction method, and an aging prediction program for efficiently and accurately performing an aging simulation of a face image.

In one aspect, an aging prediction system that solves the above problems includes a model storage unit that stores a shape aging model that predicts changes due to aging of the face shape, a texture aging model that predicts changes due to aging of the texture of the face surface, and a three-dimensional prediction model that predicts three-dimensional data from a two-dimensional image, and a control unit that is configured to be connected to an input unit and an output unit and that predicts aging. The control unit is configured to execute a prediction process of: acquiring a prediction target image from the input unit; extracting feature points of the prediction target image; estimating the face orientation in the prediction target image using the extracted feature points; generating first three-dimensional data based on the three-dimensional prediction model and the estimated face orientation; generating second three-dimensional data from the first three-dimensional data using the shape aging model; applying the texture aging model to a two-dimensional image generated based on the first three-dimensional data to generate an aged texture; combining the aged texture with the second three-dimensional data to generate an aged face model; and outputting the generated aged face model to the output unit. Since the aged face model is generated using both the shape aging model and the texture aging model, aging can be performed more accurately than by aging only the shape or only the texture. Furthermore, even if the face in the prediction target image is not photographed from the front, the face orientation in the prediction target image can be estimated by learning and the face can be aged efficiently.
 一実施形態では、前記制御部は、前記入力部から、出力用として指定された顔向き角度を取得し、生成した前記加齢化顔モデルを用いて、前記顔向き角度の2次元顔画像を生成して、生成した前記2次元顔画像を前記出力部に出力するようにさらに構成されてよい。これにより、指定された顔向きで加齢化した顔の画像を出力することができる。 In one embodiment, the control unit obtains a face orientation angle designated for output from the input unit, and uses the generated aging face model to generate a two-dimensional face image of the face orientation angle. It may be further configured to generate and output the generated two-dimensional face image to the output unit. As a result, an image of an aging face with a designated face orientation can be output.
 一実施形態では、前記2次元画像は第1の2次元画像であり、前記制御部は、取得した3次元顔サンプルデータに基づいて、第2の2次元画像を生成し、前記第2の2次元画像において特徴点を特定し、前記特徴点を用いて、前記3次元顔サンプルデータを正規化した正規化サンプルデータを生成し、前記正規化サンプルデータを用いて、前記形状加齢モデル及び前記テクスチャ加齢モデルを生成し、生成した前記形状加齢モデル及び生成した前記テクスチャ加齢モデルを前記モデル記憶部に記憶する学習処理を実行するようにさらに構成されてよい。これにより、実際のサンプルデータに基づいて加齢モデルを生成することができる。 In one embodiment, the two-dimensional image is a first two-dimensional image, and the control unit generates a second two-dimensional image based on the acquired three-dimensional face sample data, and the second 2D image A feature point is specified in a three-dimensional image, and the feature point is used to generate normalized sample data obtained by normalizing the three-dimensional face sample data. Using the normalized sample data, the shape aging model and the It may be further configured to generate a texture aging model and execute a learning process for storing the generated shape aging model and the generated texture aging model in the model storage unit. Thereby, an aging model can be generated based on actual sample data.
 一実施形態では、前記学習処理は、前記正規化サンプルデータを用いて前記3次元化予測モデルを生成して、前記モデル記憶部に記憶することを含んでよい。これにより、実際のサンプルデータに基づいて3次元化予測モデルを生成することができる。 In one embodiment, the learning process may include generating the three-dimensional prediction model using the normalized sample data and storing it in the model storage unit. Thereby, a three-dimensional prediction model can be generated based on actual sample data.
 一実施形態では、前記モデル記憶部には、主成分分析を用いて算出した第1のテクスチャ加齢モデルと、ウェーブレット変換を用いて算出した第2のテクスチャ加齢モデルとが記憶されており、前記制御部は、前記第1の2次元画像に対して、第1のテクスチャ加齢モデルを適用した画像をウェーブレット変換した第1のウェーブレット係数と、前記第1の2次元画像に対して、第2のテクスチャ加齢モデルを適用した第2のウェーブレット係数とを比較した結果に応じて、適用する前記テクスチャ加齢モデルを特定するようにさらに構成されてよい。これにより、既に存在するシミ及び皺等を用いて予測する第2のテクスチャ加齢モデルを用いることができるので、より適切な加齢モデルを生成することができる。 In one embodiment, the model storage unit stores a first texture aging model calculated using principal component analysis and a second texture aging model calculated using wavelet transform. The control unit performs a first wavelet coefficient obtained by performing a wavelet transform on an image obtained by applying a first texture aging model to the first two-dimensional image, and a first wavelet coefficient for the first two-dimensional image. The texture aging model to be applied may be further specified according to a result of comparison with the second wavelet coefficient to which the texture aging model of 2 is applied. This makes it possible to use the second texture aging model that is predicted by using existing stains, wrinkles, and the like, so that a more appropriate aging model can be generated.
 別の一態様では、顔形状の加齢による変化を予測する形状加齢モデルと、顔表面のテクスチャの加齢による変化を予測するテクスチャ加齢モデルと、2次元画像から3次元データを予測する3次元化予測モデルとを記憶したモデル記憶部と、入力部、出力部に接続されるように構成された制御部とを備えた加齢化予測システムを用いて、加齢化を予測する方法を提供する。前記制御部が、前記入力部から予測対象画像を取得し、前記予測対象画像の特徴点を抽出し、抽出した前記特徴点を用いて、前記予測対象画像における顔向きを推定し、前記3次元化予測モデル及び推定した前記顔向きに基づいて、第1の3次元データを生成し、前記形状加齢モデルを用いて、前記第1の3次元データから第2の3次元データを生成し、前記第1の3次元データに基づいて生成された2次元画像に対して、前記テクスチャ加齢モデルを適用して、加齢化テクスチャを生成し、前記第2の3次元データに対して、前記加齢化テクスチャを合成して、加齢化顔モデルを生成し、生成した前記加齢化顔モデルを前記出力部に出力する予測処理を実行する。 In another aspect, a shape aging model that predicts changes in facial shape due to aging, a texture aging model that predicts changes in facial texture due to aging, and 3D data from 2D images Method for predicting aging by using an aging prediction system including a model storage unit storing a three-dimensional prediction model, and a control unit configured to be connected to an input unit and an output unit I will provide a. The control unit acquires a prediction target image from the input unit, extracts a feature point of the prediction target image, estimates a face direction in the prediction target image using the extracted feature point, and Generating first three-dimensional data based on the predicted prediction model and the estimated face orientation; generating second three-dimensional data from the first three-dimensional data using the shape aging model; Applying the texture aging model to the two-dimensional image generated based on the first three-dimensional data to generate an aging texture, and for the second three-dimensional data, A prediction process for generating an aging face model by synthesizing aging textures and outputting the generated aging face model to the output unit is executed.
 さらに別の態様では、顔形状の加齢による変化を予測する形状加齢モデルと、顔表面のテクスチャの加齢による変化を予測するテクスチャ加齢モデルと、2次元画像から3次元データを予測する3次元化予測モデルとを記憶したモデル記憶部と、入力部、出力部に接続されるように構成された制御部とを備えた加齢化予測システムを用いて、加齢化を予測するプログラムを記憶する非一時的なコンピュータ可読記憶媒体を提供する。加齢化予測システムによる前記プログラムの実行時、前記制御部が、前記入力部から予測対象画像を取得し、前記予測対象画像の特徴点を抽出し、抽出した前記特徴点を用いて、前記予測対象画像における顔向きを推定し、前記3次元化予測モデル及び推定した前記顔向きに基づいて、第1の3次元データを生成し、前記形状加齢モデルを用いて、前記第1の3次元データから第2の3次元データを生成し、前記第1の3次元データに基づいて生成された2次元画像に対して、前記テクスチャ加齢モデルを適用して、加齢化テクスチャを生成し、前記第2の3次元データに対して、前記加齢化テクスチャを合成して、加齢化顔モデルを生成し、生成した前記加齢化顔モデルを前記出力部に出力する予測処理を実行する。 In yet another aspect, a shape aging model that predicts changes due to aging of the face shape, a texture aging model that predicts changes due to aging of the texture of the face, and 3D data from 2D images are predicted. A program for predicting aging by using an aging prediction system including a model storage unit storing a three-dimensional prediction model, and a control unit configured to be connected to an input unit and an output unit A non-transitory computer-readable storage medium is provided. When the program is executed by the aging prediction system, the control unit acquires a prediction target image from the input unit, extracts a feature point of the prediction target image, and uses the extracted feature point to perform the prediction Estimating the face orientation in the target image, generating first three-dimensional data based on the three-dimensional prediction model and the estimated face orientation, and using the shape aging model, the first three-dimensional Generating second three-dimensional data from the data, applying the texture aging model to the two-dimensional image generated based on the first three-dimensional data, generating an aging texture, Synthesizing the aging texture with the second three-dimensional data to generate an aging face model, and executing a prediction process for outputting the generated aging face model to the output unit .
FIG. 1 is an explanatory diagram of the aging prediction system of this embodiment.
FIG. 2 is a flowchart of the overall data generation process used in this embodiment.
FIG. 3 is a flowchart of the procedure of the pre-learning process of this embodiment.
FIG. 4 is a flowchart of the procedure of the conversion process to the cylindrical coordinate system of this embodiment.
FIG. 5 is an explanatory diagram of the conversion process to the cylindrical coordinate system of this embodiment, in which (a) is an image with a missing portion, (b) is a two-dimensional face image with the radius and the cylinder-direction angle of the cylindrical coordinate system as its two axes, and (c) is a two-dimensional face image with the cylinder height and the circumferential angle of the cylindrical coordinate system as its two axes.
FIG. 6 is an explanatory diagram of the face feature points used in the face feature point extraction process of this embodiment.
FIG. 7 is a flowchart of the procedure of the face feature point normalization process of this embodiment.
FIG. 8 is an explanatory diagram of the normalization process by multiple regression analysis of this embodiment, in which (a) is a flowchart of the procedure, (b) illustrates the inclination correction around the X axis and the Y axis, and (c) illustrates the inclination correction around the Z axis.
FIG. 9 is an explanatory diagram of the shape homology modeling process of this embodiment, in which (a) is a flowchart of the procedure of the shape homology modeling process and (b) is a mesh model.
FIG. 10 is an explanatory diagram of the texture homology modeling process of this embodiment, in which (a) is a flowchart of the procedure of the texture homology modeling process and (b) is an average face showing the average of the textures deformed to the average coordinates of each mesh model.
FIG. 11 is an explanatory diagram of the shape homology modeling process of this embodiment, in which (a) is an image-normalized mesh model and (b) is the average face generated from the calculated average texture.
FIG. 12 is a flowchart of the procedure of the angle learning process for two-dimensional face images of this embodiment.
FIG. 13 is an explanatory diagram of the variable selection algorithm in the machine learning process of this embodiment.
FIG. 14 is an explanatory diagram of the variable update algorithm in the machine learning process of this embodiment.
FIG. 15 is a flowchart of the procedure of the learning process for three-dimensional conversion of this embodiment.
FIG. 16 is a flowchart of the procedure of the prediction verification process for three-dimensional conversion of this embodiment.
FIG. 17 is an explanatory diagram of the verification results of the three-dimensional conversion of this embodiment, in which (a) is the input data, (b) is the correct answer, (c) is the prediction using only the two-dimensional feature points, and (d) is the face image predicted using the two-dimensional feature points and the image.
FIG. 18 is a flowchart of the procedure of the texture aging learning process using principal component analysis of this embodiment.
FIG. 19 is a flowchart of the procedure of the texture aging prediction verification process of this embodiment.
FIG. 20 is an explanatory diagram of each principal component in the texture aging prediction verification of this embodiment.
FIG. 21 is an explanatory diagram of images reconstructed while changing the upper limit of the principal components in this embodiment.
FIG. 22 is an explanatory diagram of the texture aging process using the wavelet transform of this embodiment, in which (a) is a flowchart of the procedure and (b) is an image visualizing the wavelet coefficients.
FIG. 23 is a flowchart of the procedure of the three-dimensional shape aging learning process of this embodiment.
FIG. 24 is a flowchart of the procedure of the aging prediction process of this embodiment.
FIG. 25 is a flowchart of the procedure of the texture aging prediction process of this embodiment.
FIG. 26 is an explanatory diagram of the mask regions used in the aging prediction process of this embodiment, in which (a) shows the shape prediction mask region and (b) shows the texture prediction mask region.
FIG. 27 shows images in the aging prediction process of this embodiment, in which (a) is the input data, (b) is the aging prediction 10 years later, and (c) is the image predicting aging 15 years later.
Hereinafter, an embodiment embodying the aging prediction system, the aging prediction method, and the aging prediction program will be described with reference to FIGS. 1 to 27. In this embodiment, the secular change of the face due to aging is learned using three-dimensional face data from before and after aging, and an aging simulation for predicting the aged face image is performed using a photographed two-dimensional face image.
<Configuration of aging prediction system>
As shown in FIG. 1, in this embodiment, an aging prediction system 20 is used. An input unit 10 and an output unit 15 are connected to the aging prediction system 20. The input unit 10 is a means for inputting various types of information, and includes a keyboard, a pointing device, an input interface for acquiring data from a recording medium, and the like. The output unit 15 is a means for outputting various types of information, and includes a display or the like.
The aging prediction system 20 is a computer system for performing aging prediction processing. The aging prediction system 20 includes a control unit 21, an aging data storage unit 22, a snapshot data storage unit 23, and a model storage unit 25.
The control unit 21 includes control means (CPU, RAM, ROM, etc.) and performs the processes described later (a learning management stage, first to third learning stages, a principal component analysis stage, a machine learning stage, an aging management stage, and first to third processing stages). By executing the aging prediction program for these processes, the control unit 21 functions as a learning management unit 210, a first learning unit 211, a second learning unit 212, a third learning unit 213, a principal component analysis unit 214a, a machine learning unit 214b, an aging management unit 215, a first processing unit 216, a second processing unit 217, and a third processing unit 218.
The learning management unit 210 executes a process of learning the secular change of the face in the aging simulation. As will be described later, the learning management unit 210 stores centroid calculating means for calculating the centroid position of both eyes using face feature points. Furthermore, the learning management unit 210 creates in advance data related to a generic model (basic model) used in the shape homology modeling process described later, and stores the data in a memory. This generic model is a model of the face that represents the general characteristics of Japanese people. In the present embodiment, a generic model having 10741 mesh points and 21256 polygons (triangles) is used. This generic model includes mesh points corresponding to face feature points, and identification information for specifying each face feature point is set.
In addition, the learning management unit 210 stores a predetermined texture average calculation rule for calculating the average of the coordinates of each vertex of the normalized mesh model from the coordinates of the face feature points in the texture homology modeling process described later.
Further, the learning management unit 210 includes a learning memory for recording a cylindrical coordinate system image, a cylindrical coordinate system coordinate, and a homologous model used for learning.
The first learning unit 211 executes a first learning process for creating a model for predicting three-dimensional face data represented by a three-dimensional cylindrical coordinate system from two-dimensional face data (face image).
The second learning unit 212 executes a second learning process for creating a model for predicting changes due to aging in the texture of a face image.
The third learning unit 213 executes a third learning process for creating a model for predicting changes due to aging in the face shape.
The principal component analysis unit 214a performs principal component analysis processing in response to instructions from the learning management unit 210 and the learning units 211 to 213.
The machine learning unit 214b performs a process of calculating an explanatory variable (a feature amount used at the time of prediction) using a dependent variable (a prediction target feature amount) according to an instruction from the learning units 211 to 213.
The aging management unit 215 executes processing for generating a face image after aging using a two-dimensional face image. The aging management unit 215 acquires a two-dimensional face image to be predicted, and uses the first to third processing units 216 to 218 to perform an aging simulation of the texture and shape.
The first processing unit 216 executes processing for generating three-dimensional face data represented by a three-dimensional cylindrical coordinate system from the processing target two-dimensional face image.
The second processing unit 217 executes a process of predicting a change due to aging for the texture (texture) of the face image. In the present embodiment, the texture aging process is executed using a texture aging model using principal component analysis and a texture aging model using wavelet (WAVELET) transformation. The second processing unit 217 stores the weighting coefficient w used for this processing in the memory. The weight coefficient w is a value for determining which model is to be emphasized when using a model using principal component analysis and a model using wavelet transform. In the present embodiment, “1” is used as the weighting coefficient w.
The third processing unit 218 executes a process of predicting changes due to aging in the face shape using the three-dimensional face data.
The aging data storage unit 22 stores three-dimensional face data (secular change data) from before and after aging (10 years later, in this embodiment) for a predetermined number of learning subjects (samples used for learning). By using this secular change data, the changes before and after aging can be grasped. In this embodiment, data for about 170 people is used as the secular change data.
In the snapshot data storage unit 23, three-dimensional face data (snapshot data) obtained by photographing a larger number of learning subjects is recorded. This snapshot data is taken only once, and no secular change data is recorded in the snapshot data storage unit 23. In this embodiment, data for about 870 people is used as snapshot data.
It is assumed that the three-dimensional data in the aging data storage unit 22 and the snapshot data storage unit 23 of the present embodiment is recorded in a data format (XYZ coordinates) other than the cylindrical coordinate system.
The model storage unit 25 stores the models (algorithms for calculating a result variable using explanatory variables) used when performing the various predictions in the aging simulation. In the present embodiment, data related to an angle prediction model, a three-dimensional prediction model, a texture aging model, and a shape aging model are stored.
In the angle prediction model data area, model data for predicting the direction (angle with respect to the front of the face) in which the face of the processing target two-dimensional face image was photographed is stored. This angle prediction model data is calculated and recorded by the first learning unit 211.
In the three-dimensional prediction model data area, model data for converting a front-facing two-dimensional face image into three-dimensional face data is stored. This three-dimensional prediction model data is calculated and recorded by the first learning unit 211.
In the texture aging model data area, model data for predicting the facial texture after aging is stored. This texture aging model data is calculated and recorded by the second learning unit 212. In the present embodiment, texture aging model data using principal component analysis and texture aging model data using the wavelet transform are stored.
In the shape aging model data area, model data for predicting the face shape after aging is stored. This shape aging model data is calculated and recorded by the third learning unit 213.
<Model generation>
Next, an outline of the generation process for each model will be described with reference to FIG. 2.
First, the control unit 21 of the aging prediction system 20 generates the cylindrical coordinate system image D2 and the cylindrical coordinate system coordinate data D3 using the three-dimensional face data D1 stored in the aging data storage unit 22. The cylindrical coordinate system image D2 is two-dimensional image data created by projecting three-dimensional face data onto a cylindrical coordinate system and interpolating into a “900 × 900” equidistant mesh. The cylindrical coordinate system coordinate data D3 is data relating to the three-dimensional coordinates of each point of the “900 × 900” image generated by projecting the three-dimensional face data onto the cylindrical coordinate system.
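One way the projection that produces D2 and D3 could be sketched, assuming the cylinder axis is the vertical (y) axis of the scan and using SciPy's griddata for resampling onto an equally spaced grid; these choices, and the reduced grid size in the example call, are assumptions made for illustration.

```python
import numpy as np
from scipy.interpolate import griddata

def to_cylindrical_image(points, values, size=900):
    """points: (N, 3) XYZ vertices (y = cylinder axis), values: (N,) intensities."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    theta = np.arctan2(x, z)                      # angle around the cylinder
    grid_t, grid_y = np.meshgrid(
        np.linspace(theta.min(), theta.max(), size),
        np.linspace(y.min(), y.max(), size),
    )
    image = griddata((theta, y), values, (grid_t, grid_y), method="linear")   # D2
    coords = np.stack(                                                        # D3: XYZ at each pixel
        [griddata((theta, y), points[:, k], (grid_t, grid_y), method="linear") for k in range(3)],
        axis=-1,
    )
    return image, coords

pts = np.random.rand(5000, 3)
img, xyz = to_cylindrical_image(pts, np.random.rand(5000), size=64)
```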
Next, the control unit 21 generates face feature point data D4 using the cylindrical coordinate system image D2 and the cylindrical coordinate system coordinate data D3. The face feature point data D4 is data relating to the coordinates of the face feature points in the cylindrical coordinate system image D2. Details of the facial feature points will be described later.
The control unit 21 uses the cylindrical coordinate system coordinate data D3 to normalize the face feature point data D4 and the cylindrical coordinate system image D2, and generates a normalized cylindrical coordinate system image D5, normalized face feature point data D6, and three-dimensional mesh data D7 (homology model). Details of this normalization processing will be described later. Here, the homology model is three-dimensional coordinate data in which the three-dimensional data of a face is expressed as a mesh and converted so that the corresponding mesh vertices of different data items are located at anatomically identical positions.
Further, the control unit 21 generates a two-dimensional face image D8 photographed from an arbitrary angle using the normalized cylindrical coordinate system image D5 and the three-dimensional mesh data D7 (homology model).
Using the two-dimensional face image D8 photographed from an arbitrary angle, the normalized face feature point data D6, and the three-dimensional mesh data D7 (homology model), the control unit 21 executes the first learning process for the conversion from a two-dimensional face image to three-dimensional face data. Details of the first learning process will be described later.
Furthermore, the control part 21 performs the 2nd learning process about texture aging using the normalized cylindrical coordinate system image D5. Details of the second learning process will be described later.
Moreover, the control part 21 performs the 3rd learning process about three-dimensional shape aging using the three-dimensional mesh data D7 (homology model). Details of the third learning process will be described later.
<Pre-learning processing>
Next, the pre-learning processing performed before the first to third learning processes described above will be described with reference to FIGS. 3 to 11. This processing is performed individually for each item of three-dimensional data recorded in the aging data storage unit 22 and the snapshot data storage unit 23.
As shown in FIG. 3, the control unit 21 first executes conversion processing into a cylindrical coordinate system (step S1-1). Details of this processing will be described later with reference to FIGS. 4 and 5.
Next, the control unit 21 executes facial feature point extraction processing (step S1-2). Details of this processing will be described later with reference to FIG. 6.
Next, the control unit 21 executes normalization processing of the facial feature points (step S1-3). Details of this processing will be described later with reference to FIGS. 7 and 8.
Next, the control unit 21 executes homologous modeling processing (step S1-4). Here, a homologous model of the face shape and a homologous model of the texture are generated. Details of these processes will be described later with reference to FIGS. 9 and 10.
Next, the control unit 21 executes processing for generating a normalized cylindrical coordinate system image (step S1-5). Details of this processing will be described later with reference to FIG. 11.
<Conversion processing into a cylindrical coordinate system>
Next, the conversion processing into the cylindrical coordinate system (step S1-1) will be described with reference to FIGS. 4 and 5.
First, the learning management unit 210 of the control unit 21 executes interpolation processing for missing portions (step S2-1). Specifically, the learning management unit 210 checks whether any portion of the three-dimensional face data is missing. When a missing portion is detected, the learning management unit 210 interpolates it using the surrounding information: the missing portion is filled in by a known interpolation method based on data within a predetermined range adjacent to it.
For example, assume that the three-dimensional face data shown in FIG. 5(a) is used. FIG. 5(b) represents this three-dimensional face data in a cylindrical coordinate system whose two axes are the radius and the angle around the cylinder. In this three-dimensional face data, some of the data around the chin and the ears is missing. Images of these portions are generated by interpolation from the surrounding images and used to fill in the gaps.
Next, the learning management unit 210 of the control unit 21 executes generation processing of the cylindrical coordinate system image (step S2-2). Specifically, the learning management unit 210 projects the three-dimensional face data, with the missing portions interpolated, onto a cylindrical coordinate system (two-dimensional mapping). The learning management unit 210 interpolates the projected face image data onto a 900 × 900 equally spaced mesh to generate a two-dimensional face image in the cylindrical coordinate system, and records the generated two-dimensional face image in the learning memory as the cylindrical coordinate system image D2.
FIG. 5(c) is a two-dimensional face image in which the three-dimensional face data projected onto the cylindrical coordinate system is represented on two axes (cylinder height and circumferential angle).
Next, the learning management unit 210 of the control unit 21 executes generation processing of the cylindrical coordinate system coordinates (step S2-3). Specifically, the learning management unit 210 projects each coordinate (X, Y, Z) of the three-dimensional face data onto the cylindrical coordinate system and generates, for each point of the 900 × 900 image described above, coordinate data in the cylindrical coordinate system (radial distance, angle, height). The learning management unit 210 records the generated data in the learning memory as the cylindrical coordinate system coordinate data D3.
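As a rough illustration of steps S2-2 and S2-3, the projection and resampling could be sketched as follows. This is a minimal sketch, assuming the cylinder axis is the vertical axis through the head, per-vertex gray values as the texture, and SciPy's griddata for the interpolation onto the 900 × 900 equally spaced mesh; the function name and argument layout are illustrative and not part of the embodiment.

```python
import numpy as np
from scipy.interpolate import griddata

def to_cylindrical_image(points, gray, size=900):
    """Project 3D face vertices onto a cylindrical coordinate system (step S2-2)
    and build, for every pixel, the cylindrical coordinates of step S2-3.
    points : (N, 3) vertex coordinates; the cylinder axis is taken as the Y axis
    gray   : (N,) gray value (texture) of each vertex"""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    theta = np.arctan2(x, z)                 # circumferential angle
    radius = np.sqrt(x ** 2 + z ** 2)        # radial distance from the axis
    height = y                               # cylinder height

    # Equally spaced size x size mesh over (angle, height), the two image axes.
    grid_t, grid_h = np.meshgrid(np.linspace(theta.min(), theta.max(), size),
                                 np.linspace(height.min(), height.max(), size))

    # Interpolate the scattered projected points onto the regular mesh.
    image = griddata((theta, height), gray, (grid_t, grid_h), method="linear")
    rad = griddata((theta, height), radius, (grid_t, grid_h), method="linear")

    coords = np.stack([rad, grid_t, grid_h], axis=-1)   # (radius, angle, height)
    return image, coords    # cylindrical coordinate system image D2 and data D3
```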
<Facial feature point extraction processing>
Next, the facial feature point extraction processing (step S1-2) will be described with reference to FIG. 6. Here, a facial feature point is a characteristic position on one of the facial parts that make up the face (eyebrows, eyes, nose, mouth, ears, cheeks, lower jaw, etc.), for example the outermost point of an eyebrow, the innermost point of an eyebrow, or a mouth corner point. In this embodiment, 33 facial feature points are used. The user may also be allowed to add or delete facial feature points arbitrarily; in that case, the processing described later is performed using the added or deleted set of facial feature points.
FIG. 6 shows, with numbers, the 32 facial feature points used in this embodiment. Feature point number 33 is calculated from the other feature points as the midpoint of the straight line connecting the centroid of the eye feature points and the apex of the nose.
The learning management unit 210 of the control unit 21 identifies the facial feature points from the generated cylindrical coordinate system coordinates and calculates their positions (coordinates). In this embodiment, automatic extraction is performed using the known AAM (Active Appearance Models) method, which is used for facial expression tracking, face recognition, and the like. In the AAM method, the target object (here, a face) is modeled with a finite number of vertices, and the feature points of the target object are extracted by fitting this model to the input image.
The learning management unit 210 displays on the output unit 15 a face image in which the extracted facial feature points are associated with their extraction positions. In this display, each facial feature point is arranged so that its position can be moved.
The person in charge then checks the positions of the facial feature points on the face image displayed on the output unit 15 and corrects them as necessary. When confirmation of the facial feature points, or completion of their correction, is input on the face image, the learning management unit 210 generates face feature point data D4 in which the cylindrical coordinate system coordinates of each facial feature point are associated with its feature point number, and stores it in the learning memory.
<Normalization processing of facial feature points>
Next, the normalization processing of facial feature points (step S1-3) will be described with reference to FIG. 7.
First, the learning management unit 210 of the control unit 21 executes normalization processing by multiple regression analysis using the extracted face feature point data (step S3-1). Here, the rotations about the X, Y, and Z axes are obtained from the facial feature points by multiple regression analysis, and the orientations of the faces are aligned. The size of the face is normalized so that the distance between the centers of the eyes becomes a predetermined value (64 mm in this embodiment). Details of this processing will be described later with reference to FIG. 8.
Next, the learning management unit 210 of the control unit 21 executes calculation processing of the average of the feature points (step S3-2). Specifically, the learning management unit 210 calculates the average coordinates of each feature point using the coordinates of the 33 facial feature points of every learning subject. In this way, the coordinates of each feature point of the average face of the learning subjects are calculated.
Next, the learning management unit 210 of the control unit 21 executes normalization processing by Procrustes analysis (step S3-3). Specifically, the learning management unit 210 translates, rotates, and resizes each set of feature points by the least squares method so that the sum of the squared distances between the average coordinates calculated in step S3-2 and the corresponding facial feature points is minimized. Here, 25 facial feature points are used (feature point numbers 1 to 6, 10, 14 to 22, 24 to 27, and 28 to 32 in FIG. 6), excluding the tragus points (feature point numbers 7 and 13), the mandibular angle points (feature point numbers 8 and 12), and the like. As a result, the facial feature points are adjusted so as to be close to the average face.
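A minimal sketch of the Procrustes step S3-3, assuming an SVD-based similarity fit (translation, rotation, isotropic scale) of each subject's feature points to the average coordinates; the index list of the 25 feature points used for fitting is assumed to be supplied by the caller.

```python
import numpy as np

def procrustes_align(points, mean_points, use_idx):
    """Step S3-3: translate, rotate, and resize one subject's feature points
    so that the squared distance to the average feature points is minimized.
    points, mean_points : (33, 3) feature coordinates
    use_idx             : indices of the 25 feature points used for fitting"""
    src, dst = points[use_idx], mean_points[use_idx]
    src_c, dst_c = src - src.mean(axis=0), dst - dst.mean(axis=0)

    # Optimal rotation from the SVD of the cross-covariance (Kabsch solution).
    u, s, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(u @ vt))            # avoid a reflection
    rot = u @ np.diag([1.0, 1.0, d]) @ vt         # transpose of the rotation matrix

    # Optimal isotropic scale for the least-squares similarity fit.
    scale = (s * [1.0, 1.0, d]).sum() / (src_c ** 2).sum()

    # Apply the similarity transform to the full set of 33 feature points.
    return (points - src.mean(axis=0)) @ rot * scale + dst.mean(axis=0)
```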
<Normalization processing by multiple regression analysis>
Next, the normalization processing by multiple regression analysis (step S3-1) will be described with reference to FIG. 8(a). This processing aligns the orientation of the face and normalizes the size of the face.
First, the learning management unit 210 of the control unit 21 executes processing for identifying the centroid of the eye feature points (step S4-1). Specifically, the learning management unit 210 identifies the facial feature points related to the eyes among the face feature point data. Next, the learning management unit 210 calculates the position of the centroid of both eyes from the coordinates of the extracted facial feature points using a centroid calculation means, and sets the calculated centroid of both eyes as the origin.
Next, the learning management unit 210 of the control unit 21 executes tilt correction processing about the X and Y axes (step S4-2). Specifically, with the centroid of both eyes as the origin, the learning management unit 210 performs a multiple regression analysis on the set of facial feature points excluding the face outline and the apex of the nose, using the Z coordinate as the objective variable and the X and Y coordinates as the explanatory variables.
Here, as shown in FIG. 8(b), a regression plane RPS is calculated by the multiple regression analysis. The learning management unit 210 rotates the regression plane RPS about the X and Y axes so that its normal NV becomes parallel to the Z axis.
Next, the learning management unit 210 of the control unit 21 executes tilt correction processing about the Z axis (step S4-3). Specifically, the learning management unit 210 uses a set of facial feature points for calculating the center line of the face. In this embodiment, this set consists of the centroid of both eyes, the apex of the nose, the lower end of the nose, the upper end of the upper lip, the lower end of the lower lip, and the coordinates of the tip of the chin.
As shown in FIG. 8(c), a regression line RL is calculated for this set using the Y coordinate as the objective variable and the X coordinate as the explanatory variable. The learning management unit 210 rotates the data about the Z axis so that the slope of the calculated regression line RL becomes parallel to the Y axis.
Next, the learning management unit 210 of the control unit 21 executes scaling processing (step S4-4). Specifically, the learning management unit 210 enlarges or reduces the data so that the distance between the centers of the eyes becomes 64 mm.
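The orientation and size normalization of steps S4-1 to S4-4 could be sketched as follows. This is a minimal sketch, assuming NumPy least-squares fits for the regression plane and regression line and a Rodrigues rotation to align the plane normal with the Z axis; the index arguments, the sign convention for the plane normal, and the use of the two eye-center points in place of all eye feature points are assumptions of the sketch.

```python
import numpy as np

def _rotation_between(u, v):
    """Rodrigues rotation matrix taking direction u onto direction v."""
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    axis, c = np.cross(u, v), float(np.dot(u, v))
    s2 = float(np.dot(axis, axis))
    if s2 < 1e-12:                                   # already (anti-)parallel
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + K + K @ K * ((1 - c) / s2)

def normalize_face(pts, eye_centers, plane_idx, midline_idx, eye_dist=64.0):
    """Steps S4-1 to S4-4 on an (N, 3) array of facial feature points.
    eye_centers : indices of the left and right eye-center feature points
    plane_idx   : points used for the regression plane (no outline, no nose apex)
    midline_idx : points on the facial mid-line used for the regression line"""
    # S4-1: put the centroid of the eye feature points at the origin.
    pts = pts - pts[eye_centers].mean(axis=0)

    # S4-2: regression plane Z = aX + bY + c; rotate so that its normal
    # (taken with positive Z component, so the rotation stays small) aligns with +Z.
    A = np.c_[pts[plane_idx, 0], pts[plane_idx, 1], np.ones(len(plane_idx))]
    a, b, _ = np.linalg.lstsq(A, pts[plane_idx, 2], rcond=None)[0]
    pts = pts @ _rotation_between(np.array([-a, -b, 1.0]), np.array([0.0, 0.0, 1.0])).T

    # S4-3: regression line Y = mX + k over the mid-line points; rotate about Z
    # so that the line becomes parallel to the Y axis.
    m, _ = np.polyfit(pts[midline_idx, 0], pts[midline_idx, 1], 1)
    phi = np.pi / 2 - np.arctan2(m, 1.0)
    Rz = np.array([[np.cos(phi), -np.sin(phi), 0.0],
                   [np.sin(phi),  np.cos(phi), 0.0],
                   [0.0, 0.0, 1.0]])
    pts = pts @ Rz.T

    # S4-4: scale so that the distance between the eye centers becomes 64 mm.
    return pts * (eye_dist / np.linalg.norm(pts[eye_centers[0]] - pts[eye_centers[1]]))
```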
<Shape homologous modeling processing>
Next, the shape homologous modeling part of the homologous modeling processing (step S1-4) will be described with reference to FIG. 9(a).
First, the learning management unit 210 of the control unit 21 executes fitting processing of the facial feature point coordinates (step S5-1). Specifically, using the mesh point identification information of the generic model stored in the memory, the learning management unit 210 makes the coordinates of each normalized facial feature point coincide with the corresponding facial feature point among the identified mesh points.
Next, the learning management unit 210 of the control unit 21 executes fitting processing to the shape (step S5-2). Specifically, the learning management unit 210 makes the shape of each facial part in the generic model, whose facial feature points have been matched, coincide with the normalized shape of the corresponding facial part.
Next, the learning management unit 210 of the control unit 21 executes triangulation processing (step S5-3). Specifically, for the normalized shape of each facial part matched to the shape of the generic model, the learning management unit 210 calculates the coordinates of the mesh points corresponding to each polygon (triangle) of the generic model, and stores the resulting model (the shape homologous model) in the learning memory.
As shown in FIG. 9(b), using a mesh model in which the mesh is fine around the facial parts such as the eyes, nose, and mouth and coarser in the other regions keeps the total number of mesh points small. The forehead portion is excluded, because the presence of bangs adversely affects the statistical processing.
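The description above does not specify how the generic model is deformed in steps S5-1 and S5-2, so the following sketch assumes a thin-plate-spline (RBF) interpolation of the feature point displacements applied to every vertex of the generic model; it illustrates one possible fitting method, not necessarily the one used in the embodiment.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_homologous_model(generic_vertices, feature_vertex_ids, subject_features):
    """Deform the generic mesh so that its feature vertices land on the
    subject's normalized facial feature points (steps S5-1 / S5-2).
    generic_vertices   : (V, 3) vertices of the generic model
    feature_vertex_ids : indices of the generic vertices corresponding to
                         the 33 facial feature points
    subject_features   : (33, 3) normalized feature coordinates of the subject
    Returns (V, 3) vertices of the shape homologous model; the triangle list
    of the generic model is reused unchanged (step S5-3)."""
    src = generic_vertices[feature_vertex_ids]
    displacement = subject_features - src
    # Interpolate the sparse feature displacements over all vertices.
    warp = RBFInterpolator(src, displacement, kernel="thin_plate_spline")
    return generic_vertices + warp(generic_vertices)
```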
<Texture homologous modeling processing>
Next, the texture homologous modeling processing will be described with reference to FIG. 10(a).
First, the learning management unit 210 of the control unit 21 executes calculation processing of the average coordinates of the vertices of the normalized mesh models (step S6-1). Specifically, using a texture average calculation rule stored in advance, the learning management unit 210 calculates the average coordinates of each mesh point (vertex) from the normalized coordinates of the facial feature points.
Next, the learning management unit 210 of the control unit 21 executes processing for deforming the texture on the two-dimensional polygons in the cylindrical coordinate system into the averaged two-dimensional polygons (step S6-2). Specifically, the learning management unit 210 deforms the texture (pixel information) on the polygons of each two-dimensional face data item in the cylindrical coordinate system to the average coordinates calculated in step S6-1, and stores the texture at the deformed average coordinates in the learning memory.
The learning management unit 210 of the control unit 21 then calculates the texture homologous model by averaging the textures at each set of average coordinates, and stores it in the learning memory.
FIG. 10(b) shows the average face obtained by averaging the textures deformed to the average coordinates of the mesh models.
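A possible implementation of the polygon-wise texture deformation of step S6-2 and the subsequent averaging is sketched below, assuming scikit-image's piecewise affine warp and mesh point coordinates given in (x, y) = (column, row) order; the triangulation implied by the mesh is handled internally by the transform, and the function and argument names are illustrative.

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def texture_homology(images, subject_meshes, mean_mesh):
    """Step S6-2 and the averaging that follows: deform each subject's
    cylindrical-coordinate texture so that its mesh points move onto the
    average mesh points, then average the deformed textures.
    images         : list of (H, W) cylindrical coordinate system images
    subject_meshes : list of (P, 2) 2D mesh point coordinates per subject (x, y)
    mean_mesh      : (P, 2) average mesh point coordinates (step S6-1)"""
    warped = []
    for image, mesh in zip(images, subject_meshes):
        tform = PiecewiseAffineTransform()
        # warp() expects the inverse map: output (average) coordinates are
        # mapped back to the subject's coordinates to sample the texture.
        tform.estimate(mean_mesh, mesh)
        warped.append(warp(image, tform, output_shape=image.shape))
    # Averaging the aligned textures yields the texture homologous model.
    return np.mean(warped, axis=0)
```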
<Generation processing of a normalized cylindrical coordinate system image>
Next, the generation processing of a normalized cylindrical coordinate system image (step S1-5) will be described.
The cylindrical coordinate system images generated in step S2-2 cannot be analyzed as they are, because the positions of the facial parts (eyes, nose, mouth, etc.) differ from one data item to another. The cylindrical coordinate system images are therefore normalized so that the positions of the facial parts are aligned across the data.
Here, the image normalization mesh model used for the normalization will be described. This image normalization mesh model uses the 33 facial feature points and lays out a lattice-like mesh on the cylindrical coordinate system.
Here, as shown in FIG. 11(a), a mesh model with 5584 mesh points and 10862 polygons (triangles) is used.
The learning management unit 210 of the control unit 21 calculates, over all the data, the average of the image normalization mesh models and the average texture of each polygon.
As a result, as shown in FIG. 11(b), an average face is generated from the averaged textures.
When each polygon of an input cylindrical coordinate system image is deformed so as to coincide with the average mesh, the mesh constituting the face coincides with the average face mesh. The cylindrical coordinate system image is thereby normalized.
<Angle learning processing for two-dimensional face images>
Next, the angle learning processing for two-dimensional face images will be described with reference to FIG. 12. This processing generates a model that estimates, from a two-dimensional face image, the angle at which the face was photographed (the face orientation). The angle learning processing is executed using the homologous models of the three-dimensional face data (N items in total) recorded in the aging data storage unit 22 and the snapshot data storage unit 23. For the three-dimensional face data recorded in the aging data storage unit 22, only the homologous models of the post-aging three-dimensional face data are used.
The first learning unit 211 of the control unit 21 repeats the following processing for each predetermined processing target angle.
First, the first learning unit 211 of the control unit 21 executes rotation processing to the specified angle (step S7-1). Specifically, the first learning unit 211 rotates the three-dimensional homologous model to the processing target angle and stores the rotation angle in the learning memory.
Next, the first learning unit 211 of the control unit 21 executes conversion processing from the three-dimensional homologous model to a two-dimensional homologous model (step S7-2). Specifically, the learning management unit 210 generates the two-dimensional homologous model by projecting the rotated three-dimensional homologous model onto the XY plane.
Next, the first learning unit 211 of the control unit 21 executes identification processing of the two-dimensional feature points (step S7-3). Specifically, the first learning unit 211 identifies, in the calculated two-dimensional homologous model, the coordinates corresponding to the facial feature points of the three-dimensional homologous model, and stores the identified facial feature points in the learning memory as two-dimensional feature points.
Next, the first learning unit 211 of the control unit 21 executes exclusion processing of the feature points hidden behind the face (step S7-4). Specifically, among the feature points of the two-dimensional homologous model, the first learning unit 211 distinguishes the facial feature points on the photographing side (viewpoint side) of the three-dimensional homologous model from those on the back side. The first learning unit 211 deletes the facial feature points on the back side from the learning memory and keeps only the two-dimensional feature points on the photographing side.
The above processing is repeated in a loop for each processing target angle.
The principal component analysis unit 214a of the control unit 21 then executes principal component analysis processing using the two-dimensional feature points identified by the above repetition (step S7-5). Specifically, the principal component analysis unit 214a performs a principal component analysis on the rotated two-dimensional feature points of each data item i (i = 1 to N × number of processing target angles). In this case, the two-dimensional feature points are expressed by an average value, principal component scores, and principal component vectors as

  x_i = \bar{x} + \sum_n p_{i,n} v_n

where p_{i,n} is the n-th principal component score of data item i and v_n is the n-th principal component vector.
Next, the machine learning unit 214b of the control unit 21 executes machine learning processing (step S7-6). Specifically, the machine learning unit 214b uses the rotation angle (θ, ω) as the dependent variable (the feature quantity to be predicted), and uses the principal component scores of the two-dimensional feature points of all the data, divided by their standard deviations, as the explanatory variables (the feature quantities used at prediction time). The machine learning unit 214b executes the machine learning processing using these dependent and explanatory variables. The first learning unit 211 records the angle prediction model generated by the machine learning unit 214b in the model storage unit 25.
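A minimal sketch of the angle learning (steps S7-1 to S7-6), assuming scikit-learn's PCA and, as a stand-in for the stepwise regression of FIG. 13, an ordinary linear regression. rotate() and front_facing() are hypothetical helpers, and hidden feature points are simply zeroed here to keep fixed-length vectors, a simplification of the exclusion performed in step S7-4.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def learn_angle_model(homologous_models, feature_ids, target_angles):
    """Sketch of the angle learning (steps S7-1 to S7-6).
    homologous_models : list of (V, 3) 3D homologous meshes
    feature_ids       : vertex indices of the facial feature points
    target_angles     : list of (theta, omega) processing target angles"""
    samples, angles = [], []
    for verts in homologous_models:
        for theta, omega in target_angles:
            rotated = rotate(verts, theta, omega)         # S7-1 (hypothetical helper)
            projected = rotated[:, :2]                    # S7-2: project onto XY plane
            feats = projected[feature_ids]                # S7-3: 2D feature points
            visible = front_facing(rotated, feature_ids)  # S7-4 (hypothetical helper)
            samples.append(np.where(visible[:, None], feats, 0.0).ravel())
            angles.append([theta, omega])

    # S7-5: principal component analysis of the rotated 2D feature points.
    pca = PCA(n_components=0.95)
    scores = pca.fit_transform(np.asarray(samples))
    scores /= scores.std(axis=0)                          # divide by standard deviation

    # S7-6: regress the rotation angles on the normalized PC scores.
    model = LinearRegression().fit(scores, np.asarray(angles))
    return pca, model
```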
<Machine learning processing>
The machine learning processing will now be described with reference to FIGS. 13 and 14. In this machine learning processing, one feature vector y (the prediction target, used as the dependent variables) is predicted from another feature vector x (the feature quantities used at prediction time, used as the explanatory variables). Multiple regression analysis is used to learn the relationship between y and x, and a model that predicts y_{i,j} from x_{s(n),j} is obtained. Specifically, the coefficients a_{i,s(n)} and the intercepts b_i in the following equation are calculated:

  y_{i,j} = \sum_n a_{i,s(n)} x_{s(n),j} + b_i

where s(n) is the index of the n-th selected explanatory variable.
When performing a multiple regression analysis, the selection of the independent variables is important. If the variables x most strongly correlated with y are simply selected in order, variables that are strongly correlated with one another may be selected, and their mutual independence may not be maintained. Therefore, an algorithm is used that selects variables with high validity (t value) and high independence (correlation coefficient not exceeding the maximum correlation coefficient Cmax) while keeping the reliability of the whole prediction model (bic: Bayesian information criterion) high. The processing of this algorithm, generally called the stepwise (variable increase/decrease) method, is described below for this embodiment.
As shown in FIG. 13, the machine learning unit 214b of the control unit 21 first executes initial value setting processing (step S8-1). Specifically, the machine learning unit 214b initializes the minimum value of the Bayesian information criterion (bic_min) stored in memory to an extremely large value and resets the variable set (select_id) to the empty set.
Next, the machine learning unit 214b of the control unit 21 executes setting processing of the current Bayesian information criterion (step S8-2). Specifically, the machine learning unit 214b assigns the minimum value of the Bayesian information criterion (bic_min) to the current minimum value of the Bayesian information criterion (bic_min_pre).
The principal component numbers i to be processed are then identified in turn, and the following processing is repeated for each. Here, i is the dimension number selected for processing. In this repeated processing, it is determined whether the principal component number i being processed is a component that should be added to the variable set (an addition target component).
First, the machine learning unit 214b of the control unit 21 executes determination processing of whether the minimum value of the correlation between principal component number i and the variable set (select_id) is smaller than the maximum correlation coefficient Cmax (step S8-3). Specifically, the machine learning unit 214b of the control unit 21 calculates the correlation coefficients between the principal component number i being processed and the variable set (select_id) stored in memory, and compares the calculated correlation coefficients with the maximum correlation coefficient Cmax.
When the minimum value of the correlation is smaller than the maximum correlation coefficient Cmax (YES in step S8-3), the machine learning unit 214b of the control unit 21 performs a multiple regression analysis using the variable set (select_id) with the addition target component added, and executes calculation processing of the Bayesian information criterion and of the t value of the added variable (step S8-4). Specifically, the machine learning unit 214b of the control unit 21 performs the multiple regression analysis with the variables obtained by adding the principal component number i being processed to the variable set stored in memory, calculates the Bayesian information criterion, and calculates the t value by a known t test.
Next, the machine learning unit 214b of the control unit 21 executes determination processing of whether the minimum value of the Bayesian information criterion and the t value satisfy the conditions (step S8-5). Here, the conditions used are that the stored minimum value of the Bayesian information criterion is larger than the Bayesian information criterion just calculated (that is, the new model improves the criterion), and that the t value is 2 or more.
When the conditions on the minimum value of the Bayesian information criterion and on the t value are satisfied (YES in step S8-5), the machine learning unit 214b of the control unit 21 executes update processing of the minimum value of the Bayesian information criterion and of the principal component number (step S8-6). Specifically, the machine learning unit 214b of the control unit 21 assigns the Bayesian information criterion (bic) to its minimum value (bic_min), and stores the principal component number i being processed as the addition target component (add_id).
On the other hand, when the minimum value of the correlation between the principal component number i and the variable set (select_id) is equal to or larger than the maximum correlation coefficient Cmax (NO in step S8-3), the machine learning unit 214b of the control unit 21 skips the processing of steps S8-4 to S8-6.
Also, when either the minimum value of the Bayesian information criterion or the t value does not satisfy its condition (NO in step S8-5), the machine learning unit 214b of the control unit 21 skips the processing of step S8-6.
When the repeated processing (steps S8-3 to S8-6) has been completed for all principal component numbers i, the machine learning unit 214b of the control unit 21 executes determination processing of whether the minimum value of the Bayesian information criterion has been updated (step S8-7). Specifically, the machine learning unit 214b determines whether the minimum value of the Bayesian information criterion (bic_min) matches the current minimum value (bic_min_pre) set in step S8-2. If they match, the machine learning unit 214b determines that the minimum value of the Bayesian information criterion has not been updated; if they do not match, it determines that the minimum value has been updated.
When the minimum value of the Bayesian information criterion has been updated (YES in step S8-7), the machine learning unit 214b of the control unit 21 executes variable update processing (step S8-8). Details of this variable update processing are described with reference to FIG. 14.
Next, the machine learning unit 214b of the control unit 21 executes determination processing of whether the variable update has succeeded (step S8-9). Specifically, the machine learning unit 214b determines the success of the variable update from the flags (variable update success flag, variable update failure flag) recorded in memory during the variable update processing described below.
When the variable update success flag is recorded in memory and it is determined that the variable update has succeeded (YES in step S8-9), the machine learning unit 214b of the control unit 21 repeats the processing from step S8-2 onward.
On the other hand, when the minimum value of the Bayesian information criterion has not been updated, or when the variable update failure flag is recorded and it is determined that the variable update has not succeeded (NO in step S8-7 or S8-9), the machine learning unit 214b of the control unit 21 ends the machine learning processing.
<Variable update processing>
Next, the variable update processing (step S8-8) will be described with reference to FIG. 14. In this processing, it is determined whether the variable set including the addition target component is valid. If deleting invalid variables yields a valid variable set, this variable set is adopted as the explanatory variables.
First, the machine learning unit 214b executes setting processing of a new variable set (step S9-1). Specifically, the machine learning unit 214b adds the addition target component (add_id) to the variable set stored in memory to generate a new variable set (select_id_new).
Next, the machine learning unit 214b repeats the processing of the following steps S9-2 to S9-4 in a loop.
First, the machine learning unit 214b performs a multiple regression analysis using the new variable set (select_id_new) and executes calculation processing of the Bayesian information criterion (bic) and of the t values of all the variables (step S9-2). Specifically, the machine learning unit 214b calculates the Bayesian information criterion by the multiple regression analysis with the new variable set, and calculates the t values by a known t test.
Next, the machine learning unit 214b executes determination processing of whether the smallest of the t values of the variables in the new variable set is smaller than 2 (step S9-3).
When the smallest t value is smaller than 2 (YES in step S9-3), the machine learning unit 214b executes deletion processing of the variable with the smallest t value from the new variable set (step S9-4). Specifically, the machine learning unit 214b removes from the new variable set the variable for which the smallest t value was calculated.
The processing from step S9-2 onward is then repeated.
On the other hand, when the t values of all the variables in the new variable set are 2 or more, so that the smallest t value is 2 or more (NO in step S9-3), the machine learning unit 214b exits the loop and executes determination processing of whether the Bayesian information criterion is smaller than the current minimum value of the Bayesian information criterion (step S9-5). Specifically, the machine learning unit 214b determines whether the Bayesian information criterion (bic) is smaller than the current minimum value (bic_min_pre) set in step S8-2.
When the Bayesian information criterion is smaller than the current minimum value of the Bayesian information criterion (YES in step S9-5), the machine learning unit 214b executes variable update success processing (step S9-6). Specifically, the machine learning unit 214b assigns the new variable set (select_id_new) to the variable set (select_id) and records the variable update success flag in memory.
On the other hand, when the Bayesian information criterion is equal to or larger than the current minimum value of the Bayesian information criterion (NO in step S9-5), the machine learning unit 214b executes variable update failure processing (step S9-7). Specifically, the machine learning unit 214b records the variable update failure flag in memory.
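The stepwise procedure of FIGS. 13 and 14 could be sketched as follows, using statsmodels OLS to obtain the BIC and t values. The names (select_id, add_id, bic_min, Cmax) follow the description above; the independence check is read here as requiring the candidate's absolute correlation with every already selected variable to stay below Cmax, and remaining details (tie-breaking, empty-set edge cases) are filled in by assumption.

```python
import numpy as np
import statsmodels.api as sm

def stepwise_select(X, y, cmax=0.15, t_min=2.0):
    """Stepwise variable selection (FIGS. 13 and 14).
    X : (samples, candidates) candidate explanatory variables (e.g. PC scores)
    y : (samples,) dependent variable
    Returns the indices of the selected explanatory variables."""
    corr = np.corrcoef(X, rowvar=False)          # candidate-candidate correlations
    select_id, bic_min = [], np.inf              # S8-1

    while True:
        bic_min_pre, add_id = bic_min, None      # S8-2
        for i in range(X.shape[1]):              # loop over candidate components
            if i in select_id:
                continue
            # S8-3: independence check against the already selected variables.
            if select_id and np.abs(corr[i, select_id]).max() >= cmax:
                continue
            # S8-4: regression with the candidate added; BIC and its t value.
            fit = sm.OLS(y, sm.add_constant(X[:, select_id + [i]])).fit()
            if fit.bic < bic_min and abs(fit.tvalues[-1]) >= t_min:   # S8-5
                bic_min, add_id = fit.bic, i                          # S8-6
        if add_id is None:                       # S8-7: no candidate improved the BIC
            return select_id

        # S8-8 / FIG. 14: backward elimination of low-t variables.
        new_set = select_id + [add_id]           # S9-1
        while True:
            fit = sm.OLS(y, sm.add_constant(X[:, new_set])).fit()     # S9-2
            tvals = np.abs(fit.tvalues[1:])      # skip the intercept
            if tvals.min() >= t_min:             # S9-3
                break
            del new_set[int(tvals.argmin())]     # S9-4
        if fit.bic < bic_min_pre:                # S9-5
            select_id = new_set                  # S9-6: update succeeded
        else:
            return select_id                     # S9-7: update failed, stop
```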
<Learning processing from two dimensions to three dimensions>
Next, the first learning processing, which converts a two-dimensional face image into three-dimensional face data, will be described with reference to FIG. 15. Here, learning is performed so that accurate prediction is possible even when an arbitrary number of two-dimensional face images of the same scene are available, or when the two-dimensional face images were photographed from arbitrary angles. In that case the number of combinations becomes enormous and the database capacity becomes a problem, so only combinations specified in advance (for example, front and side views) are learned beforehand; for other combinations, a model is learned and created each time.
First, the principal component analysis unit 214a of the control unit 21 executes principal component analysis processing of the three-dimensional shape in advance, using the three-dimensional meshes (homologous models) (step S10-1). Specifically, the principal component analysis unit 214a performs a principal component analysis on the three-dimensional mesh points of each data item. Each set of three-dimensional mesh points can then be expressed by an average value, principal component scores, and principal component vectors as

  x_i = \bar{x} + \sum_n p_{i,n} v_n

where p_{i,n} is the n-th principal component score of data item i and v_n is the n-th principal component vector.
The first learning unit 211 of the control unit 21 executes rotation processing to the specified angles (step S10-2). Specifically, the first learning unit 211 displays a screen for specifying the rotation angles on the output unit 15; here, for example, the front (0 degrees) and the side (90 degrees) are specified. When the rotation angles have been input, the first learning unit 211 rotates the three-dimensional homologous model according to the input rotation angles.
Next, the first learning unit 211 of the control unit 21 executes generation processing of a two-dimensional homologous model from the three-dimensional homologous model (step S10-3). Specifically, the first learning unit 211 generates the two-dimensional homologous model by projecting the rotated three-dimensional homologous model onto a two-dimensional plane (the XY plane).
Next, the first learning unit 211 of the control unit 21 executes generation processing of a two-dimensional image (step S10-4). Here, a gray-scale two-dimensional homologous model is assumed. Specifically, the first learning unit 211 generates a gray-scale image based on the luminance in each mesh of the generated two-dimensional homologous model.
Next, the principal component analysis unit 214a of the control unit 21 executes principal component analysis processing of the two-dimensional images (step S10-5). Specifically, the principal component analysis unit 214a performs a principal component analysis on the two-dimensional images generated in step S10-4 and expresses them as

  g_i = \bar{g} + \sum_n q_{i,n} w_n

where q_{i,n} is the n-th principal component score of the two-dimensional image of data item i and w_n is the n-th principal component vector.
Next, the first learning unit 211 of the control unit 21 executes identification processing of the two-dimensional feature points (step S10-6). Specifically, as in step S7-3, the first learning unit 211 identifies the coordinates of the facial feature points in the calculated two-dimensional homologous model and stores the identified coordinates in memory.
Next, the first learning unit 211 of the control unit 21 executes exclusion processing of the feature points hidden behind the face (step S10-7). Specifically, as in step S7-4, the first learning unit 211 deletes from memory the facial feature points hidden behind the face.
Next, the principal component analysis unit 214a of the control unit 21 executes principal component analysis processing of the two-dimensional feature points (step S10-8). Specifically, as in step S7-5, the principal component analysis unit 214a executes the principal component analysis processing using the facial feature points stored in memory.
The machine learning unit 214b of the control unit 21 then executes machine learning processing, as in step S7-6 (step S10-9). Specifically, the machine learning unit 214b executes the machine learning processing using the following dependent and explanatory variables: the dependent variables are the principal component scores of the three-dimensional mesh points divided by their standard deviations, and the explanatory variables are the principal component scores of the two-dimensional feature points and of the two-dimensional images of all the data, divided by their standard deviations. The first learning unit 211 records the three-dimensional prediction model generated by the machine learning unit 214b in the model storage unit 25.
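A minimal sketch of the first learning process (steps S10-1 to S10-9), assuming scikit-learn's PCA and an ordinary linear regression standing in for the stepwise method; array layouts and function names are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def learn_2d_to_3d(mesh3d, feats2d, images2d):
    """Sketch of the first learning process (steps S10-1 to S10-9).
    mesh3d   : (N, V*3) flattened 3D homologous mesh points per data item
    feats2d  : (N, F*2) flattened projected 2D feature points per data item
    images2d : (N, H*W) flattened gray 2D images per data item"""
    # S10-1: principal component analysis of the 3D mesh points (to be predicted).
    pca3d = PCA(n_components=0.95)
    y = pca3d.fit_transform(mesh3d)
    y_std = y.std(axis=0)
    y = y / y_std                                     # divide by standard deviation

    # S10-5 / S10-8: principal component analyses of the 2D images and feature points.
    pca_pts, pca_img = PCA(n_components=0.95), PCA(n_components=0.95)
    x = np.hstack([pca_pts.fit_transform(feats2d), pca_img.fit_transform(images2d)])
    x_std = x.std(axis=0)
    x = x / x_std                                     # explanatory variables

    # S10-9: regression from the combined 2D scores to the 3D mesh scores
    # (an ordinary linear regression stands in for the stepwise method here).
    model = LinearRegression().fit(x, y)
    return {"pca3d": pca3d, "pca_pts": pca_pts, "pca_img": pca_img,
            "model": model, "x_std": x_std, "y_std": y_std}
```

At prediction time, the same PCA bases would be applied to a new two-dimensional image and its feature points, the regression applied to the normalized scores, and the predicted scores mapped back to three-dimensional mesh points through pca3d.inverse_transform after multiplying by the stored standard deviations.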
<Prediction verification from two dimensions to three dimensions>
Next, verification processing of the model data used for the two-dimensional to three-dimensional conversion calculated by the first learning will be described with reference to FIGS. 16 and 17.
In FIG. 15, principal component analyses were performed on the 33 two-dimensional feature points (coordinates), on the two-dimensional face images, and on the three-dimensional mesh points, and the number of principal components giving a cumulative contribution ratio of 95% was calculated for each. This number was 29 for the two-dimensional feature points (66 dimensions) and 60 for the three-dimensional mesh points (32223 dimensions); despite the very different dimensionalities, the numbers of principal components are of the same order. Because the two-dimensional face images (810000 dimensions) vary greatly between individuals, the number of principal components giving a cumulative contribution ratio of 95% was larger, at 226.
Regarding the correlation with the two-dimensional feature points, a maximum correlation coefficient of roughly 0.2 or more is maintained for the principal components up to the 100th, but for the 200th and subsequent principal components it falls to values showing almost no correlation. Regarding the correlation with the images, on the other hand, although the correlations tend to be smaller than those with the two-dimensional feature points, a maximum correlation coefficient of about 0.1 is maintained even for the 200th and subsequent principal components.
This can be understood as follows. There are only 33 two-dimensional feature points, so their correlation with the low-numbered principal components (components representing the rough features of the shape) is high, but their correlation with the high-numbered principal components (components representing fine shape) is low. The images, by contrast, have relatively low correlation with the low-numbered principal components but relatively high correlation with the high-numbered ones. In this embodiment, therefore, a feature quantity combining the two-dimensional feature points and the image is used, since it gives higher maximum correlation values. Judging from the number of principal components (explanatory variables) selected at learning time, more principal components are selected when the two-dimensional feature points and the image are combined.
The validity and explanatory power of the obtained prediction model are now described.
The F value is a parameter indicating the validity of a model, and the t value is a parameter indicating the validity of each variable. An F value or t value of 2 or more is regarded as valid, and values of 2 or more were found to be secured for every component.
The coefficient of determination is a parameter indicating the explanatory power of a model; its value indicates the proportion of the prediction target data that the model explains. Specifically, a value of 1 means that everything can be predicted, and a value of 0 means that nothing can be predicted. With the two-dimensional feature points alone, the coefficient of determination was maintained at roughly 50% or more for the principal components up to the 40th, but fell below 20% for the principal components around the 100th. With the two-dimensional feature points and the images, the coefficient of determination was maintained at roughly 50% or more up to the 50th principal component, and exceeded 20% even around the 100th. This shows that the accuracy is improved when the two-dimensional feature points and the images are used, compared with the two-dimensional feature points alone.
Next, in order to verify the validity of the prediction model data for converting from two dimensions to three dimensions, a multiple regression analysis was performed to predict the principal component scores P3 of the first 100 principal components of the three-dimensional mesh points (the number giving a cumulative contribution ratio of 97.5%). In this case, the maximum correlation coefficient Cmax was set to 0.15 as the criterion for variable selection.
Next, the validity verification processing of this prediction model data for converting from two dimensions to three dimensions is executed with reference to FIG. 16. Here, the processing target data items j (j = 1 to n) are identified in turn, and the following processing is repeated for each.
First, the first learning unit 211 of the control unit 21 executes creation processing of a prediction model machine-learned from the data of the remaining (n - 1) persons, excluding the processing target data item j (step S11-1). Specifically, the first learning unit 211 executes the first learning processing described above using the data of the (n - 1) persons to generate a three-dimensional conversion model.
Next, the first learning unit 211 of the control unit 21 executes estimation processing of the three-dimensional mesh points of the processing target data item j using the prediction model (step S11-2). Specifically, the first learning unit 211 takes the processing target data item j as input data and applies the generated three-dimensional conversion model to calculate the three-dimensional mesh points.
Next, the first learning unit 211 of the control unit 21 executes comparison processing between the three-dimensional data of the processing target data item j (the correct answer) and the estimated result (step S11-3). Specifically, the first learning unit 211 compares the three-dimensional face data estimated in step S11-2 with the three-dimensional face data of the processing target data item j, and records the amount of deviation of each point of the three-dimensional mesh in memory. In this case the average prediction error of the principal component scores was less than 0.22; since the variance of the principal component scores is normalized to 1, this shows that the estimation is accurate. It was also found that prediction from the two-dimensional feature points together with the two-dimensional images is more accurate than prediction from the two-dimensional feature points alone.
Furthermore, the average prediction error of the three-dimensional mesh points was 1.546391 mm when only the two-dimensional feature points were used, and 1.477514 mm when the two-dimensional feature points and the two-dimensional images were used. In this case as well, prediction from the two-dimensional feature points together with the two-dimensional images was found to be more accurate than prediction from the two-dimensional feature points alone.
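The leave-one-out verification of FIG. 16 could be sketched as follows; learn and predict stand for the learning and estimation routines (for example, the learn_2d_to_3d sketch above), and the per-vertex Euclidean distance is used here as the deviation recorded in step S11-3.

```python
import numpy as np

def leave_one_out_error(mesh3d, feats2d, images2d, learn, predict):
    """Leave-one-out verification of FIG. 16 (steps S11-1 to S11-3).
    learn(mesh, feats, imgs)    -> prediction model (e.g. learn_2d_to_3d above)
    predict(model, feat, image) -> (V*3,) estimated 3D mesh points"""
    n = len(mesh3d)
    errors = []
    for j in range(n):                                  # leave data item j out
        keep = [k for k in range(n) if k != j]
        model = learn(mesh3d[keep], feats2d[keep], images2d[keep])    # S11-1
        est = predict(model, feats2d[j], images2d[j])                 # S11-2
        # S11-3: deviation of each 3D mesh point from the ground truth.
        diff = (est - mesh3d[j]).reshape(-1, 3)
        errors.append(np.linalg.norm(diff, axis=1).mean())
    return float(np.mean(errors))                       # average error (mm)
```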
FIG. 17(a) is a two-dimensional face image before aging (the input data). FIG. 17(b) is the face image of the same person ten years later (the correct answer).
FIG. 17(c) is the post-aging face image predicted using only the two-dimensional feature points of the face image shown in FIG. 17(a). FIG. 17(d) is the post-aging face image predicted using both the two-dimensional feature points and the image of the face image shown in FIG. 17(a).
It can be confirmed that prediction using the two-dimensional feature points and the two-dimensional image is more accurate than prediction using the two-dimensional feature points alone.
<Learning processing for texture aging>
Next, the second learning processing for texture aging is executed with reference to FIG. 18. In this second learning processing, texture aging processing using principal component analysis and texture aging processing using the wavelet transform are executed. Here, the texture aging processing using principal component analysis is described first, followed by the texture aging processing using the wavelet transform.
<Texture aging process using principal component analysis>

Here, using the normalized cylindrical coordinate system images generated in step S1-5, a model that predicts the change of texture due to aging in the three-dimensional face data is calculated by machine learning.
First, the control unit 21 executes principal component analysis of the normalized cylindrical coordinate system images (step S12-1). Specifically, the second learning unit 212 of the control unit 21 acquires the aging data from the aging data storage unit 22 and the snapshot data from the snapshot data storage unit 23. The principal component analysis unit 214a of the control unit 21 performs principal component analysis on the cylindrical coordinate system images of the acquired aging data and snapshot data. In this case, the principal component analysis unit 214a determines the directions of the principal component vectors using the snapshot data together with the pre-aging (or post-aging) side of the aging data, and calculates the principal component scores using the aging data. Each data item is then expressed by a mean value, principal component scores, and principal component vectors as follows.
Figure JPOXMLDOC01-appb-M000005

Here, "j" is the aging index: "j" = 1 denotes the state after aging and "j" = 0 the state before aging.
Next, the control unit 21 executes machine learning processing in the same manner as in step S7-6 (step S12-2). Specifically, the machine learning unit 214b of the control unit 21 uses the "aging difference vector of the texture principal component scores, normalized per unit of elapsed time" as the dependent variable and the "principal component scores of the pre-aging texture" as the explanatory variables. The second learning unit 212 records the texture aging model generated by the machine learning unit 214b in the model storage unit 25.
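The following sketch illustrates the flow of steps S12-1 and S12-2 under simplifying assumptions: scikit-learn's PCA and LinearRegression stand in for the principal component analysis unit 214a and the variable-selected multiple regression of the machine learning unit 214b, and all arrays are synthetic placeholders rather than real cylindrical coordinate system images.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
tex_before = rng.random((144, 64 * 64))   # flattened textures before aging (placeholder)
tex_after = rng.random((144, 64 * 64))    # the same subjects after aging (placeholder)
years = rng.uniform(5.0, 15.0, size=144)  # elapsed years per pair (placeholder)

# step S12-1: principal component analysis of the texture images
pca = PCA(n_components=35).fit(np.vstack([tex_before, tex_after]))
s_before = pca.transform(tex_before)
s_after = pca.transform(tex_after)

# step S12-2: dependent variable = aging difference of PC scores per unit time,
# explanatory variables = pre-aging PC scores
delta_per_year = (s_after - s_before) / years[:, None]
texture_aging_model = LinearRegression().fit(s_before, delta_per_year)

# applying the model: predicted texture after t years
t = 10.0
s_pred = s_before + t * texture_aging_model.predict(s_before)
tex_pred = pca.inverse_transform(s_pred)
```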
<Prediction verification of texture aging using principal component analysis>

Next, the process of verifying the texture aging conversion based on principal component analysis is described with reference to FIG. 19.
The cumulative contribution ratio of the first 35 principal components calculated in the texture aging using principal component analysis shown in FIG. 18 exceeds 95%, and the contribution ratio of the 25th and subsequent principal components is below 0.1%. The individual principal components are shown in FIG. 20; the lower the contribution ratio of a principal component, the higher its frequency content.
FIG. 21 shows, for two images used to check the contribution of each principal component, the images reconstructed with different upper limits on the number of principal components. The result shows that details such as spots and wrinkles cannot be reproduced unless all principal components are taken into account.
As shown in FIG. 19, the validity of the texture aging prediction model based on this principal component analysis is then verified. The following processing is repeated for each processing target data j (j = 1 to n).
First, the control unit 21 creates a prediction model machine-learned from the data of the remaining [n-1] subjects, excluding data j (step S13-1). Specifically, the second learning unit 212 of the control unit 21 runs the second learning process of steps S12-1 to S12-2 on the [n-1] subjects' data to generate the texture aging conversion model.
Next, the control unit 21 applies the prediction model to the pre-aging data of data j to perform the aging process (step S13-2). Specifically, the second learning unit 212 of the control unit 21 takes the pre-aging data j as input and applies the generated texture aging conversion model to calculate the post-aging data.
Next, the control unit 21 compares the post-aging data (ground truth) of data j with the result of the aging process (step S13-3). Specifically, the second learning unit 212 of the control unit 21 compares the post-aging three-dimensional face data generated in step S13-2 with the post-aging three-dimensional face data of data j stored in the aging data storage unit 22, and calculates the error at each point of the three-dimensional mesh. In this case, the calculated error was found to be approximately 60% or less.
<Texture aging process using the wavelet (WAVELET) transform>

Next, the texture aging process using the wavelet transform is described with reference to FIG. 22. The texture aging process based on principal component analysis estimates aging difference data; however, when spots and wrinkles already exist, aging based on principal component analysis does not deepen them. Therefore, in order to age a face by making use of existing spots and wrinkles, an aging change estimation using the wavelet transform is performed.
Here, as shown in FIG. 22(a), the control unit 21 calculates the increase rate of the wavelet components due to aging (wavelet coefficients Ri) (step S14-1). Specifically, the second learning unit 212 of the control unit 21 extracts the aging data stored in the aging data storage unit 22. The second learning unit 212 calculates all wavelet coefficients Ri of each two-dimensional image of data number j for each wavelet coefficient number i (that is, per pixel). The second learning unit 212 sums the wavelet coefficients Ri (per-pixel values) over the pre-aging images, and likewise sums the wavelet coefficients Ri over the post-aging images. The increase rate of each wavelet coefficient Ri is then computed by dividing the summed coefficient of the post-aging images by the summed coefficient of the pre-aging images. When a computed rate is less than 1, the second learning unit 212 sets it to "1".
In the expression shown in FIG. 22(a), i is the wavelet coefficient number, j is the data number, a denotes the state after aging, and b the state before aging.
As a result, the maximum value of the wavelet coefficients Ri was "5.407101" and their average value was "1.311112". FIG. 22(b) shows an image visualizing the wavelet coefficients Ri: black indicates the minimum value "1", and the whiter a pixel, the larger the coefficient. In FIG. 22(b), lower-frequency components appear toward the upper left. Specifically, the image is obtained by repeatedly applying a horizontal one-dimensional wavelet transform to each row, separating the low-frequency and high-frequency components, and then applying a vertical one-dimensional transform to each column of the transformed signal.
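A minimal sketch of the increase-rate computation in step S14-1 is shown below, assuming the PyWavelets package; the Haar wavelet, the three decomposition levels, the use of absolute coefficient values, and the toy images are illustrative choices rather than values taken from the embodiment.

```python
import numpy as np
import pywt

def wavelet_increase_rate(before_imgs, after_imgs, wavelet="haar", level=3):
    """Per-coefficient rate Ri = sum(after) / sum(before), floored at 1."""
    def coeff_array(img):
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        arr, _ = pywt.coeffs_to_array(coeffs)   # all sub-bands packed into one array
        return np.abs(arr)

    sum_before = sum(coeff_array(img) for img in before_imgs)
    sum_after = sum(coeff_array(img) for img in after_imgs)
    ratio = sum_after / np.maximum(sum_before, 1e-12)
    return np.maximum(ratio, 1.0)               # rates below 1 are set to 1

rng = np.random.default_rng(2)
before = [rng.random((64, 64)) for _ in range(10)]
after = [img + 0.1 * rng.random((64, 64)) for img in before]
Ri = wavelet_increase_rate(before, after)
print("max Ri:", Ri.max(), "mean Ri:", Ri.mean())
```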
<Three-dimensional shape aging learning process>

Next, the third learning process for three-dimensional shape aging is executed with reference to FIG. 23. In this case, using the homologous models generated in the shape homologous modeling process described above, a model that predicts the shape change due to aging in the three-dimensional face images is calculated by machine learning. The maximum correlation coefficient Cmax allowed between selected variables is set to "0.15".
First, the control unit 21 calculates the principal component scores of the three-dimensional meshes (step S15-1). Specifically, the third learning unit 213 of the control unit 21 extracts the aging data stored in the aging data storage unit 22; here, 144 aging data items are extracted. Using the principal component vectors of the three-dimensional mesh generated in the principal component analysis of the three-dimensional mesh points in step S10-1 described above, the third learning unit 213 calculates the three-dimensional mesh principal component scores of the extracted aging data.
Next, the control unit 21 executes machine learning processing in the same manner as in step S7-6 (step S15-2). Specifically, the machine learning unit 214b of the control unit 21 performs machine learning with the "aging difference vector of the three-dimensional mesh principal component scores, normalized per unit of elapsed time" as the dependent variable and the "principal component scores of the pre-aging three-dimensional mesh" as the explanatory variables. The third learning unit 213 records the shape aging model generated by the machine learning unit 214b in the model storage unit 25.
The maximum correlation coefficient between the aging difference vectors calculated in this way and the principal component scores is about "0.3"; since the aging change and the principal component scores show a definite correlation, it is appropriate to use them in regression analysis. The number of selected variables is around 30, which is reasonable compared with the number of aging data items used in the calculation. Furthermore, for every principal component the F value is 10 or more, the t value is 2 or more, and the coefficient of determination is roughly 70% or more, indicating that the model is highly accurate.
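The correlation-limited variable selection and per-component regression of step S15-2 can be sketched as follows; the greedy selection rule is one plausible reading of the Cmax = 0.15 constraint described above, and the score matrices are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def select_low_correlation_vars(X, y, c_max=0.15, max_vars=30):
    """Pick variables most correlated with y whose mutual correlation stays <= c_max."""
    corr_xy = np.abs([np.corrcoef(X[:, i], y)[0, 1] for i in range(X.shape[1])])
    order = np.argsort(-corr_xy)          # strongest correlation with the target first
    chosen = []
    for i in order:
        if all(abs(np.corrcoef(X[:, i], X[:, k])[0, 1]) <= c_max for k in chosen):
            chosen.append(i)
        if len(chosen) >= max_vars:
            break
    return chosen

rng = np.random.default_rng(3)
scores_before = rng.normal(size=(144, 60))           # pre-aging mesh PC scores (placeholder)
delta_per_year = 0.05 * rng.normal(size=(144, 60))   # per-year score differences (placeholder)

# one regression per dependent principal component, as in the per-component models above
first_component = delta_per_year[:, 0]
vars_used = select_low_correlation_vars(scores_before, first_component)
shape_aging_model = LinearRegression().fit(scores_before[:, vars_used], first_component)
```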
<Aging prediction process>

Next, the aging prediction process is described with reference to FIG. 24.
First, the control unit 21 executes feature point extraction (step S16-1). Specifically, the aging management unit 215 of the control unit 21 extracts face feature points from the processing target two-dimensional face image data in the same manner as in step S1-2.
Next, the control unit 21 executes face orientation estimation (step S16-2). Specifically, the first processing unit 216 of the control unit 21 uses the angle prediction model stored in the model storage unit 25 to identify, from the coordinates of the extracted face feature points, the direction in which the face was photographed, and converts the two-dimensional face image to a frontal view.
Next, the control unit 21 executes three-dimensional meshing (step S16-3). Specifically, the aging management unit 215 of the control unit 21 uses the three-dimensional prediction model stored in the model storage unit 25 to generate a three-dimensional mesh for the frontal two-dimensional face image.
The control unit 21 then generates a normalized cylindrical coordinate system image (step S16-4). Specifically, the aging management unit 215 of the control unit 21 creates a two-dimensional mesh of the processing target two-dimensional face image using the prediction model calculated in step S16-3, and creates the texture by affine-transforming the image inside each polygon onto the corresponding polygon in cylindrical coordinates. Depending on the face orientation in the processing target image, the image information inside some polygons may be missing. In that case, the aging management unit 215 assumes that the face is left-right symmetric and, taking the facial midline as the axis of symmetry, completes each polygon with missing image information using the image information of the polygon on the opposite side.
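The left-right completion described above can be sketched as follows; marking the missing pixels with NaN and working directly on the cylindrical texture image are simplifying assumptions made for illustration.

```python
import numpy as np

def complete_by_symmetry(texture):
    """Fill missing (NaN) pixels from the mirrored position about the facial midline."""
    mirrored = texture[:, ::-1]          # reflect about the vertical midline
    missing = np.isnan(texture)
    filled = texture.copy()
    filled[missing] = mirrored[missing]
    return filled

tex = np.full((4, 6), 1.0)
tex[:, 4:] = np.nan                      # the right side was not visible in the photograph
print(complete_by_symmetry(tex))
```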
Next, the control unit 21 executes the aging process of the three-dimensional mesh (step S16-5). Specifically, the third processing unit 218 of the control unit 21 inputs the three-dimensional mesh generated in step S16-3 into the shape aging model stored in the model storage unit 25 and ages the three-dimensional mesh. In this case, the third processing unit 218 applies aging only to the region outside the shape prediction mask region.
Here, the white areas shown in FIG. 26(a) are used as the shape prediction mask region. These areas include the cheeks, the bridge of the nose, and so on, where the shape changes little with aging.
Next, the control unit 21 executes the texture aging process (step S16-6). The details of this process are described later with reference to FIG. 25.
Next, the control unit 21 generates the aged three-dimensional face image (step S16-7). Specifically, the third processing unit 218 of the control unit 21 combines the aged results generated in steps S16-5 and S16-6 to produce an image whose shape and texture are both aged.
FIG. 27(a) shows face image data of a 30-year-old subject. FIGS. 27(b) and 27(c) show the images calculated by the control unit 21 using this face image data as input and predicting aging 10 and 15 years later. Although no two-dimensional to three-dimensional conversion is performed here, the results are plausibly aged.
Next, the control unit 21 generates the aged two-dimensional face image (step S16-8). Specifically, the aging management unit 215 of the control unit 21 rotates the generated aged three-dimensional face image so that it matches the face orientation identified in step S16-2 and generates the corresponding two-dimensional face image. The aging management unit 215 displays the generated two-dimensional face image on the display.
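The overall ordering of steps S16-1 to S16-8 can be summarized by the following skeleton. Every helper here is a trivial placeholder, included only so the sequence is executable; in the embodiment these steps are carried out by the stored angle prediction model, three-dimensional prediction model, shape aging model, and texture aging model.

```python
import numpy as np

def extract_feature_points(img):        return np.zeros((30, 2))     # S16-1 (placeholder)
def estimate_face_direction(points):    return 0.0                   # S16-2 (placeholder)
def to_3d_mesh(img, points):            return np.zeros((100, 3))    # S16-3 (placeholder)
def to_cylindrical_texture(img, mesh):  return np.zeros((64, 64))    # S16-4 (placeholder)
def age_shape(mesh, years):             return mesh                  # S16-5 (placeholder)
def age_texture(texture, years):        return texture               # S16-6 (placeholder)

def predict_aging(img, years):
    points = extract_feature_points(img)
    angle = estimate_face_direction(points)
    mesh = to_3d_mesh(img, points)
    texture = to_cylindrical_texture(img, mesh)
    aged_mesh = age_shape(mesh, years)
    aged_texture = age_texture(texture, years)
    aged_face_3d = (aged_mesh, aged_texture)   # S16-7: combine aged shape and texture
    return aged_face_3d, angle                 # S16-8: render back at the original angle

face_model, angle = predict_aging(np.zeros((128, 128)), years=10)
```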
<Texture aging process>

Next, the texture aging process described above (step S16-6) is explained with reference to FIGS. 25 to 27.
First, the second processing unit 217 of the control unit 21 executes a wavelet transform (step S17-1). Specifically, the second processing unit 217 calculates the wavelet coefficients R1i obtained by wavelet-transforming the image Ii aged using principal component analysis.
The following processing is repeated for each wavelet coefficient number i.
First, the second processing unit 217 of the control unit 21 compares the absolute values of the two wavelet coefficients (step S17-2). Here, the absolute value of the wavelet coefficient R1i based on principal component analysis is compared with the absolute value of the wavelet coefficient R2i calculated by the texture aging process using the wavelet transform, multiplied by the weighting factor w; it is determined whether the absolute value of R1i is larger than the absolute value of R2i multiplied by w.
If the absolute value of the wavelet coefficient R1i is larger than the absolute value of the wavelet coefficient R2i multiplied by the weighting factor w ("YES" in step S17-2), the second processing unit 217 of the control unit 21 assigns the wavelet coefficient R1i to the wavelet coefficient R3i to be used (step S17-3).
Conversely, if the absolute value of the wavelet coefficient R1i is less than or equal to the absolute value of the wavelet coefficient R2i multiplied by the weighting factor w ("NO" in step S17-2), the second processing unit 217 of the control unit 21 assigns the wavelet coefficient R2i to the wavelet coefficient R3i to be used (step S17-4).
The above processing is repeated over all wavelet coefficient numbers i.
The second processing unit 217 of the control unit 21 then applies the inverse wavelet transform to the resulting wavelet coefficients R3 (step S17-5).
Next, the second processing unit 217 of the control unit 21 applies the mask region (step S17-6). Specifically, the second processing unit 217 applies the aging change only to the region outside the texture prediction mask region.
Here, the white areas shown in FIG. 26(b) are used as the texture prediction mask region. These areas include the eyes, nose, mouth, and so on, where the texture changes little with aging.
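Steps S17-1 to S17-5 can be sketched as below, assuming PyWavelets. R1 is obtained from the PCA-aged image; for brevity, R2 is formed here by multiplying R1 by the increase rates Ri from step S14-1, which is a simplification, and the weighting factor w = 0.7 and the Haar wavelet are illustrative values only.

```python
import numpy as np
import pywt

def combine_aged_textures(img_pca_aged, Ri, w=0.7, wavelet="haar", level=3):
    coeffs = pywt.wavedec2(img_pca_aged, wavelet, level=level)
    R1, slices = pywt.coeffs_to_array(coeffs)             # S17-1: wavelet transform
    R2 = R1 * Ri                                           # wavelet-model coefficients (simplified)
    R3 = np.where(np.abs(R1) > w * np.abs(R2), R1, R2)     # S17-2 to S17-4: per-coefficient choice
    chosen = pywt.array_to_coeffs(R3, slices, output_format="wavedec2")
    return pywt.waverec2(chosen, wavelet)                  # S17-5: inverse wavelet transform

rng = np.random.default_rng(4)
pca_aged = rng.random((64, 64))
Ri = 1.0 + 0.3 * rng.random((64, 64))                      # increase rates from step S14-1
aged_texture = combine_aged_textures(pca_aged, Ri)
```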
Through the above processing, an image with an aged texture is generated.
Accordingly, the present embodiment provides the following effects.
(1) The control unit 21 of this embodiment includes the aging management unit 215, which generates the post-aging face image, the second processing unit 217, which predicts the change of the face image texture due to aging, and the third processing unit 218, which predicts the change of the face shape due to aging. Because the aging prediction takes into account both the aging of the face shape and the aging of the face texture, an aged face image can be generated more accurately.
(2) The learning management unit 210 of the aging prediction system 20 of this embodiment performs face feature point extraction, face feature point normalization, and homologous modeling using the data recorded in the aging data storage unit 22 and the snapshot data storage unit 23 (steps S1-2 to S1-4). This makes it possible to generate the texture aging model and the shape aging model from multiple real samples placed into anatomical correspondence.
(3) The control unit 21 of this embodiment includes the first processing unit 216, which generates three-dimensional face data from the processing target two-dimensional face image. As a result, even when the processing target two-dimensional face image is not frontal, an aged face image can be generated at a specified face angle.
(4) The first processing unit 216 of the aging prediction system 20 of this embodiment performs the angle learning of two-dimensional face images using the three-dimensional mesh (homologous model). This makes it possible to generate, from real sample data, a three-dimensional prediction model that converts two-dimensional images into three-dimensional data.
(5) The second processing unit 217 of the aging prediction system 20 of this embodiment performs texture aging using both a texture aging model based on principal component analysis and a texture aging model based on the wavelet (WAVELET) transform. By using the wavelet transform, which ages the face by exploiting already existing spots and wrinkles, an aged face image can be generated even more accurately.
(6) The second processing unit 217 of the aging prediction system 20 of this embodiment stores the weighting factor w, the value that determines the relative emphasis of the model based on principal component analysis and the model based on the wavelet transform. By changing this weighting factor w, the weighting between the texture aging model based on principal component analysis and the texture aging model based on the wavelet transform can be adjusted.
The above embodiment may also be modified as follows.
- The control unit 21 of the above embodiment generates the texture aging model and the shape aging model using the aging data stored in the aging data storage unit 22 and the snapshot data stored in the snapshot data storage unit 23. The texture aging model and the shape aging model may instead be generated separately for each attribute of the aging data and snapshot data (for example, sex or age group). In this case, the control unit 21 generates the normalized cylindrical coordinate system images D5, the normalized face feature point data D6, and the three-dimensional mesh data D7 (homologous models) from aging data and snapshot data of the same attribute, generates attribute-specific texture aging models and shape aging models from them, and stores the models in the model storage unit 25 in association with each attribute. In the aging prediction process, the control unit 21 then acquires, together with the processing target two-dimensional face image data, the attribute information of the face contained in the image, and performs the aging prediction with the texture aging model and shape aging model of the corresponding attribute. This allows more accurate face image data to be generated by taking into account the influence of texture and shape according to attributes such as sex and age group.
- In the texture aging process, the control unit 21 of the above embodiment selects between the wavelet coefficients R1i, obtained by wavelet-transforming the image Ii aged using principal component analysis, and the wavelet coefficients R2i of the texture aging model based on the wavelet transform. The texture aging process is not limited to choosing one of these two coefficients; a statistic of the two wavelet coefficients R1i and R2i (for example, their average, or a weighted combination whose ratio depends on the attributes) may be used instead.
- The control unit 21 of the above embodiment uses the wavelet transform in the texture aging process, but the aging process is not limited to the wavelet transform. Any method that deepens (emphasizes) the spots and wrinkles of the face texture can be used; for example, the face texture may be aged using a known sharpening filter or the Fourier transform.
- The control unit 21 of the above embodiment generates the aging models using principal component analysis. The analysis used to generate the aging models is not limited to principal component analysis; any analysis that identifies variables expressing individual differences can be used. For example, the aging models may be generated using independent component analysis (ICA) or multidimensional scaling (MDS).
- In the above embodiment, the control unit 21 converts the two-dimensional face image to a frontal view using the angle prediction model in the face orientation estimation of the aging prediction process (step S16-2). This face orientation estimation (step S16-2) is not limited to a method using the angle prediction model; for example, the direction in which the face was photographed may be identified using a known Procrustes analysis.
- In the above embodiment, the control unit 21 performs machine learning in the first learning process for converting two-dimensional face images into three-dimensional face data, the second learning process for texture aging, and the third learning process for three-dimensional shape aging. In these cases, the machine learning unit 214b of the control unit 21 uses multiple regression analysis to relate the dependent variables (the features to be predicted) to the explanatory variables (the features used at prediction time). The machine learning performed by the machine learning unit 214b is not limited to multiple regression analysis, and other analysis methods may be used, for example a coupling learning process, a learning process based on PLS regression (Partial Least Squares Regression), or a learning process based on support vector regression (SVR).
The coupling learning process is described here.
In this process, the machine learning unit 214b concatenates the "features to be predicted" and the "features used at prediction time" of each sample into a single row vector (one-dimensional vector). For example, in the first learning process, the principal component coefficients of the "rotation angles (θ, ω)" are used as the "features to be predicted", and the principal component coefficients of "the two-dimensional feature point principal component scores of all data divided by their standard deviation" are used as the "features used at prediction time".
The machine learning unit 214b arranges the generated one-dimensional vectors, one per sample, in the column direction to form a data matrix of coupled patch vectors, and performs principal component analysis on this data matrix to obtain a principal component vector matrix. In this matrix, the row vectors are ordered starting from the principal components along which the "features to be predicted" and the "features used at prediction time" vary most strongly.
Next, the machine learning unit 214b orthogonalizes the principal component vector matrix. Here, the "features used at prediction time" part of the matrix is orthogonalized by the Gram-Schmidt method, and the "features to be predicted" part is transformed by applying the same orthogonalization coefficients.
The machine learning unit 214b records in the model storage unit 25 the prediction model formed by the orthogonalized "features used at prediction time" (matrix Di,j) and the correspondingly transformed "features to be predicted" (matrix Ei,j).
In the prediction process, the control unit 21 takes the inner product of the input data Xj and the matrix Di,j stored in the model storage unit 25 to compute the coefficients bi representing the weights of the principal components, and then reconstructs the prediction vector Yj from these coefficients bi and the matrix Ei,j. In this way, the control unit 21 can calculate the prediction data Yj from the input data Xj.
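Under simplifying assumptions (no normalization by standard deviation, synthetic data), the coupling learning and prediction described above reduce to the following sketch: a joint principal component analysis of the coupled vectors, Gram-Schmidt orthogonalization of the predictor parts with the same row operations applied to the target parts, and reconstruction of the prediction from the projection weights.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(50, 12))                            # features used at prediction time
Y = 0.5 * X[:, :4] + 0.1 * rng.normal(size=(50, 4))      # features to be predicted

joined = np.hstack([Y, X])                               # one coupled row vector per sample
joined = joined - joined.mean(axis=0)
_, _, Vt = np.linalg.svd(joined, full_matrices=False)    # principal component vectors (rows)
E = Vt[:, :Y.shape[1]].copy()                            # "to be predicted" part of each component
D = Vt[:, Y.shape[1]:].copy()                            # "used at prediction" part of each component

# Gram-Schmidt on D, carrying the identical row operations over to E
for i in range(D.shape[0]):
    for k in range(i):
        c = D[i] @ D[k]
        D[i] -= c * D[k]
        E[i] -= c * E[k]
    norm = np.linalg.norm(D[i])
    if norm > 1e-12:
        D[i] /= norm
        E[i] /= norm

def predict(x_new):
    b = D @ (x_new - X.mean(axis=0))                     # weights of the principal components
    return Y.mean(axis=0) + b @ E                        # reconstructed prediction vector

print(predict(X[0]))
```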
By using this coupling learning process, prediction data can be calculated that takes the overall balance into account while preferentially using the components that vary most strongly and have the largest influence.
Next, the learning process based on PLS regression is described.
PLS regression uses the covariance w_i between the independent variables (the features to be predicted) and the explanatory variables (the features used at prediction time), adds variables to the regression in descending order of correlation, and performs multiple regression to obtain a regression coefficient matrix. Specifically, the following steps [1] to [4] are repeated until the cross-validation error is minimized. Here, the cross-validation error is obtained by splitting the sample data into a prediction target part and an input part, predicting the target part from the input part, and measuring the error between the prediction and the target.
[1] The machine learning unit 214b calculates the covariance matrix (correlation matrix) W_i of the independent variables (the features to be predicted) and the explanatory variables (the features used at prediction time). The covariance matrix W_i of the independent and explanatory variables is given by:
W_i = X_i^T Y_i / ||X_i^T Y_i||

where T denotes the transpose.
[2] Next, the machine learning unit 214b projects the independent variables X_i onto the space of the covariance w_i to calculate the score matrix t_i.
[3] Next, the machine learning unit 214b updates the explanatory variables. Specifically, the control unit 21 calculates the regression coefficient matrix C that predicts the explanatory variables Y_{i+1} from the score matrix t_i, removes the information already used in the regression, and determines the remaining explanatory variables. In this step, the control unit 21 uses c_i = Y_i^T t_i (t_i^T t_i)^{-1} and Y_{i+1} = Y_i - t_i c_i^T.
[4] Next, the machine learning unit 214b updates the independent variables. In the same way as for the explanatory variables, the machine learning unit 214b calculates the regression coefficient matrix that predicts the independent variables from the score matrix, removes the information already used in the regression, and computes the remaining independent variables. In this step, the control unit 21 uses p_i = X_i^T t_i (t_i^T t_i)^{-1} and X_{i+1} = X_i - t_i p_i^T.
[5] The machine learning unit 214b determines whether the cross-validation error has reached its minimum. Specifically, the machine learning unit 214b first treats part of the learning data (for example, one quarter of the whole) as the prediction target, uses the remaining data as input, and calculates the error against the prediction target using the explanatory variables Y_{i+1} obtained in step [3] and the independent variables X_{i+1} obtained in step [4].
If the calculated error is less than or equal to the error against the prediction target in the previous iteration (Y_i, X_i), the control unit 21 determines that the cross-validation error has not yet reached its minimum. In that case, the machine learning unit 214b sets Y_i = Y_{i+1} and X_i = X_{i+1} and repeats the processing from step [1].
If, on the other hand, the calculated error becomes larger than the error against the prediction target in the previous iteration (Y_i, X_i), the machine learning unit 214b arranges the covariances w_i calculated up to the previous iteration of step [1] side by side to form the covariance matrix W. The control unit 21 likewise arranges the regression coefficients c_i calculated in step [3] to form the matrix C, and the regression coefficients p_i calculated in step [4] to form the matrix P. The control unit 21 then calculates the regression coefficient matrix B as B = W (P^T W)^{-1} C^T and records the prediction model generated with this regression coefficient matrix B in the model storage unit 25.
In the prediction process, the control unit 21 performs the prediction using the input data Xj and the recorded regression coefficient matrix B.
By using PLS regression analysis, more appropriate variables can be selected than with multiple regression analysis, so that a more reliable prediction can be made.
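A library-based stand-in for this procedure is shown below: scikit-learn's PLSRegression implements the same kind of NIPALS-style iteration, and the number of latent components plays the role of the stopping point chosen by the cross-validation error above. The data and the range of component counts are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 30))                                   # features used at prediction time
Y = X[:, :5] @ rng.normal(size=(5, 8)) + 0.1 * rng.normal(size=(100, 8))

best_n, best_score = 1, -np.inf
for n in range(1, 11):                                           # choose components by cross-validation
    score = cross_val_score(PLSRegression(n_components=n), X, Y, cv=4).mean()
    if score > best_score:
        best_n, best_score = n, score

pls = PLSRegression(n_components=best_n).fit(X, Y)               # regression coefficient matrix held inside
Y_pred = pls.predict(X[:1])
```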
Next, the learning process based on support vector regression is described. Support vector regression is a non-linear analysis that calculates a regression curve. Specifically, the regression curve is calculated so that the sample data fall within a band (tube) of the regression curve ± a given distance w. Data lying outside the tube are treated as penalty data ξ, and the curve and the distance w that minimize the following expression are calculated.
Figure JPOXMLDOC01-appb-M000006

Here, the adjustment constant C is a parameter that controls the tolerance for outliers; the larger C is, the smaller the tolerance. ξ+i is "0" if data item i lies inside the tube and is otherwise set to its distance above the tube; ξ-i is "0" if data item i lies inside the tube and is otherwise set to its distance below the tube.
By using a learning process based on this support vector regression, non-linear (curved) regression can be applied, so the form of the function can be chosen more freely than with the linear regression of the multiple regression analysis or PLS regression analysis of the above embodiment.
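The tube formulation above corresponds to epsilon-insensitive support vector regression; the sketch below uses scikit-learn's SVR, where C is the outlier penalty (the adjustment constant C above) and epsilon is the half-width of the tube. The kernel, the parameter values, and the data are illustrative, and a single scalar-valued SVR is fitted because the standard formulation predicts one output at a time.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 10))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)       # a non-linear target

model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
inside_tube = np.abs(model.predict(X) - y) <= 0.1       # samples inside the epsilon tube
print("fraction inside the tube:", inside_tube.mean())
```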

Claims (7)

1. An aging prediction system comprising:
   a model storage unit that stores a shape aging model that predicts changes in face shape due to aging, a texture aging model that predicts changes in the texture of the face surface due to aging, and a three-dimensional prediction model that predicts three-dimensional data from a two-dimensional image; and
   a control unit configured to be connected to an input unit and an output unit and to predict aging, wherein
   the control unit is configured to execute a prediction process of:
   acquiring a prediction target image from the input unit;
   extracting feature points of the prediction target image;
   estimating a face orientation in the prediction target image using the extracted feature points;
   generating first three-dimensional data based on the three-dimensional prediction model and the estimated face orientation;
   generating second three-dimensional data from the first three-dimensional data using the shape aging model;
   applying the texture aging model to a two-dimensional image generated based on the first three-dimensional data to generate an aged texture;
   combining the aged texture with the second three-dimensional data to generate an aged face model; and
   outputting the generated aged face model to the output unit.
2. The aging prediction system according to claim 1, wherein the control unit is further configured to:
   acquire, from the input unit, a face orientation angle specified for output;
   generate a two-dimensional face image at the face orientation angle using the generated aged face model; and
   output the generated two-dimensional face image to the output unit.
3. The aging prediction system according to claim 1 or 2, wherein the two-dimensional image is a first two-dimensional image, and the control unit is further configured to execute a learning process of:
   generating a second two-dimensional image based on acquired three-dimensional face sample data;
   identifying feature points in the second two-dimensional image;
   generating normalized sample data by normalizing the three-dimensional face sample data using the feature points;
   generating the shape aging model and the texture aging model using the normalized sample data; and
   storing the generated shape aging model and the generated texture aging model in the model storage unit.
4. The aging prediction system according to claim 3, wherein the learning process includes generating the three-dimensional prediction model using the normalized sample data and storing the generated three-dimensional prediction model in the model storage unit.
5. The aging prediction system according to claim 3 or 4, wherein the model storage unit stores a first texture aging model calculated using principal component analysis and a second texture aging model calculated using a wavelet transform, and
   the control unit is further configured to determine which texture aging model to apply according to the result of comparing first wavelet coefficients, obtained by wavelet-transforming an image produced by applying the first texture aging model to the first two-dimensional image, with second wavelet coefficients obtained by applying the second texture aging model to the first two-dimensional image.
6. A method for predicting aging using an aging prediction system that comprises a model storage unit storing a shape aging model that predicts changes in face shape due to aging, a texture aging model that predicts changes in the texture of the face surface due to aging, and a three-dimensional prediction model that predicts three-dimensional data from a two-dimensional image, and a control unit configured to be connected to an input unit and an output unit, the method comprising executing, by the control unit, a prediction process of:
   acquiring a prediction target image from the input unit;
   extracting feature points of the prediction target image;
   estimating a face orientation in the prediction target image using the extracted feature points;
   generating first three-dimensional data based on the three-dimensional prediction model and the estimated face orientation;
   generating second three-dimensional data from the first three-dimensional data using the shape aging model;
   applying the texture aging model to a two-dimensional image generated based on the first three-dimensional data to generate an aged texture;
   combining the aged texture with the second three-dimensional data to generate an aged face model; and
   outputting the generated aged face model to the output unit.
7. A non-transitory computer-readable storage medium storing a program for predicting aging using an aging prediction system that comprises a model storage unit storing a shape aging model that predicts changes in face shape due to aging, a texture aging model that predicts changes in the texture of the face surface due to aging, and a three-dimensional prediction model that predicts three-dimensional data from a two-dimensional image, and a control unit configured to be connected to an input unit and an output unit, wherein, when the program is executed by the aging prediction system, the control unit executes a prediction process of:
   acquiring a prediction target image from the input unit;
   extracting feature points of the prediction target image;
   estimating a face orientation in the prediction target image using the extracted feature points;
   generating first three-dimensional data based on the three-dimensional prediction model and the estimated face orientation;
   generating second three-dimensional data from the first three-dimensional data using the shape aging model;
   applying the texture aging model to a two-dimensional image generated based on the first three-dimensional data to generate an aged texture;
   combining the aged texture with the second three-dimensional data to generate an aged face model; and
   outputting the generated aged face model to the output unit.
PCT/JP2016/063227 2015-07-09 2016-04-27 Aging prediction system, aging prediction method, and aging prediction program WO2017006615A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680016809.0A CN107408290A (en) Aging prediction system, aging prediction method, and aging prediction program
KR1020177025018A KR101968437B1 (en) Aging prediction system, aging prediction method, and aging prediction program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015-137942 2015-07-09
JP2015137942A JP5950486B1 (en) 2015-04-01 2015-07-09 Aging prediction system, aging prediction method, and aging prediction program

Publications (1)

Publication Number Publication Date
WO2017006615A1 true WO2017006615A1 (en) 2017-01-12

Family

ID=57709415

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/063227 WO2017006615A1 (en) 2015-07-09 2016-04-27 Aging prediction system, aging prediction method, and aging prediction program

Country Status (3)

Country Link
KR (1) KR101968437B1 (en)
CN (1) CN107408290A (en)
WO (1) WO2017006615A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230431B (en) * 2018-01-24 2022-07-12 深圳市云之梦科技有限公司 Human body action animation generation method and system of two-dimensional virtual image
CN109886248B (en) * 2019-03-08 2023-06-23 南方科技大学 Image generation method and device, storage medium and electronic equipment
JP6751540B1 (en) * 2019-05-31 2020-09-09 みずほ情報総研株式会社 Shape prediction system, shape prediction method and shape prediction program
KR102668161B1 (en) * 2020-02-26 2024-05-21 소울 머신스 리미티드 Facial mesh deformation with fine wrinkles


Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4341135B2 (en) * 2000-03-10 2009-10-07 コニカミノルタホールディングス株式会社 Object recognition device
JP2001273495A (en) * 2000-03-24 2001-10-05 Minolta Co Ltd Object recognizing device
KR20010044586A (en) * 2001-03-09 2001-06-05 조양일 System for providing an auto facial animation and method therefor
US8391639B2 (en) * 2007-07-23 2013-03-05 The Procter & Gamble Company Method and apparatus for realistic simulation of wrinkle aging and de-aging
CN101952853B (en) * 2008-01-16 2013-05-15 旭化成株式会社 Face posture estimating device, face posture estimating method
JP4518157B2 (en) * 2008-01-31 2010-08-04 カシオ計算機株式会社 Imaging apparatus and program thereof
JP4569670B2 (en) * 2008-06-11 2010-10-27 ソニー株式会社 Image processing apparatus, image processing method, and program
JP5428886B2 (en) * 2010-01-19 2014-02-26 ソニー株式会社 Information processing apparatus, information processing method, and program thereof
JP2012048649A (en) * 2010-08-30 2012-03-08 Citizen Holdings Co Ltd Body shape change prediction apparatus
JP5795979B2 (en) * 2012-03-15 2015-10-14 株式会社東芝 Person image processing apparatus and person image processing method
JP2015011712A (en) * 2013-06-28 2015-01-19 アザパ アールアンドディー アメリカズ インク Digital information gathering and analyzing method and apparatus
KR20140004604A (en) * 2013-10-02 2014-01-13 인텔렉추얼디스커버리 주식회사 Apparatus and method for generating 3 dimension face

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04199474A (en) * 1990-11-29 1992-07-20 Matsushita Electric Ind Co Ltd Face image synthesis device
JP2007265396A (en) * 2006-03-29 2007-10-11 Mitsubishi Electric Research Laboratories Inc Method and system for generating face model
JP2013089032A (en) * 2011-10-18 2013-05-13 Kao Corp Face impression determination chart
JP2014137719A (en) * 2013-01-17 2014-07-28 Ntt Communications Corp Feature point output device, feature point output program, feature point output method, search device, search program, and search method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
K. SCHERBAUM ET AL.: "Prediction of Individual Non-linear Aging Trajectories of Faces", COMPUTER GRAPHICS FORUM, vol. 26, no. 3, 3 September 2007 (2007-09-03), pages 285 - 294, XP055345892 *
SHIGEO MORISHIMA: "Pattern Analysis and Synthesis of Human Face for Entertainment Applications", JOURNAL OF THE SOCIETY OF INSTRUMENT AND CONTROL ENGINEERS, vol. 53, no. 7, 10 July 2014 (2014-07-10), pages 593 - 598 *
TAKESHI NAGATA ET AL.: "Advanced Image Processing Solutions for the Biology/Medical and Materials/Equipment Fields (Seibutsu·Iryo, Zairyo·Kiki Bun'ya Muke 'Kodo Gazo Shori Solution')", IMAGE LAB, vol. 25, no. 5, 10 May 2014 (2014-05-10), pages 48 - 51 *
YUSUKE TAZOE ET AL.: "Aging Simulation of Personal Face Based on Conversion of 3D Geometry and Texture", IEICE TECHNICAL REPORT, vol. 109, no. 471, 8 March 2010 (2010-03-08), pages 151 - 156 *
YUSUKE TAZOE ET AL.: "Keijo Henkei to Patch Tiling ni Motozuku Texture Henkan ni yoru Nenrei Henka Kao Simulation", VISUAL COMPUTING/ GRAPHICS TO CAD GODO SYMPOSIUM 2012, 23 June 2012 (2012-06-23), pages 1 - 8 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11354844B2 (en) 2018-10-26 2022-06-07 Soul Machines Limited Digital character blending and generation system and method
CN110399813A (en) * 2019-07-10 2019-11-01 深兰科技(上海)有限公司 Age recognition method, device, electronic equipment and storage medium
CN112287852A (en) * 2020-11-02 2021-01-29 腾讯科技(深圳)有限公司 Face image processing method, display method, device and equipment
WO2022089166A1 (en) * 2020-11-02 2022-05-05 腾讯科技(深圳)有限公司 Facial image processing method and apparatus, facial image display method and apparatus, and device
CN112287852B (en) * 2020-11-02 2023-11-21 腾讯科技(深圳)有限公司 Face image processing method, face image display method, face image processing device and face image display equipment

Also Published As

Publication number Publication date
CN107408290A (en) 2017-11-28
KR20170115591A (en) 2017-10-17
KR101968437B1 (en) 2019-04-11

Similar Documents

Publication Publication Date Title
JP5950486B1 (en) Aging prediction system, aging prediction method, and aging prediction program
WO2017006615A1 (en) Aging prediction system, aging prediction method, and aging prediction program
CN110443885B (en) Three-dimensional human head and face model reconstruction method based on random human face image
Rohmer et al. Animation wrinkling: augmenting coarse cloth simulations with realistic-looking wrinkles
CN109002763B (en) Method and device for simulating human face aging based on homologous continuity
JP6207210B2 (en) Information processing apparatus and method
KR102010161B1 (en) System, method, and program for predicting information
KR20130003170A (en) Method and apparatus for expressing rigid area based on expression control points
JP2018503470A (en) System and method for adding surface details to a digital crown model formed using statistical techniques
WO2013078404A1 (en) Perceptual rating of digital image retouching
JP5751865B2 (en) Face image processing device
CN111091624A (en) Method for generating high-precision drivable human face three-dimensional model from single picture
Dupej et al. Statistical Mesh Shape Analysis with Nonlandmark Nonrigid Registration.
JP4170096B2 (en) Image processing apparatus for evaluating the suitability of a 3D mesh model mapped on a 3D surface of an object
Wang et al. Modeling of personalized anatomy using plastic strains
JP5639499B2 (en) Face image processing device
Hansen et al. Adaptive parametrization of multivariate B-splines for image registration
JP2017122993A (en) Image processor, image processing method and program
CN114219920A (en) Three-dimensional face model construction method and device, storage medium and terminal
JP5650012B2 (en) Facial image processing method, beauty counseling method, and facial image processing apparatus
Danckaers et al. Adaptable digital human models from 3D body scans
JP5954846B2 (en) Shape data generation program, shape data generation method, and shape data generation apparatus
JP7100842B2 (en) Model analysis device, model analysis method, and model analysis program
CN112329640A (en) Facial nerve palsy disease rehabilitation detection system based on eye muscle movement analysis
JP6751540B1 (en) Shape prediction system, shape prediction method and shape prediction program

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application
    Ref document number: 16821086; Country of ref document: EP; Kind code of ref document: A1

ENP Entry into the national phase
    Ref document number: 20177025018; Country of ref document: KR; Kind code of ref document: A

NENP Non-entry into the national phase
    Ref country code: DE

122 EP: PCT application non-entry in European phase
    Ref document number: 16821086; Country of ref document: EP; Kind code of ref document: A1