WO2020194488A1 - Device, method, program, and system for determining three-dimensional shape of face - Google Patents


Info

Publication number
WO2020194488A1
Authority
WO
WIPO (PCT)
Prior art keywords
age group
face
discrimination
dimensional
facial expression
Prior art date
Application number
PCT/JP2019/012719
Other languages
French (fr)
Japanese (ja)
Inventor
千尋 谷川
健治 高田
春奈 関根
ルリ子 高野
定樹 高田
Original Assignee
株式会社資生堂
国立大学法人大阪大学
Priority date
Filing date
Publication date
Application filed by 株式会社資生堂 and 国立大学法人大阪大学
Priority to PCT/JP2019/012719 (WO2020194488A1)
Priority to JP2021508455A (JP7226745B2)
Publication of WO2020194488A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis

Definitions

  • the present invention relates to a three-dimensional face morphology discrimination device, method, program, and system.
  • Conventionally, methods for quantitatively evaluating the human face have been studied. For example, in Patent Document 1, the curvature of the curved surface at each point of the face is obtained using three-dimensional shape information measured by a three-dimensional shape measuring device, and the shape of the face is evaluated based on the distribution of that curvature.
  • However, Patent Document 1 merely evaluates the shape of the face based on the distribution of the curvature of the curved surface at each point of the face.
  • One aspect of the present invention comprises a three-dimensional morphology information acquisition unit that acquires three-dimensional morphological information of a face, and a discrimination unit that determines, based on the three-dimensional morphological information, whether the person whose face it is belongs to a first age group or to a second age group older than the first age group.
  • According to the present invention, the accuracy of evaluating the three-dimensional morphology of the face can be improved.
  • FIG. 1 is an overall configuration diagram of a discrimination system 100 according to an embodiment of the present invention.
  • the discrimination system 100 includes a discrimination device 101 and a user terminal 102. Each will be described below.
  • The discrimination device 101 is a device for determining, based on information about the three-dimensional morphology of a person's face (hereinafter also referred to as three-dimensional morphological information), to which group that person belongs.
  • The discrimination device 101 comprises one or more computers.
  • The discrimination device 101 can send and receive data to and from the user terminal 102 via an arbitrary network 103.
  • Specifically, the discrimination device 101 determines, based on the three-dimensional morphological information of a person's face acquired from the user terminal 102, whether that person belongs to a first age group (hereinafter also referred to as the young group) or to a second age group (hereinafter also referred to as the elder group) that is older than the first age group.
  • The discrimination device 101 may also be configured to determine, based on that information, whether the face shows a first facial expression of the first age group (hereinafter also referred to as a resting face), a second facial expression of the first age group (hereinafter also referred to as a face when a smile is expressed), a first facial expression of the second age group, or a second facial expression of the second age group.
  • The discrimination device 101 may further be configured to determine simply whether the face shows the first facial expression or the second facial expression. The discrimination device 101 will be described in detail later with reference to FIG. 2.
  • The user terminal 102 is a terminal used by someone who wants to determine, from the three-dimensional morphological information of a person's face, to which group that person belongs.
  • the user terminal 102 is a computer such as a personal computer, a tablet, or a smartphone.
  • the user terminal 102 can send and receive data to and from the discrimination device 101 via an arbitrary network 103.
  • Specifically, the user terminal 102 transmits data indicating the three-dimensional morphological information of a person's face to the discrimination device 101. The user terminal 102 also receives data indicating the result of determining to which group the person belongs from the discrimination device 101 and displays it on display means such as a display. For example, the user terminal 102 can use data indicating three-dimensional morphological information of a person's face acquired by a depth sensor or the like built into the user terminal 102.
  • the discriminating device 101 and the user terminal 102 are described as separate computers in the present specification, the discriminating device 101 and the user terminal 102 may be mounted on one computer.
  • FIG. 2 is a functional block diagram of the discrimination device 101 according to the embodiment of the present invention.
  • the discrimination device 101 includes a three-dimensional form information acquisition unit 201, a three-dimensional form generation unit 202, a feature variation measurement unit 203, a discrimination unit 204, and a discrimination function storage unit 205.
  • By executing a program, the discrimination device 101 functions as the three-dimensional morphology information acquisition unit 201, the three-dimensional morphology generation unit 202, the feature variate measurement unit 203, and the discrimination unit 204. Each will be described below.
  • the three-dimensional morphology information acquisition unit 201 receives data indicating the three-dimensional morphology information of the human face from the user terminal 102. Further, the three-dimensional form information acquisition unit 201 stores the received data in the memory so that other functional units can refer to it.
  • The three-dimensional morphological information of a human face is, for example, information that can represent the three-dimensional shape of the face based on an image of the face taken by a camera (a two-dimensional image) and the distance from a depth sensor to the person as measured by that depth sensor. Alternatively, it is information that can represent the three-dimensional shape of the face based on the parallax between images taken from different viewpoints by two or more cameras, such as a stereo camera.
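As a minimal sketch of how such depth-sensor data can yield three-dimensional morphological information, the snippet below back-projects a depth image into camera-space 3-D points using a pinhole camera model. The specification does not fix any particular camera model, so the intrinsic parameters (fx, fy, cx, cy) here are illustrative assumptions.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (distances along the optical axis)
    into camera-space 3-D points under a pinhole model.

    depth: (H, W) array; fx, fy, cx, cy: camera intrinsics.
    Returns an (H*W, 3) array of (x, y, z) points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Illustrative 2x2 depth image, every pixel 1 unit away.
pts = depth_to_points(np.ones((2, 2)), fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

A real sensor would supply its own intrinsics; the point cloud can then be meshed or sectioned as described below.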
  • The three-dimensional morphology information acquisition unit 201 may acquire the three-dimensional morphological information of the face at rest, the three-dimensional morphological information of the face when a smile is expressed, or both.
  • A resting face is a face on which no facial expression is expressed.
  • The face when a smile is expressed is a face on which a laughing (smiling) expression is expressed.
  • A smile can also be defined either as the three-dimensional morphology of the face when the smile is expressed, or as the difference between the three-dimensional morphology of the face at rest and the three-dimensional morphology of the face when the smile is expressed.
  • Although the resting face is described as an example, the present invention can be applied to any face on which no facial expression is expressed (for example, a sleeping face).
  • Although the face when a smile is expressed is described as an example in this specification, the present invention can also be applied to faces expressing facial expressions other than a smile (for example, crying or anger).
  • the three-dimensional form generation unit 202 generates a three-dimensional form of a human face based on the three-dimensional form information acquired by the three-dimensional form information acquisition unit 201. Further, the three-dimensional form generation unit 202 stores the generated three-dimensional form data in the memory so that other functional units can refer to it.
  • Two examples of generating the three-dimensional form will be described: generation as a <cross-sectional view> and generation as a <mesh>.
  • In the case of a cross-sectional view, the three-dimensional morphology generation unit 202 determines a three-dimensional coordinate system (the left-right direction of the face is the x-axis, the up-down direction is the y-axis, and the front-back (depth) direction is the z-axis). The three-dimensional morphology generation unit 202 can then generate a cross-sectional view of the face by processing the three-dimensional morphological information based on anatomical measurement points.
  • FIG. 3 is an example of a cross-sectional view according to an embodiment of the present invention.
  • FIG. 3 shows a yz cross section when a human face is cut along a line connecting the outer corner of the eye Ex and the corner point Ch of the mouth.
  • the cross section may be a cross section that passes through an arbitrary point such as a nose bottom point, a nose tip point, a glabellar point, a nose root point, an upper lip point, a lower lip point, and a chin point.
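One plausible way to realize the yz cross-section described above is to keep the points of the face that lie near the depth-direction plane through the two landmarks. The snippet below is a sketch under that assumption; the landmark coordinates and tolerance are illustrative, not values from the specification.

```python
import numpy as np

def plane_section(points, ex, ch, tol=1.0):
    """Select points near the plane that contains the line from the outer
    eye corner (Ex) to the mouth corner (Ch) and the z (depth) axis.

    The plane's normal is perpendicular to both (ch - ex) and the z unit
    vector, so it is obtained with a cross product.
    """
    d = ch - ex
    normal = np.cross(d, np.array([0.0, 0.0, 1.0]))
    normal = normal / np.linalg.norm(normal)
    dist = np.abs((points - ex) @ normal)   # point-to-plane distance
    return points[dist < tol]

# Illustrative landmarks and point cloud.
ex = np.array([0.0, 0.0, 0.0])
ch = np.array([10.0, -10.0, 0.0])
cloud = np.array([[5.0, -5.0, 3.0],   # on the Ex-Ch plane
                  [5.0,  5.0, 0.0],   # off the plane
                  [0.0,  0.0, 2.0]])  # on the plane (depth direction)
section = plane_section(cloud, ex, ch)
```

The retained points, projected onto the plane, give the profile curve shown in FIG. 3.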
  • the three-dimensional form generation unit 202 can generate a mesh (polygon mesh) of a human face based on the three-dimensional form information.
  • FIG. 4 is an example of a mesh according to an embodiment of the present invention.
  • the feature variate measurement unit 203 measures the feature variate in a three-dimensional form. Further, the feature variate measurement unit 203 stores the measured feature variate in the memory so that other function units can refer to it.
  • the feature variate is a feature parameter that represents a feature of the morphology of the human face.
  • In the case of a cross-sectional view, the feature variate measurement unit 203 measures the feature variates from the generated cross-sectional view. For example, in FIG. 3, the angle (v7) of the mouth corner Ch relative to the outer eye corner Ex in the z-axis direction, the angle (v8) of the cheek protrusion P(Ex-Ch) in the cross section relative to the outer eye corner Ex, the length (v12) of the contour curve connecting the outer eye corner Ex and the mouth corner Ch, and the area (v13) enclosed by that contour curve can be used as feature variates.
  • Other feature variates may include the amount by which parts of the face protrude in the z direction at various cross-sectional positions, the angle and protrusion amount at convex points, and the angle at concave points.
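The cross-sectional feature variates above can be sketched as follows for a 2-D profile curve. The exact definitions of v7, v12, and v13 are not spelled out in this excerpt, so this is one illustrative reading: the inclination of the Ex-Ch chord, the arc length of the profile, and the shoelace area of the closed profile.

```python
import numpy as np

def profile_features(curve):
    """Illustrative feature variates from a facial profile curve.

    curve: (N, 2) array of (y, z) points from Ex (first row) to Ch (last
    row). Returns the inclination angle of the Ex-Ch chord (degrees),
    the curve's arc length, and the area enclosed by the closed curve
    (shoelace formula).
    """
    ex, ch = curve[0], curve[-1]
    dy, dz = ch - ex
    angle = np.degrees(np.arctan2(abs(dz), abs(dy)))        # chord angle
    length = np.sum(np.linalg.norm(np.diff(curve, axis=0), axis=1))
    closed = np.vstack([curve, ex])                          # close the loop
    y, z = closed[:, 0], closed[:, 1]
    area = 0.5 * abs(np.sum(y[:-1] * z[1:] - y[1:] * z[:-1]))
    return angle, length, area

# Illustrative triangular profile: Ex at (0,0), cheek peak, Ch at (4,0).
curve = np.array([[0.0, 0.0], [2.0, 2.0], [4.0, 0.0]])
angle_v7, length_v12, area_v13 = profile_features(curve)
```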
  • In the case of a mesh, the feature variate measurement unit 203 measures the feature variates from the generated mesh. For example, in FIG. 4, the difference or ratio of a specific part with respect to the average z value can be used as a feature variate.
  • The discrimination unit 204 determines to which group the person belongs from the feature variates of the three-dimensional form generated based on the three-dimensional morphological information of the person's face. The discrimination unit 204 then notifies the user terminal 102 of the discrimination result.
  • Examples of discrimination by <discriminant function>, discrimination by <pattern matching>, and discrimination by <past data> will be described.
  • the discriminant unit 204 substitutes the feature variables measured by the feature variable measurement unit 203 into the four discriminant functions (which may be linear or non-linear) stored in the discriminant function storage unit 205.
  • the discriminant function will be described below.
  • The following shows, for each type of facial expression (at rest and when a smile is expressed), the elements (feature variates) that are effective in distinguishing the two groups (young group and elder group) using the three-dimensional morphology of the human face.
  • At rest, when the convex morphology of the lower nose is pronounced, the bulge under the eyes is large, the width of the lower face is large, the prognathism of the lips is large, the jaw angle is large, the forehead is large, and the protrusion of the nasal alae is large, the face is determined to belong to the elder group.
  • When a smile is expressed, when the mouth corners droop, the lower nose is convex, the vertical height of the eyes is small, the jaw angle is large, the bulge under the eyes is large, and the protrusion of the lips is large, the face is determined to belong to the elder group.
  • FIGS. 5 and 6 are diagrams for explaining feature variates for which a significant difference is observed according to an embodiment of the present invention. Differences between the young group (e.g., 18-32 years) and the elder group (e.g., 55-65 years) are shown.
  • In the discriminant functions of the present invention, feature variates and weighting coefficients that maximize the difference between the young group and the elder group are adopted. Likewise, feature variates and weighting coefficients that maximize the difference between the resting face and the face when a smile is expressed are adopted.
  • The discriminant functions of the present invention are characterized in that the feature variates are not evaluated separately but are treated simultaneously as a multivariate (that is, a multidimensional vector).
  • the discriminant function storage unit 205 stores the following four discriminant functions.
  • Discriminant function 1 is a discriminant function for determining whether a person with the first facial expression (for example, the resting face) belongs to the first age group (young group) or the second age group (elder group). For example, if the result of substituting the feature variates into discriminant function 1 is positive, the person is determined to belong to the elder group.
  • the following shows the feature variates and weighting coefficients used in the discriminant function 1.
  • The feature variates used in discriminant function 1 are "bulge under the eyes", "sagging of the facial contour", "protrusion of the nasal alae", "width of the face", "vertical distance of the lower part of the nose", "prognathism of the lips", and "convex morphology of the lower part of the nose".
  • Discriminant function 2 is a discriminant function for determining whether a person with the second facial expression (for example, the face when a smile is expressed) belongs to the first age group (young group) or the second age group (elder group). For example, if the result of substituting the feature variates into discriminant function 2 is positive, the person is determined to belong to the elder group.
  • the following shows the feature variables and weighting factors used in the discriminant function 2.
  • The feature variates used in discriminant function 2 are "vertical distance between the eyes and the mouth corner", "convex morphology of the lower nose", "bulge under the eyes", "sagging at the jaw angle", and "ratio of the vertical distance to the horizontal distance of the eye fissure".
  • Discriminant function 3 is a discriminant function for determining whether a person belonging to the first age group (young group) shows the first facial expression (for example, the resting face) or the second facial expression (for example, the face when a smile is expressed). For example, if the result of substituting the feature variates into discriminant function 3 is negative, it is determined that a smile is expressed.
  • the following shows the feature variates and weighting coefficients used in the discriminant function 3.
  • the characteristic variables used in the discriminant function 3 are "increase in face width when smiling", “increase in cheek protrusion when smiling", “backward movement of nose when smiling", and "smile".
  • Discriminant function 4 is a discriminant function for determining whether a person belonging to the second age group (elder group) shows the first facial expression (for example, the resting face) or the second facial expression (for example, the face when a smile is expressed). For example, if the result of substituting the feature variates into discriminant function 4 is negative, it is determined that a smile is expressed.
  • the following shows the feature variate and the weighting coefficient used in the discriminant function 4.
  • The feature variates used in discriminant function 4 are "decrease in lower facial height when a smile is expressed" and "cheek protrusion amount when a smile is expressed".
  • The discrimination unit 204 substitutes the feature variates measured by the feature variate measurement unit 203 into the four discriminant functions stored in the discriminant function storage unit 205, and adopts, among the results of discriminant functions 1 to 4, those with the largest absolute values. For example, assume that the result of discriminant function 1 is "+2", that of discriminant function 2 is "+10", that of discriminant function 3 is "-2", and that of discriminant function 4 is "-13". Then, from discriminant functions 2 and 4, it is determined that the person belongs to the elder group and that the face is one on which a smile is expressed.
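The decision procedure above, including the "+2/+10/-2/-13" example of picking the age score and the expression score with the largest magnitude, can be sketched with linear discriminant functions. The weights and biases below are placeholders; the patent does not disclose its actual coefficients in this excerpt.

```python
import numpy as np

# Placeholder (w, b) pairs for discriminant functions 1-4.
# Sign conventions follow the text: positive -> elder (1, 2),
# negative -> smile (3, 4).
FUNCS = {
    1: (np.array([0.8, 0.5]), -0.2),    # rest:  young(-) vs elder(+)
    2: (np.array([1.1, 0.7]), -0.5),    # smile: young(-) vs elder(+)
    3: (np.array([-0.9, 0.4]), 0.1),    # young: rest(+) vs smile(-)
    4: (np.array([-1.2, 0.6]), 0.3),    # elder: rest(+) vs smile(-)
}

def discriminate(features):
    """Evaluate w.x + b for all four functions and keep the age result
    (functions 1-2) and expression result (functions 3-4) with the
    largest absolute values."""
    scores = {k: float(w @ features[k] + b) for k, (w, b) in FUNCS.items()}
    age_fn = max((1, 2), key=lambda k: abs(scores[k]))
    expr_fn = max((3, 4), key=lambda k: abs(scores[k]))
    age = "elder" if scores[age_fn] > 0 else "young"
    expr = "smile" if scores[expr_fn] < 0 else "rest"
    return scores, age, expr

# Illustrative feature variates for each function.
features = {1: np.array([1.0, 1.0]), 2: np.array([2.0, 2.0]),
            3: np.array([1.0, 0.0]), 4: np.array([1.0, 1.0])}
scores, age, expr = discriminate(features)
```

Nonlinear discriminant functions, which the text also allows, would only change how each score is computed, not the selection rule.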
  • the three-dimensional morphological information acquisition unit 201 can acquire both the three-dimensional morphological information of the face at rest and the three-dimensional morphological information of the face when the smile is expressed.
  • the smile can be defined as "the difference between the three-dimensional form of the face at rest and the three-dimensional form of the face when the smile is expressed".
  • The discrimination device 101 can discriminate between the young group and the elder group more accurately by using the difference between the three-dimensional morphology of the face at rest and that of the face when the smile is expressed: if a site-specific decrease in the amount of movement is observed (that is, if the difference is small), the face can be determined to belong to the elder group.
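The rest-to-smile difference above can be quantified per landmark as a displacement magnitude; small displacements at specific sites are the cue the text associates with the elder group. The landmark labels below are illustrative assumptions.

```python
import numpy as np

def smile_movement(rest, smile):
    """Per-landmark displacement between resting and smiling 3-D forms.

    rest, smile: (N, 3) arrays of corresponding landmark coordinates.
    Returns the Euclidean displacement of each landmark.
    """
    return np.linalg.norm(smile - rest, axis=1)

rest = np.zeros((3, 3))
smile = np.array([[0.0, 0.0, 0.0],   # e.g. nasal root: barely moves
                  [3.0, 4.0, 0.0],   # e.g. mouth corner
                  [0.0, 0.0, 2.0]])  # e.g. cheek point
moves = smile_movement(rest, smile)
```

A site-specific drop (e.g. a small cheek or mouth-corner displacement relative to a reference population) would then feed the elder/young decision.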
  • Instead of using the above discriminant functions, the discrimination unit 204 can also be configured to perform pattern matching (for example, machine learning based on the Manhattan distance, the Euclidean distance, etc.) against the median of each set of several pre-patterned three-dimensional forms.
  • FIG. 7 is an example of a discrimination system according to an embodiment of the present invention.
  • A feature vector is created from the feature variates and weighting coefficients of the three-dimensional form. The group to which the subject belongs is then determined based on the distance (for example, Manhattan distance, Euclidean distance, Mahalanobis distance, etc.) between the subject's feature vector and that of each group (for example, the group with the first facial expression in the first age group, the group with the second facial expression in the first age group, the group with the first facial expression in the second age group, and the group with the second facial expression in the second age group).
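The distance-based matching above can be sketched as nearest-median classification over the four groups. The group medians below are made-up placeholders; the Mahalanobis variant mentioned in the text would additionally require a covariance matrix, omitted here.

```python
import numpy as np

def nearest_group(x, medians, metric="euclidean"):
    """Assign feature vector x to the group whose pre-patterned median
    feature vector is closest under the chosen metric."""
    def dist(a, b):
        d = a - b
        return np.abs(d).sum() if metric == "manhattan" else np.sqrt((d * d).sum())
    return min(medians, key=lambda g: dist(x, medians[g]))

# Placeholder medians for the four (age group x expression) groups.
medians = {
    "young-rest":  np.array([0.0, 0.0]),
    "young-smile": np.array([0.0, 5.0]),
    "elder-rest":  np.array([5.0, 0.0]),
    "elder-smile": np.array([5.0, 5.0]),
}
```

For example, `nearest_group(np.array([4.6, 4.9]), medians)` lands in the elder-smile group under both metrics.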
  • The discrimination unit 204 may also be configured to refer to the most similar past three-dimensional form, instead of using the above discriminant functions.
  • FIG. 8 is a flowchart showing a discrimination process according to an embodiment of the present invention.
  • In step 801, the three-dimensional morphology information acquisition unit 201 receives data indicating the three-dimensional morphological information of a person's face from the user terminal 102.
  • In step 802, the three-dimensional morphology generation unit 202 generates a three-dimensional form of the face based on the three-dimensional morphological information acquired in S801.
  • In step 803, the feature variate measurement unit 203 measures the feature variates of the three-dimensional form generated in S802.
  • In step 804, the discrimination unit 204 determines to which group the person belongs from the feature variates measured in S803.
  • In step 805, the discrimination unit 204 notifies the user terminal 102 of the result of the determination in S804.
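The S801-S805 flow of FIG. 8 can be sketched as a plain function pipeline. Each stage is injected as a callable so the sketch stays independent of any concrete sensor, mesh library, or network transport; all of the stage implementations in the example are hypothetical stand-ins.

```python
def discrimination_process(receive, generate, measure, classify, notify):
    """Steps S801-S805 of FIG. 8 as a five-stage pipeline."""
    info = receive()              # S801: 3-D morphological information
    form = generate(info)         # S802: 3-D form (cross-section / mesh)
    features = measure(form)      # S803: feature variates
    result = classify(features)   # S804: group discrimination
    notify(result)                # S805: return the result to the terminal
    return result

# Trivial stand-in stages to exercise the control flow.
sent = []
result = discrimination_process(
    receive=lambda: 1,
    generate=lambda info: info + 1,
    measure=lambda form: form * 2,
    classify=lambda feats: "elder" if feats > 3 else "young",
    notify=sent.append,
)
```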
  • According to the present invention, it is possible to determine to which group a person belongs from the feature variates of the three-dimensional form of the person's face, using, for example, the feature variates found by the inventors to maximize the difference between the first age group (young group) and the second age group (elder group) for the first facial expression (for example, the resting face).
  • The discrimination device, method, program, and system according to an embodiment of the present invention can be applied to the beauty field. Specifically, it is possible to provide a device that measures the degree of aging of a smile by capturing the three-dimensional morphology of the face at rest and when a smile is expressed, using a depth sensor built into a smartphone or the like. Using the present invention, it is also possible to measure the effect of facial-expression training and to objectively show customers the effect of cosmetics from the viewpoint of smile expression.
  • The discrimination device, method, program, and system according to an embodiment of the present invention can also be applied to the medical field.
  • One example is dementia, which is the second leading cause of the need for long-term care.
  • By measuring facial expressions using the present invention, it becomes possible to analyze changes due to normal aging separately from changes due to illness, which may lead to early detection of illness.
  • Morphological distortion of the face and facial expressions caused by dysplasia of the maxillofacial region can cause serious problems of psychosocial maladjustment for individuals. Establishing a better appearance and facial expression has therefore been emphasized as a goal of orthodontic treatment and plastic surgery.
  • Treatment for facial morphological abnormalities is mainly performed from childhood through early adulthood, and the resulting appearance and facial expressions continue to play a role while changing with age from late adolescence through middle age.
  • FIG. 9 is a block diagram showing a hardware configuration of the discrimination device 101 according to the embodiment of the present invention.
  • the discriminating device 101 includes a CPU (Central Processing Unit) 1, a ROM (Read Only Memory) 2, and a RAM (Random Access Memory) 3.
  • the CPU 1, ROM 2, and RAM 3 form a so-called computer.
  • the discrimination device 101 includes an auxiliary storage device 4, a display device 5, an operation device 6, an I / F (Interface) device 7, and a drive device 8.
  • the hardware of the discriminating device 101 is connected to each other via the bus 9.
  • the CPU 1 is an arithmetic device that executes various programs installed in the auxiliary storage device 4.
  • ROM2 is a non-volatile memory.
  • the ROM 2 functions as a main storage device for storing various programs, data, and the like necessary for the CPU 1 to execute various programs installed in the auxiliary storage device 4.
  • the ROM 2 functions as a main memory device that stores boot programs such as BIOS (Basic Input / Output System) and EFI (Extensible Firmware Interface).
  • RAM 3 is a volatile memory such as DRAM (Dynamic Random Access Memory) or SRAM (Static Random Access Memory).
  • the RAM 3 functions as a main storage device that provides a work area that is expanded when various programs installed in the auxiliary storage device 4 are executed by the CPU 1.
  • the auxiliary storage device 4 is an auxiliary storage device that stores various programs and information used when various programs are executed.
  • the display device 5 is a display device that displays the internal state of the discrimination device 101 and the like.
  • The operation device 6 is an input device with which the administrator of the discrimination device 101 inputs various instructions to the discrimination device 101.
  • the I / F device 7 is a communication device for connecting to the network 103 and communicating with the user terminal 102.
  • the drive device 8 is a device for setting the storage medium 10.
  • The storage medium 10 referred to here includes media that record information optically, electrically, or magnetically, such as a CD-ROM, a flexible disk, and a magneto-optical disk. The storage medium 10 may also include semiconductor memory that records information electrically, such as an EPROM (Erasable Programmable Read Only Memory) and flash memory.
  • The various programs installed in the auxiliary storage device 4 are installed, for example, by setting the distributed storage medium 10 in the drive device 8 and having the drive device 8 read the programs recorded on the storage medium 10.
  • the various programs installed in the auxiliary storage device 4 may be installed by being downloaded from another network different from the network 103 via the I / F device 7.
  • 100 Discrimination system, 101 Discrimination device, 102 User terminal, 103 Network, 201 Three-dimensional morphology information acquisition unit, 202 Three-dimensional morphology generation unit, 203 Feature variate measurement unit, 204 Discrimination unit, 205 Discriminant function storage unit

Abstract

[Problem] To improve the accuracy of the evaluation of the three-dimensional shape of a face. [Solution] The present invention comprises: a three-dimensional shape information acquisition unit that acquires the three-dimensional shape information of the face of a person; and a determination unit that, on the basis of the three-dimensional shape information, determines whether the person is in a first age group or a second age group which is higher than the first age group.

Description

Three-dimensional face morphology discrimination device, method, program, and system
The present invention relates to a three-dimensional face morphology discrimination device, method, program, and system.
Conventionally, methods for quantitatively evaluating the human face have been studied. For example, in Patent Document 1, the curvature of the curved surface at each point of the face is obtained using three-dimensional shape information measured by a three-dimensional shape measuring device, and the shape of the face is evaluated based on the distribution of that curvature (Patent Document 1, paragraph [0007]).
JP-A-2009-54060
However, Patent Document 1 merely evaluates the shape of the face based on the distribution of the curvature of the curved surface at each point of the face.
Therefore, an object of one embodiment of the present invention is to improve the accuracy of evaluating the three-dimensional morphology of the face.
One aspect of the present invention comprises a three-dimensional morphology information acquisition unit that acquires three-dimensional morphological information of a face, and a discrimination unit that determines, based on the three-dimensional morphological information, whether the person whose face it is belongs to a first age group or to a second age group older than the first age group.
According to the present invention, the accuracy of evaluating the three-dimensional morphology of the face can be improved.
FIG. 1 is an overall configuration diagram of a discrimination system according to an embodiment of the present invention.
FIG. 2 is a functional block diagram of a discrimination device according to an embodiment of the present invention.
FIG. 3 is an example of a cross-sectional view according to an embodiment of the present invention.
FIG. 4 is an example of a mesh according to an embodiment of the present invention.
FIG. 5 is a diagram for explaining feature variates for which a significant difference is observed, according to an embodiment of the present invention.
FIG. 6 is a diagram for explaining feature variates for which a significant difference is observed, according to an embodiment of the present invention.
FIG. 7 is an example of a discrimination system according to an embodiment of the present invention.
FIG. 8 is a flowchart showing discrimination processing according to an embodiment of the present invention.
FIG. 9 is a block diagram showing a hardware configuration of a discrimination device according to an embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
<System configuration>
FIG. 1 is an overall configuration diagram of the discrimination system 100 according to an embodiment of the present invention. The discrimination system 100 includes the discrimination device 101 and the user terminal 102. Each will be described below.
 判別装置101は、人間の顔の三次元の形態についての情報(以下、三次元形態情報ともいう)に基づいて、その人間がいずれのグループに属するかを判別するための装置である。判別装置101は、1または複数のコンピュータからなる。判別装置101は、任意のネットワーク103を介して、ユーザ端末102とデータを送受信することができる。 The discrimination device 101 is a device for discriminating which group the human belongs to based on information about the three-dimensional morphology of the human face (hereinafter, also referred to as three-dimensional morphological information). The discriminator 101 comprises one or more computers. The determination device 101 can send and receive data to and from the user terminal 102 via an arbitrary network 103.
 具体的には、判別装置101は、ユーザ端末102から取得した人間の顔の三次元形態情報に基づいて、その人間が、第1の年齢層(以下、ヤング群ともいう)に属するか、あるいは、第1の年齢層よりも高い年齢層である第2の年齢層(以下、エルダー群ともいう)に属するかを判別する。また、判別装置101は、ユーザ端末102から取得した人間の顔の三次元形態情報に基づいて、その人間が、第1の年齢層の第1の表情(以下、安静時の顔ともいう)であるか、第1の年齢層の第2の表情(以下、笑顔表出時の顔ともいう)であるか、第2の年齢層の第1の表情であるか、第2の年齢層の第2の表情であるかを判別する構成とすることもできる。また、判別装置101は、ユーザ端末102から取得した人間の顔の三次元形態情報に基づいて、その人間が、第1の表情であるか、あるいは、第2の表情であるかを判別する構成とすることもできる。後段で、図2を参照しながら、判別装置101について詳細に説明する。 Specifically, the discrimination device 101 determines whether the human belongs to the first age group (hereinafter, also referred to as the young group) based on the three-dimensional morphological information of the human face acquired from the user terminal 102. , It is determined whether or not the person belongs to a second age group (hereinafter, also referred to as an elder group), which is an age group higher than the first age group. Further, the discrimination device 101 is based on the three-dimensional morphological information of the human face acquired from the user terminal 102, and the human has a first facial expression (hereinafter, also referred to as a resting face) of the first age group. Whether it is the second facial expression of the first age group (hereinafter, also referred to as the face when the smile is expressed), the first facial expression of the second age group, or the second facial expression of the second age group. It is also possible to determine whether or not the facial expression is 2. Further, the discrimination device 101 is configured to discriminate whether the human has a first facial expression or a second facial expression based on the three-dimensional morphological information of the human face acquired from the user terminal 102. It can also be. The discriminating device 101 will be described in detail later with reference to FIG.
The user terminal 102 is a terminal used by a person who wants to determine which group a human belongs to based on the three-dimensional morphological information of that human's face. The user terminal 102 is a computer such as a personal computer, tablet, or smartphone, and can send and receive data to and from the discrimination device 101 via an arbitrary network 103.
Specifically, the user terminal 102 transmits data representing the three-dimensional morphological information of a human face to the discrimination device 101, receives data indicating the discrimination result (which group the human belongs to) from the discrimination device 101, and displays it on display means such as a display. For example, the user terminal 102 can use three-dimensional morphological information of a human face acquired by a depth sensor or the like built into the user terminal 102.
Although the discrimination device 101 and the user terminal 102 are described in this specification as separate computers, they may also be implemented on a single computer.
<Functional configuration>
FIG. 2 is a functional block diagram of the discrimination device 101 according to an embodiment of the present invention. The discrimination device 101 includes a three-dimensional form information acquisition unit 201, a three-dimensional form generation unit 202, a feature variate measurement unit 203, a discrimination unit 204, and a discriminant function storage unit 205. By executing a program, the discrimination device 101 functions as the three-dimensional form information acquisition unit 201, the three-dimensional form generation unit 202, the feature variate measurement unit 203, and the discrimination unit 204. Each unit is described below.
The three-dimensional form information acquisition unit 201 receives data representing the three-dimensional morphological information of a human face from the user terminal 102, and stores the received data in memory so that the other functional units can refer to it.
Here, the three-dimensional morphological information is explained. The three-dimensional morphological information of a human face is information that can represent the three-dimensional shape of the face, based for example on an image of the face taken by a camera (a two-dimensional image) and the distance from a depth sensor to the human as measured by that sensor. Alternatively, it is information that can represent the three-dimensional shape of the face based on the parallax between images taken from different viewpoints by two or more cameras, such as a stereo camera.
The three-dimensional form information acquisition unit 201 may acquire the three-dimensional morphological information of the resting face, the three-dimensional morphological information of the smiling face, or both.
In this specification, the resting face is a face on which no expression is shown, and the smiling face is a face on which a laughing expression is shown. A smile can be defined either as "the three-dimensional morphology of the face when a smile is expressed" or as "the difference between the three-dimensional morphology of the face at rest and the three-dimensional morphology of the face when a smile is expressed."
Although the resting face is used as an example in this specification, the present invention can be applied to any face on which no expression is shown (e.g., a sleeping face). Likewise, although the smiling face is used as an example, the present invention can be applied to faces showing expressions other than a smile (e.g., crying or angry).
Returning to FIG. 2, the three-dimensional form generation unit 202 generates a three-dimensional form of the human face based on the three-dimensional morphological information acquired by the three-dimensional form information acquisition unit 201, and stores the generated three-dimensional form data in memory so that the other functional units can refer to it. Two examples are described below: generation of the three-dimensional form as a <Cross section> and as a <Mesh>.
<Cross section>
The three-dimensional form generation unit 202 determines a three-dimensional coordinate system (the left-right direction of the face is the x-axis, the vertical direction is the y-axis, and the front-back direction (depth) is the z-axis). The three-dimensional form generation unit 202 can then generate a cross-sectional view of the face by processing the three-dimensional morphological information based on anatomical measurement points. FIG. 3 is an example of such a cross-sectional view according to an embodiment of the present invention: it shows the yz cross section obtained when the face is cut along the line connecting the outer eye corner Ex and the mouth corner Ch. The cross section may pass through any other landmark, such as the subnasal point, the most prominent point of the nose, the glabella, the nasion, the upper lip point, the lower lip point, or the chin point.
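The cross-sectioning step above can be sketched as follows. This is a minimal illustration under stated assumptions, not the implementation from the specification: `slice_profile`, the slab tolerance, and the landmark coordinates are hypothetical, and the point cloud is assumed to already be expressed in the x/y/z coordinate system just described.

```python
import numpy as np

def slice_profile(points, p0, p1, tol=1.0):
    """Extract the profile of a face point cloud along the vertical cutting
    plane containing the line from p0 (e.g. outer eye corner Ex) to p1
    (e.g. mouth corner Ch). points is an (N, 3) array; tol is the slab
    half-width in the same units (e.g. mm)."""
    points = np.asarray(points, float)
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    d /= np.linalg.norm(d)
    # Plane normal: perpendicular to the Ex-Ch line and to the z (depth)
    # axis, so the cut runs front-to-back through both landmarks.
    n = np.cross(d, [0.0, 0.0, 1.0])
    n /= np.linalg.norm(n)
    dist = (points - p0) @ n
    sliced = points[np.abs(dist) < tol]
    # Parameterize each kept point by its position along the Ex-Ch line
    # and its depth, giving a 2D profile (u, z).
    u = (sliced - p0) @ d
    return np.column_stack([u, sliced[:, 2]])
```

In practice a dense scan would also require ordering the sliced points along the contour, which is omitted here.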
<Mesh>
The three-dimensional form generation unit 202 can generate a mesh (polygon mesh) of the human face based on the three-dimensional morphological information. FIG. 4 is an example of such a mesh according to an embodiment of the present invention.
The feature variate measurement unit 203 measures feature variates of the three-dimensional form and stores them in memory so that the other functional units can refer to them. A feature variate is a feature parameter representing a morphological characteristic of the human face. Two examples are described below: measurement of feature variates in the <Cross section> case and in the <Mesh> case.
<Cross section>
The feature variate measurement unit 203 measures feature variates from the generated cross-sectional view. For example, in FIG. 3, the following can be used as feature variates: the angle in the z-axis direction of the mouth corner Ch with the outer eye corner Ex as base point (v7), the angle of the cheek protrusion point P(Ex-Ch) in that cross section with Ex as base point (v8), the length of the contour curve between Ex and Ch (v12), and the area enclosed by the contour curve (v13).
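The geometric quantities just listed (angles such as v7 and v8, the contour length v12, and the enclosed area v13) reduce to elementary vector geometry on the 2D profile. The sketch below is illustrative only; the function names are assumptions, not taken from the specification.

```python
import numpy as np

def angle_deg(vertex, a, b):
    """Angle at `vertex` between the rays vertex->a and vertex->b,
    in degrees (candidate for variates such as v7 and v8)."""
    vertex = np.asarray(vertex, float)
    u, v = np.asarray(a, float) - vertex, np.asarray(b, float) - vertex
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def curve_length(profile):
    """Length of the contour polyline (candidate for v12)."""
    return np.sum(np.linalg.norm(np.diff(profile, axis=0), axis=1))

def closed_area(profile):
    """Shoelace area of the region enclosed by the contour (candidate for v13)."""
    x, y = profile[:, 0], profile[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
```

For example, `angle_deg((0, 0), (1, 0), (0, 1))` gives 90 degrees, and the shoelace formula yields 1.0 for a unit square.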
Other feature variates may include the amount by which a facial region protrudes in the z direction at various cross-sectional positions, the angle and amount of protrusion at a convex point, and the angle at a concave point.
<Mesh>
The feature variate measurement unit 203 measures feature variates from the generated mesh. For example, in FIG. 4, the difference or ratio of a specific region relative to the mean z value can be used as a feature variate.
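A minimal sketch of this mesh-based feature, assuming the vertices are given in the x/y/z coordinate system described earlier; `mesh_region_features` and the way the region is indexed are hypothetical choices for the example.

```python
import numpy as np

def mesh_region_features(vertices, region_idx):
    """For a face mesh, compare the depth of one region (e.g. the cheek
    vertices) against the whole-face mean depth. vertices is an (N, 3)
    array; region_idx indexes the vertices of the region of interest.
    Returns the (difference, ratio) pair usable as feature variates."""
    vertices = np.asarray(vertices, float)
    z_mean = vertices[:, 2].mean()
    z_region = vertices[region_idx, 2].mean()
    return z_region - z_mean, z_region / z_mean
```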
The discrimination unit 204 determines which group the human belongs to from the feature variates of the three-dimensional form generated from the three-dimensional morphological information of the face, and notifies the user terminal 102 of the result. Examples of discrimination by <Discriminant function>, by <Pattern matching>, and by <Past data> are described below.
<Discriminant function>
The discrimination unit 204 substitutes the feature variates measured by the feature variate measurement unit 203 into the four discriminant functions (which may be linear or non-linear) stored in the discriminant function storage unit 205.
The discriminant functions are explained below. The example targets two of the expression types, the resting face and the smiling face, and shows the elements (feature variates) that are effective for distinguishing the two groups (young group and elder group) using the three-dimensional morphology of the human face.
Specifically, at rest, a face is classified into the elder group when it shows a convex form of the lower nose, large bulges under the eyes, a wide lower face, pronounced lip protrusion, pronounced sagging at the jaw angle, a wide forehead, and pronounced protrusion of the nasal alae. When a smile is expressed, a face is classified into the elder group when it shows drooping mouth corners, a convex form of the lower nose, a small vertical eye height, pronounced sagging at the jaw angle, large bulges under the eyes, and pronounced lip protrusion. For the elder group it is also harder to determine whether the face is at rest or smiling (that is, fewer feature variates are significant for that distinction). FIGS. 5 and 6 illustrate feature variates for which a significant difference was observed according to an embodiment of the present invention, showing the differences between the young group (e.g., 18 to 32 years old) and the elder group (e.g., 55 to 65 years old).
Using the feature variates extracted from the three-dimensional morphology of the face (at rest and when smiling), discriminant functions that distinguish the two groups (young group and elder group) were created (<Discriminant function 1> and <Discriminant function 2> below). Likewise, using the feature variates extracted from the three-dimensional morphology of the face (young group and elder group), discriminant functions that distinguish the two expressions (at rest and when smiling) were created (<Discriminant function 3> and <Discriminant function 4> below). Each of the four discriminant functions consists of multiple variates (namely, the feature variates for which a significant difference was observed) and weighting coefficients.
The discriminant functions of the present invention adopt feature variates and weighting coefficients that maximize the difference between the young group and the elder group, and likewise feature variates and weighting coefficients that maximize the difference between the resting face and the smiling face. A characteristic of these discriminant functions is that the feature variates are not evaluated separately but are treated simultaneously as a multivariate (that is, a multidimensional vector).
Next, the four discriminant functions, which are stored in the discriminant function storage unit 205, are described.
<Discriminant function 1>
Discriminant function 1 determines whether a person showing the first facial expression (e.g., the resting face) belongs to the first age group (young group) or the second age group (elder group). For example, if the result of substituting the feature variates into discriminant function 1 is positive, the person is judged to be in the elder group. The feature variates used in discriminant function 1 are "bulge under the eyes", "sagging of the facial contour", "protrusion of the nasal alae", "facial width", "vertical distance of the lower nose", "lip protrusion", and "convex form of the lower nose". The feature variates and weighting coefficients are shown below.
[Table 1: feature variates and weighting coefficients of discriminant function 1]
<Discriminant function 2>
Discriminant function 2 determines whether a person showing the second facial expression (e.g., the smiling face) belongs to the first age group (young group) or the second age group (elder group). For example, if the result of substituting the feature variates into discriminant function 2 is positive, the person is judged to be in the elder group. The feature variates used in discriminant function 2 are "vertical distance between the eyes and the mouth corners", "convex form of the lower nose", "bulge under the eyes", "sagging at the jaw angle", and "ratio of the vertical to the horizontal distance of the palpebral fissure". The feature variates and weighting coefficients are shown below.
[Table 2: feature variates and weighting coefficients of discriminant function 2]
<Discriminant function 3>
Discriminant function 3 determines whether a person belonging to the first age group (young group) shows the first facial expression (e.g., the resting face) or the second facial expression (e.g., the smiling face). For example, if the result of substituting the feature variates into discriminant function 3 is negative, the face is judged to be smiling. The feature variates used in discriminant function 3 are "increase in facial width when smiling", "increase in cheek protrusion when smiling", "backward movement of the nose when smiling", "upward movement of the eyes when smiling", "decrease in lower vermilion thickness when smiling", "backward movement of the upper and lower vermilion and mouth corners when smiling", "decrease in lower facial height when smiling", and "decrease in the depth of the mentolabial sulcus when smiling". The feature variates and weighting coefficients are shown below.
[Table 3: feature variates and weighting coefficients of discriminant function 3]
<Discriminant function 4>
Discriminant function 4 determines whether a person belonging to the second age group (elder group) shows the first facial expression (e.g., the resting face) or the second facial expression (e.g., the smiling face). For example, if the result of substituting the feature variates into discriminant function 4 is negative, the face is judged to be smiling. The feature variates used in discriminant function 4 are "decrease in lower facial height when smiling" and "amount of cheek protrusion when smiling". The feature variates and weighting coefficients are shown below.
[Table 4: feature variates and weighting coefficients of discriminant function 4]
As described above, the discrimination unit 204 substitutes the feature variates measured by the feature variate measurement unit 203 into the four discriminant functions stored in the discriminant function storage unit 205, and among the results of discriminant functions 1 to 4 it uses those with the largest absolute values. For example, suppose the result of discriminant function 1 is "+2", that of discriminant function 2 is "+10", that of discriminant function 3 is "-2", and that of discriminant function 4 is "-13". Then, from discriminant functions 2 and 4, the person is judged to belong to the elder group and the face is judged to be smiling.
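The selection rule just described (evaluate all four functions, keep the age result and the expression result with the largest absolute values, and read off the sign) can be sketched as follows. The weighting coefficients and the shared feature vector below are placeholders: the actual coefficients are those of Tables 1 to 4, and each function in practice takes its own set of feature variates.

```python
import numpy as np

# Hypothetical weights and constants; the real coefficients are given in
# Tables 1-4 of the specification and are not reproduced here.
DISCRIMINANTS = {
    1: {"w": np.array([0.8, -0.3, 0.5]), "c": -1.2,  # resting: young vs elder
        "pos": "elder", "neg": "young"},
    2: {"w": np.array([0.6, 0.4, -0.7]), "c": 0.3,   # smiling: young vs elder
        "pos": "elder", "neg": "young"},
    3: {"w": np.array([-0.5, 0.9, 0.2]), "c": 0.1,   # young: resting vs smiling
        "pos": "resting", "neg": "smiling"},
    4: {"w": np.array([0.7, -0.6, 0.1]), "c": -0.4,  # elder: resting vs smiling
        "pos": "resting", "neg": "smiling"},
}

def classify(features):
    """Substitute the measured feature variates into all four discriminant
    functions and keep, for the age pair (1, 2) and the expression pair
    (3, 4), the result with the largest absolute value; the sign then
    selects the group, as the discrimination unit 204 does."""
    scores = {k: float(f["w"] @ features + f["c"])
              for k, f in DISCRIMINANTS.items()}
    age_fn = max((1, 2), key=lambda k: abs(scores[k]))
    expr_fn = max((3, 4), key=lambda k: abs(scores[k]))
    age = DISCRIMINANTS[age_fn]["pos" if scores[age_fn] > 0 else "neg"]
    expr = DISCRIMINANTS[expr_fn]["pos" if scores[expr_fn] > 0 else "neg"]
    return scores, age, expr
```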
As described above, the three-dimensional form information acquisition unit 201 can acquire both the three-dimensional morphological information of the resting face and that of the smiling face, and a smile can be defined as "the difference between the three-dimensional morphology of the face at rest and that of the face when a smile is expressed." Since, as noted above, it is hard to determine for the elder group whether the face is at rest or smiling, the discrimination device 101 can discriminate between the young group and the elder group more accurately by using this difference: when a site-specific decrease in the amount of movement is observed (that is, when the difference is small), the person is judged to be in the elder group.
<Pattern matching>
Instead of using the discriminant functions above, the discrimination unit 204 can also be configured to perform pattern matching (e.g., machine learning based on Manhattan distance, Euclidean distance, etc.) against the medians of several pre-patterned sets of three-dimensional forms. FIG. 7 is an example of a discrimination system according to an embodiment of the present invention.
In the example of FIG. 7, a feature vector is created from the feature variates and weighting coefficients of the three-dimensional form. The group to which the subject belongs is then determined based on the distance (e.g., Manhattan distance, Euclidean distance, Mahalanobis distance) between the subject's feature vector and each group (e.g., the group showing the first facial expression of the first age group, the group showing the second facial expression of the first age group, the group showing the first facial expression of the second age group, and the group showing the second facial expression of the second age group).
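The distance-based assignment of FIG. 7 can be sketched as a nearest-median classifier. The group medians below are placeholders: in practice they would be computed from the pre-patterned three-dimensional forms of each of the four groups (age group x facial expression), and a Mahalanobis variant would additionally need each group's covariance matrix.

```python
import numpy as np

# Hypothetical group medians in a 2D feature space, for illustration only.
GROUP_MEDIANS = {
    "young-resting": np.array([0.0, 0.0]),
    "young-smiling": np.array([1.0, 0.0]),
    "elder-resting": np.array([0.0, 1.0]),
    "elder-smiling": np.array([1.0, 1.0]),
}

def nearest_group(feature_vec, metric="euclidean"):
    """Assign the subject's feature vector to the group whose median is
    closest under the chosen metric (Manhattan or Euclidean)."""
    def dist(a, b):
        d = np.asarray(a, float) - np.asarray(b, float)
        return np.abs(d).sum() if metric == "manhattan" else np.linalg.norm(d)
    return min(GROUP_MEDIANS, key=lambda g: dist(feature_vec, GROUP_MEDIANS[g]))
```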
<Past data>
Instead of using the discriminant functions above, the discrimination unit 204 can also be configured to refer to the most similar three-dimensional form among past data.
FIG. 8 is a flowchart showing the discrimination process according to an embodiment of the present invention.
In step 801 (S801), the three-dimensional form information acquisition unit 201 receives data representing the three-dimensional morphological information of a human face from the user terminal 102.
In step 802 (S802), the three-dimensional form generation unit 202 generates a three-dimensional form of the human face based on the three-dimensional morphological information acquired in S801.
In step 803 (S803), the feature variate measurement unit 203 measures the feature variates of the three-dimensional form generated in S802.
In step 804 (S804), the discrimination unit 204 determines which group the human belongs to from the feature variates measured in S803.
In step 805 (S805), the discrimination unit 204 notifies the user terminal 102 of the result determined in S804.
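The flow of S801 to S805 can be sketched as a simple pipeline in which each functional unit of FIG. 2 is passed in as a callable; all names here are placeholders for illustration, not from the specification.

```python
# Hypothetical end-to-end sketch of the flow of FIG. 8 (S801-S805); the
# callables correspond to the functional units of FIG. 2.
def discrimination_pipeline(receive_data, generate_form, measure_variates,
                            discriminate, notify):
    raw = receive_data()               # S801: 3D morphological info from terminal
    form = generate_form(raw)          # S802: cross section or mesh
    variates = measure_variates(form)  # S803: feature variates
    result = discriminate(variates)    # S804: which group the human belongs to
    notify(result)                     # S805: return the result to the terminal
    return result
```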
Thus, in one embodiment of the present invention, which group a human belongs to can be determined from the feature variates of the three-dimensional form of that human's face. The determination uses the feature variates found by the inventors: those that maximize the difference between the first age group (young group) and the second age group (elder group) for the first facial expression (e.g., the resting face); those that maximize that difference for the second facial expression (e.g., the smiling face); those that maximize the difference between the first and second facial expressions within the first age group (young group); and those that maximize that difference within the second age group (elder group).
<Application in the beauty field>
The discrimination device, method, program, and system according to an embodiment of the present invention can be applied to the beauty field. Specifically, using a depth sensor built into a smartphone or the like, a device can be provided that photographs the three-dimensional morphology of the face at rest and when smiling and measures the degree of aging of the smile. Using the present invention, it becomes possible to measure the effect of facial expression training and to objectively show customers the effect of cosmetics from the viewpoint of smile expression.
<Application in the medical field>
The discrimination device, method, program, and system according to an embodiment of the present invention can also be applied to the medical field. Specifically, in dementia, the second leading cause of the need for long-term care, facial expression is known to become impoverished early in the course of onset. By measuring facial expressions with the present invention, changes due to normal aging can be analyzed separately from changes due to disease, which may lead to earlier detection of disease. In addition, morphological distortion of the face and facial expressions caused by maxillofacial dysplasia can cause serious psychosocial maladaptation for the individual, and in recent years establishing a better appearance and expression has come to be regarded, from a psychosocial standpoint, as an important goal of orthodontic and plastic surgical treatment. Treatment of facial morphological abnormalities is mainly performed from puberty to early adolescence, but the resulting appearance and expression fulfill their role while changing with age from late adolescence into middle age. That is, when performing orthodontic or plastic surgical treatment, it is necessary to draw up a treatment plan that takes into account the age-related changes in appearance and expression after treatment, rather than targeting only the appearance and expression immediately after treatment. Clarifying age-related changes three-dimensionally makes treatment planning that takes such changes into account possible.
<Hardware configuration>
FIG. 9 is a block diagram showing the hardware configuration of the discrimination device 101 according to an embodiment of the present invention. The discrimination device 101 has a CPU (Central Processing Unit) 1, a ROM (Read Only Memory) 2, and a RAM (Random Access Memory) 3. The CPU 1, ROM 2, and RAM 3 form what is called a computer.
The discrimination device 101 also has an auxiliary storage device 4, a display device 5, an operation device 6, an I/F (Interface) device 7, and a drive device 8. The hardware components of the discrimination device 101 are interconnected via a bus 9.
The CPU 1 is an arithmetic device that executes the various programs installed in the auxiliary storage device 4.
The ROM 2 is a non-volatile memory. It functions as a main storage device that stores the programs, data, and so on that the CPU 1 needs in order to execute the programs installed in the auxiliary storage device 4; specifically, it stores boot programs such as the BIOS (Basic Input/Output System) and EFI (Extensible Firmware Interface).
The RAM 3 is a volatile memory such as a DRAM (Dynamic Random Access Memory) or SRAM (Static Random Access Memory). It functions as a main storage device that provides the work area into which the programs installed in the auxiliary storage device 4 are loaded when executed by the CPU 1.
The auxiliary storage device 4 is an auxiliary storage device that stores the various programs and the information used when those programs are executed.
 表示装置5は、判別装置101の内部状態等を表示する表示デバイスである。 The display device 5 is a display device that displays the internal state of the discrimination device 101 and the like.
 操作装置6は、判別装置101の管理者が判別装置101に対して各種指示を入力する入力デバイスである。 The operation device 6 is an input device in which the administrator of the discrimination device 101 inputs various instructions to the discrimination device 101.
 I/F装置7は、ネットワーク103に接続し、ユーザ端末102と通信を行うための通信デバイスである。 The I / F device 7 is a communication device for connecting to the network 103 and communicating with the user terminal 102.
 ドライブ装置8は記憶媒体10をセットするためのデバイスである。ここでいう記憶媒体10には、CD-ROM、フレキシブルディスク、光磁気ディスク等のように情報を光学的、電気的あるいは磁気的に記録する媒体が含まれる。また、記憶媒体10には、EPROM (Erasable Programmable Read Only Memory)、フラッシュメモリ等のように情報を電気的に記録する半導体メモリ等が含まれていてもよい。 The drive device 8 is a device for setting the storage medium 10. The storage medium 10 referred to here includes a medium such as a CD-ROM, a flexible disk, a magneto-optical disk, or the like that optically, electrically, or magnetically records information. Further, the storage medium 10 may include a semiconductor memory for electrically recording information such as an EPROM (Erasable Programmable Read Only Memory) and a flash memory.
 なお、補助記憶装置4にインストールされる各種プログラムは、例えば、配布された記憶媒体10がドライブ装置8にセットされ、該記憶媒体10に記録された各種プログラムがドライブ装置8により読み出されることでインストールされる。あるいは、補助記憶装置4にインストールされる各種プログラムは、I/F装置7を介して、ネットワーク103とは異なる他のネットワークよりダウンロードされることでインストールされてもよい。 The various programs installed in the auxiliary storage device 4 are installed, for example, by setting the distributed storage medium 10 in the drive device 8 and reading the various programs recorded in the storage medium 10 by the drive device 8. Will be done. Alternatively, the various programs installed in the auxiliary storage device 4 may be installed by being downloaded from another network different from the network 103 via the I / F device 7.
 以上、本発明の実施例について詳述したが、本発明は上述した特定の実施形態に限定されるものではなく、特許請求の範囲に記載された本発明の要旨の範囲内において、種々の変形・変更が可能である。 Although the examples of the present invention have been described in detail above, the present invention is not limited to the specific embodiments described above, and various modifications are made within the scope of the gist of the present invention described in the claims.・ Can be changed.
100 Discrimination system
101 Discrimination device
102 User terminal
103 Network
201 Three-dimensional morphology information acquisition unit
202 Three-dimensional morphology generation unit
203 Feature variate measurement unit
204 Discrimination unit
205 Discriminant function storage unit

Claims (12)

  1.  A discrimination device comprising:
     a three-dimensional morphology information acquisition unit that acquires three-dimensional morphology information of a face; and
     a discrimination unit that determines, based on the three-dimensional morphology information, whether the person with the face belongs to a first age group or to a second age group higher than the first age group.
  2.  The discrimination device according to claim 1, wherein the face is a face at the time of expressing a smile.
  3.  The discrimination device according to claim 1 or 2, wherein the discrimination unit discriminates by means of a discriminant function,
     the discriminant function includes feature variates measured from a three-dimensional morphology generated based on the three-dimensional morphology information, and
     the feature variates indicate differences between persons belonging to the first age group and persons belonging to the second age group.
  4.  The discrimination device according to claim 1, wherein the discrimination unit further determines whether the face shows a first facial expression or a second facial expression.
  5.  The discrimination device according to claim 4, wherein the discrimination unit discriminates by means of discriminant functions, the discriminant functions including:
     a first discriminant function that determines whether a person showing the first facial expression belongs to the first age group or to the second age group;
     a second discriminant function that determines whether a person showing the second facial expression belongs to the first age group or to the second age group;
     a third discriminant function that determines whether a person belonging to the first age group shows the first facial expression or the second facial expression; and
     a fourth discriminant function that determines whether a person belonging to the second age group shows the first facial expression or the second facial expression.
  6.  The discrimination device according to claim 4 or 5, wherein the second facial expression is a face at the time of expressing a smile.
  7.  The discrimination device according to claim 5, wherein the first discriminant function and the second discriminant function include feature variates measured from a three-dimensional morphology generated based on the three-dimensional morphology information, the feature variates indicating differences between persons belonging to the first age group and persons belonging to the second age group.
  8.  The discrimination device according to claim 5, wherein the third discriminant function and the fourth discriminant function include feature variates measured from a three-dimensional morphology generated based on the three-dimensional morphology information, the feature variates indicating differences between persons showing the first facial expression and persons showing the second facial expression.
  9.  The discrimination device according to claim 8, wherein the number of feature variates of the fourth discriminant function is smaller than the number of feature variates of the third discriminant function.
  10.  A computer-implemented method comprising:
     acquiring three-dimensional morphology information of a face; and
     determining, based on the three-dimensional morphology information, whether the person with the face belongs to a first age group or to a second age group higher than the first age group.
  11.  A program for causing a computer to function as:
     a three-dimensional morphology information acquisition unit that acquires three-dimensional morphology information of a face; and
     a discrimination unit that determines, based on the three-dimensional morphology information, whether the person with the face belongs to a first age group or to a second age group higher than the first age group.
  12.  A discrimination system including a discrimination device and a user terminal, wherein
     the discrimination device comprises:
      a three-dimensional morphology information acquisition unit that acquires three-dimensional morphology information of a face from the user terminal; and
      a discrimination unit that determines, based on the three-dimensional morphology information, whether the person with the face belongs to a first age group or to a second age group higher than the first age group; and
     the user terminal receives a result of the determination by the discrimination device.
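The four-function scheme of claim 5 could be sketched as follows. This is purely illustrative and not the claimed implementation: each discriminant is modeled as a hypothetical linear function with placeholder coefficients, and the sign of its score selects the class. The first/second functions separate age groups given a known expression; the third/fourth separate expressions given a known age group, with the fourth using fewer feature variates, as claim 9 permits.

```python
# Hypothetical sketch of the four discriminant functions of claim 5.
# All coefficient values are placeholders for illustration only.

def linear(coeffs, const):
    """Build a linear discriminant f(x) = sum(a_i * x_i) + c."""
    def f(features):
        return sum(a * x for a, x in zip(coeffs, features)) + const
    return f

f1 = linear([0.9, -0.4], 0.0)   # age group, given first expression
f2 = linear([0.7, -0.6], 0.1)   # age group, given second expression
f3 = linear([1.1, 0.3], -0.5)   # expression, given first age group
f4 = linear([1.0], -0.2)        # expression, given second age group
                                # (fewer feature variates, cf. claim 9)

def age_group_given_expression(features, expression):
    """Classify age group using f1 or f2 depending on the expression."""
    z = (f1 if expression == "first" else f2)(features)
    return "first age group" if z > 0 else "second age group"

def expression_given_age_group(features, age_group):
    """Classify expression using f3 or f4 depending on the age group."""
    z = (f3 if age_group == "first" else f4)(features)
    return "first expression" if z > 0 else "second expression"
```

In an actual system each function would be fitted to its own subsample (e.g. f4 only to faces of the second age group), which is why the four functions can legitimately use different numbers of feature variates.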
PCT/JP2019/012719 2019-03-26 2019-03-26 Device, method, program, and system for determining three-dimensional shape of face WO2020194488A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2019/012719 WO2020194488A1 (en) 2019-03-26 2019-03-26 Device, method, program, and system for determining three-dimensional shape of face
JP2021508455A JP7226745B2 (en) 2019-03-26 2019-03-26 Apparatus, method, program, and system for determining three-dimensional facial morphology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/012719 WO2020194488A1 (en) 2019-03-26 2019-03-26 Device, method, program, and system for determining three-dimensional shape of face

Publications (1)

Publication Number Publication Date
WO2020194488A1 true WO2020194488A1 (en) 2020-10-01

Family

ID=72611150

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/012719 WO2020194488A1 (en) 2019-03-26 2019-03-26 Device, method, program, and system for determining three-dimensional shape of face

Country Status (2)

Country Link
JP (1) JP7226745B2 (en)
WO (1) WO2020194488A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009093490A (en) * 2007-10-10 2009-04-30 Mitsubishi Electric Corp Age estimation device and program
JP2014178969A (en) * 2013-03-15 2014-09-25 Nec Solution Innovators Ltd Information processor and determination method
JP2015219648A (en) * 2014-05-15 2015-12-07 カシオ計算機株式会社 Age estimation device, imaging device, age estimation method and program
JP2016177755A (en) * 2015-03-23 2016-10-06 日本電気株式会社 Order terminal equipment, order system, customer information generation method, and program
JP2016178596A (en) * 2015-03-23 2016-10-06 日本電気株式会社 Telephone set, telephone system, sound volume setting method and program of telephone set
JP2016193175A (en) * 2015-03-31 2016-11-17 ポーラ化成工業株式会社 Extraction method of determination part for apparent face impression, extraction method of determining factor for apparent face impression, and differentiation method for apparent face impression
JP2018120644A (en) * 2018-05-10 2018-08-02 シャープ株式会社 Identification apparatus, identification method, and program


Also Published As

Publication number Publication date
JP7226745B2 (en) 2023-02-21
JPWO2020194488A1 (en) 2020-10-01

Similar Documents

Publication Publication Date Title
Lou et al. Realistic facial expression reconstruction for VR HMD users
Dibeklioğlu et al. Combining facial dynamics with appearance for age estimation
CN111539912B (en) Health index evaluation method and equipment based on face structure positioning and storage medium
US10614174B2 (en) System and method for adding surface detail to digital crown models created using statistical techniques
KR101930851B1 (en) A skin analysis and diagnosis system for 3D face modeling
CN113436734A (en) Tooth health assessment method and device based on face structure positioning and storage medium
KR101948040B1 (en) Makeup recommendation method based on face type, and recording medium storing program for executing the same, and recording medium storing program for executing the same
KR20140067372A (en) Method and apparatus for mesuring of skin elasticity using moire image
JP5651385B2 (en) Face evaluation method
WO2020194488A1 (en) Device, method, program, and system for determining three-dimensional shape of face
Mejía et al. Head measurements from 3D point clouds
CN114743252B (en) Feature point screening method, device and storage medium for head model
JP7074422B2 (en) Aging analysis method
JP5897745B2 (en) Aging analysis method and aging analyzer
JP7439932B2 (en) Information processing system, data storage device, data generation device, information processing method, data storage method, data generation method, recording medium, and database
JP2015064823A (en) Cosmetic evaluation method, and facial expression wrinkle quantitation method
JP2007097950A (en) Lip makeup method
KR101779840B1 (en) Apparatus and method for analyzing face based on smile detection
Hsu et al. Extraction of visual facial features for health management
JP5959920B2 (en) Eye size impression determination method and apparatus
US20240070885A1 (en) Skeleton estimating method, device, non-transitory computer-readable recording medium storing program, system, trained model generating method, and trained model
Lin et al. Growth simulation of facial/head model from childhood to adulthood
Al-Meyah et al. 4D analysis of facial ageing using dynamic features
WO2023210341A1 (en) Method, device, and program for face classification
Alavani et al. Human face anthropometric measurements using consumer depth camera

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19921562; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2021508455; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19921562; Country of ref document: EP; Kind code of ref document: A1)