WO2020184288A1 - Method and system for predicting facial morphology in facial expression after treatment - Google Patents

Method and system for predicting facial morphology in facial expression after treatment

Info

Publication number
WO2020184288A1
Authority
WO
WIPO (PCT)
Prior art keywords
case
facial
patient
morphology
expression
Prior art date
Application number
PCT/JP2020/008895
Other languages
French (fr)
Japanese (ja)
Inventor
千尋 谷川
Original Assignee
国立大学法人大阪大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 国立大学法人大阪大学 filed Critical 国立大学法人大阪大学
Priority to JP2021504950A priority Critical patent/JPWO2020184288A1/ja
Publication of WO2020184288A1 publication Critical patent/WO2020184288A1/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C19/00 Dental auxiliary appliances
    • A61C19/04 Measuring instruments specially adapted for dentistry
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C7/00 Orthodontics, i.e. obtaining or maintaining the desired position of teeth, e.g. by straightening, evening, regulating, separating, or by correcting malocclusions

Definitions

  • the present invention relates to a method and a system for predicting the facial morphology of a patient after treatment by arithmetic processing.
  • In particular, the present invention relates to a method and a system capable of quantitatively predicting, with high accuracy, the facial morphology when a facial expression such as a smile is expressed after orthodontic treatment.
  • The human face strongly influences the psychological satisfaction of feeling socially accepted.
  • Facial expressions also play an important role as a means of nonverbal communication for conveying emotions and thoughts in social life. In modern orthodontic treatment, improving the morphology of the facial soft tissue is therefore recognized, from a socio-psychological standpoint, as one of the important therapeutic purposes.
  • When a dentist decides on a treatment policy for a patient with malocclusion, such as whether to extract teeth and whether surgery is necessary or camouflage treatment (treatment without surgery) is sufficient, it is indispensable to objectively evaluate the patient's three-dimensional facial morphology and to predict the prognosis of the facial morphology in order to make an accurate judgment.
  • Conventionally, the prediction of facial changes after orthodontic treatment has been based on the profile of the hard tissue (teeth and skeleton) and soft tissue (muscle and skin) of the patient before treatment, as shown on a lateral cephalometric radiograph (also referred to as a "cephalogram" or simply "cephalo"). For example, software that visualizes and simulates the post-treatment profile is in widespread use; on a two-dimensional cephalometric image displayed on a monitor, the hard tissue is moved and the soft tissue is displayed as moving accordingly.
  • Patent Document 1 discloses a method of predicting the frontal appearance of the face after surgery from a preoperative frontal cephalometric radiograph and an ordinary facial photograph of the patient, in the surgical orthodontic treatment of a patient with a jaw deformity.
  • However, the conventional cephalometric prediction algorithm is built on the premise that the amounts of movement of hard tissue such as teeth and jawbone and of soft tissue such as skin are in a simple proportional relationship, and the proportionality constant is specified based on the subjectivity or experience of a specialist. The predicted facial changes therefore vary among practitioners, and the prediction accuracy is not guaranteed in a quantitative, objective sense.
  • An object of the present invention is to provide a technique capable of quantitatively predicting a facial morphology in consideration of a patient's facial expression after treatment.
  • The present invention is a method of predicting the facial morphology of a patient when a facial expression is expressed after treatment, by arithmetic processing executed by a computer device. The arithmetic processing includes: a step of extracting, from case facial morphology data CF acquired with a three-dimensional measuring device from a plurality of treated patients (past patients who underwent the treatment are referred to as "case patients"), a set of multidimensional case feature vectors CV whose elements are a plurality of preselected feature variables; a step of extracting, from patient facial morphology data PF acquired with a three-dimensional measuring device from a new patient for whom treatment is being considered (referred to as the "evaluation target patient"), a multidimensional patient feature vector PV whose elements are the same feature variables; a step of selecting, from the set of case feature vectors CV, a plurality of approximate case feature vectors NCV close to the patient feature vector PV; a step of computing, for each case patient corresponding to the selected approximate case feature vectors NCV (an "approximate case patient"), a normalized pre-treatment resting approximate-case facial morphology model NCMr-pre from the pre-treatment resting case facial morphology data acquired with a three-dimensional measuring device at rest before treatment; a step of computing, for each approximate case patient, a normalized post-treatment expression approximate-case facial morphology model NCMs-post from the post-treatment expression case facial morphology data acquired with a three-dimensional measuring device when the facial expression was expressed after treatment; a step of computing the vector average of the models NCMr-pre to obtain the pre-treatment approximate-case vector average NCSpre; a step of computing the vector average of the models NCMs-post to obtain the post-treatment approximate-case vector average NCSpost; a step of computing the approximate-case vector average difference NCSpost-pre by subtracting NCSpre from NCSpost; a step of computing a normalized resting patient facial morphology model PFMr from the patient facial morphology data PF of the evaluation target patient; and a step of computing the predicted facial morphology model PFMs-prd at the time of expression, predicted after treatment of the evaluation target patient, by adding the approximate-case vector average difference NCSpost-pre to the resting patient facial morphology model PFMr.
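  • As a hedged illustration of the vector-average arithmetic in the method above, the following Python sketch assumes that every normalized facial morphology model (NCMr-pre, NCMs-post, PFMr) is represented as a NumPy array of homologous vertex coordinates; the function name and data layout are illustrative assumptions, not part of the patent.

```python
import numpy as np

def predict_post_treatment_expression(pfm_r, ncm_r_pre_list, ncm_s_post_list):
    """Minimal sketch of the claimed arithmetic, assuming each normalized model
    is a flat array of homologous vertex coordinates of equal length."""
    ncs_pre = np.mean(np.asarray(ncm_r_pre_list), axis=0)    # pre-treatment approximate-case vector average NCSpre
    ncs_post = np.mean(np.asarray(ncm_s_post_list), axis=0)  # post-treatment approximate-case vector average NCSpost
    ncs_diff = ncs_post - ncs_pre                            # approximate-case vector average difference NCSpost-pre
    return np.asarray(pfm_r) + ncs_diff                      # predicted facial morphology model PFMs-prd
```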
  • In the facial morphology prediction method, it is preferable that the case facial morphology data CF for each case patient includes the pre-treatment resting case facial morphology data CFr-pre and the pre-treatment expression case facial morphology data CFs-pre, and that the case feature vector CV is extracted based on the pre-treatment case morphology change amount CDpre, which is the amount of change between CFr-pre and CFs-pre. Likewise, it is preferable that the patient facial morphology data PF for the evaluation target patient includes the resting patient facial morphology data PFr and the expression patient facial morphology data PFs, and that the patient feature vector PV is extracted based on the patient morphology change amount PD, which is the amount of change between PFr and PFs.
  • It is also preferable that the arithmetic processing further includes a step of computing a normalized pre-orthodontic hard tissue morphology model HMpre based on image data of the hard tissue including the teeth of the evaluation target patient, a step of predicting the post-orthodontic hard tissue morphology model HMpost of the patient based on HMpre, and a step of incorporating HMpost into the predicted facial morphology model PFMs-prd at the time of expression and displaying the result.
  • Preferably, a predetermined number of approximate case feature vectors NCV are selected in ascending order of distance from the patient feature vector PV.
  • The arithmetic processing may also include a step of classifying the case feature vectors CV into a plurality of case classes by clustering and a step of computing a cluster centroid G for each case class; the case feature vectors CV belonging to the case class whose cluster centroid is closest to the patient feature vector PV may then be selected as the approximate case feature vectors NCV.
  • The present invention is also a system for predicting the facial morphology of a patient when a facial expression is expressed after treatment, comprising at least a feature vector extraction means, an approximate case selection means, and a prediction model calculation means, each realized by arithmetic processing of a computer device. The feature vector extraction means extracts, from case facial morphology data CF acquired with a three-dimensional measuring device from a plurality of case patients, a set of multidimensional case feature vectors CV whose elements are a plurality of preselected feature variables, and extracts, from patient facial morphology data PF acquired with a three-dimensional measuring device from a new patient for whom treatment is being considered (the evaluation target patient), a multidimensional patient feature vector PV whose elements are the same feature variables. The approximate case selection means selects, from the set of case feature vectors CV for the plurality of case patients, a plurality of approximate case feature vectors NCV close to the patient feature vector PV. The prediction model calculation means computes, for each approximate case patient corresponding to the selected approximate case feature vectors NCV, a normalized pre-treatment resting approximate-case facial morphology model NCMr-pre from the pre-treatment resting case facial morphology data and a normalized post-treatment expression approximate-case facial morphology model NCMs-post from the post-treatment expression case facial morphology data; computes their vector averages to obtain the pre-treatment approximate-case vector average NCSpre and the post-treatment approximate-case vector average NCSpost; computes the approximate-case vector average difference NCSpost-pre by subtracting NCSpre from NCSpost; computes a normalized resting patient facial morphology model PFMr from the patient facial morphology data PF of the evaluation target patient; and computes the predicted facial morphology model PFMs-prd at the time of expression, predicted after treatment of the evaluation target patient, by adding NCSpost-pre to PFMr.
  • As in the method, it is preferable that the case facial morphology data CF for each case patient includes the pre-treatment resting case facial morphology data CFr-pre and the pre-treatment expression case facial morphology data CFs-pre, with the case feature vector CV extracted from the pre-treatment case morphology change amount CDpre, and that the patient facial morphology data PF for the evaluation target patient includes the resting patient facial morphology data PFr and the expression patient facial morphology data PFs, with the patient feature vector PV extracted from the patient morphology change amount PD.
  • It is also preferable that the prediction model calculation means further computes a normalized pre-orthodontic hard tissue morphology model HMpre from image data of the hard tissue including the teeth of the evaluation target patient, predicts the post-orthodontic hard tissue morphology model HMpost based on HMpre, and incorporates it into the predicted facial morphology model PFMs-prd for display.
  • In the system too, a predetermined number of approximate case feature vectors NCV are preferably selected in ascending order of distance from the patient feature vector PV.
  • The approximate case selection means may classify the case feature vectors CV into a plurality of case classes by clustering and compute a cluster centroid G for each case class; the case feature vectors CV belonging to the case class whose cluster centroid is closest to the patient feature vector PV may then be selected as the approximate case feature vectors NCV.
  • According to the present invention, it is possible to easily and quantitatively predict the facial morphology of a patient when a facial expression is expressed after treatment, and thus to contribute to an appropriate decision on the treatment policy in consideration of the patient's facial expression.
  • FIG. 1 illustrates the schematic configuration of the facial morphology prediction system after orthodontic treatment.
  • various processing means described later are mainly realized by arithmetic processing of the computer device 10.
  • a large-capacity database 20, an input device 30, an output device 40, and the like are connected to the computer device 10.
  • the database 20 may be a hard disk or an optical disk directly connected to the computer device 10, or may be a data server or storage in a hospital, for example, which can be accessed from the computer device 10 via a network. Further, the database 20 may be provided in, for example, a cloud data center on a wide area network.
  • The database 20 stores case data for a plurality of treated patients (past patients who have undergone the treatment are referred to as "case patients"): primary data such as X-ray photograph data and three-dimensional facial morphology data, intermediate data such as facial morphology models normalized from the primary data and feature vectors (multivariate quantities) extracted from them, and evaluation data such as predicted facial morphology models. Access to the case data in the database 20 is preferably restricted to specific persons (for example, the doctors in charge) who are permitted to share and use the data.
  • the computer device 10 uses the case data stored in the database 20 to execute arithmetic processing for facial morphology prediction, which will be described later.
  • the input device 30 includes an operation input device linked with a human interface, such as a keyboard, a mouse, and a touch panel. Further, the input device 30 may be a device having a function of inputting data acquired or processed by another system to the computer device 10 via an information storage medium or a network.
  • the output device 40 includes, for example, an image display that three-dimensionally visualizes and displays predicted facial morphology data and a model. Further, the output device 40 may be a writing device of an information storage medium for providing data to another system or a communication device capable of outputting data to the outside via a network.
  • The facial morphology prediction system is configured so that facial photograph data and three-dimensional facial morphology data of the patient, taken in a hospital examination room or the like, can be taken into the computer device 10 via the database 20 or directly. For this purpose, the system includes a digital camera 61 and a three-dimensional measuring device 62.
  • As the three-dimensional measuring device 62, a general optical measuring device such as a three-dimensional camera, a three-dimensional scanner, or a three-dimensional laser profiler can be used to measure a predetermined part such as the entire face or the occlusal region of the patient.
  • the facial morphology data of these patients may be input from the input device 30 via the information storage medium, or may be input to the system via, for example, a hospital network.
  • the data of the X-ray photograph and the cephalo image may be input to the computer device 10 and / or the database 20.
  • the system may be equipped with an X-ray inspection apparatus 63, for example, via a hospital network.
  • the X-ray inspection device 63 may include an inspection device capable of taking a panoramic X-ray photograph of a patient's occlusal portion, CT image data, and the like.
  • The "case data" accumulated in the database 20 will now be described in detail.
  • N indicates the number of case patients (that is, the number of cases).
  • In detail, the case facial morphology data CF for each case patient includes "pre-treatment resting case facial morphology data CFr-pre", "pre-treatment expression case facial morphology data CFs-pre", "post-treatment resting case facial morphology data CFr-post", and "post-treatment expression case facial morphology data CFs-post".
  • FIG. 2 shows an example of case facial morphology data CF.
  • In this specification, a face "without an expressed facial expression" specifically means a resting face, and a face "with an expressed facial expression" specifically means a face expressing a smile.
  • Pretreatment resting case facial morphology data CFr-pre refers to facial morphology data obtained by three-dimensionally measuring the resting face in a state in which the case patient does not express a facial expression before treatment.
  • Case facial morphology data at the time of facial expression expression before treatment CFs-pre refers to facial morphological data obtained by three-dimensionally measuring a face in which a case patient expresses a smiling facial expression, for example, before treatment.
  • Post-treatment resting case facial morphology data CFr-post refers to facial morphological data obtained by three-dimensionally measuring the resting face of a case patient who does not express a facial expression after treatment.
  • Post-treatment expression case facial morphology data CFs-post refers to facial morphology data obtained by three-dimensionally measuring, for example, a face in which the case patient expresses a smiling facial expression after treatment.
  • The number of acquired three-dimensional facial morphology data points differs depending on the size of each patient's face.
  • the position of the origin differs depending on the standing position of the patient who was photographed.
  • In order to enable quantitative comparison and statistical processing of the facial morphology of each patient, a morphological modeling means 110 is provided that converts the three-dimensional facial morphology data into a normalized facial morphology model. The morphological modeling means 110 normalizes the data, for example, by extracting predetermined anatomical feature points from the patient's three-dimensional facial morphology data and arranging the feature points on polygons having the same number of points and the same topological structure.
  • the morphological model constructed by such a method is generally called a "homology model", and for example, an HBM (Homologous Body Modeling) program provided by AIST (National Institute of Advanced Industrial Science and Technology) can be used.
  • the morphological modeling means 110 normalizes the above-mentioned case facial morphological data CF for each case patient and performs a process of constructing a “case facial morphological model CFM”.
  • FIG. 3 shows an example of a case facial morphology model CFM.
  • The model data obtained by normalizing the pre-treatment resting case facial morphology data CFr-pre is referred to as the "pre-treatment resting case facial morphology model CFMr-pre", the model data obtained by normalizing the pre-treatment expression case facial morphology data CFs-pre as the "pre-treatment expression case facial morphology model CFMs-pre", the model data obtained by normalizing the post-treatment resting case facial morphology data CFr-post as the "post-treatment resting case facial morphology model CFMr-post", and the model data obtained by normalizing the post-treatment expression case facial morphology data CFs-post as the "post-treatment expression case facial morphology model CFMs-post".
  • the case data further includes the "case morphological change amount CD" for N case patients.
  • In detail, the case morphology change amount CD for each case patient includes the "pre-treatment case morphology change amount CDpre" and the "post-treatment case morphology change amount CDpost".
  • The morphological change amount calculation means 120 calculates the amount of change between the pre-treatment resting case facial morphology model CFMr-pre and the pre-treatment expression case facial morphology model CFMs-pre to obtain the "pre-treatment case morphology change amount CDpre". It also calculates the amount of change between the post-treatment resting case facial morphology model CFMr-post and the post-treatment expression case facial morphology model CFMs-post to obtain the "post-treatment case morphology change amount CDpost".
  • FIG. 4 shows an example in which the case morphology change amount CDpre is obtained from the case facial morphology models CFMr-pre and CFMs-pre before treatment.
  • The "morphology change amount" includes information on the amount and direction of soft tissue change when the patient changes from a resting face to a smiling face, and can be displayed as three-dimensional image data.
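  • A minimal sketch of this change-amount computation is shown below; it assumes, as an illustration only, that the two normalized homologous models are arrays of shape (n_vertices, 3) with matching vertex order.

```python
import numpy as np

def morphology_change(cfm_rest, cfm_smile):
    """Hypothetical sketch: per-vertex change between a resting model and a
    smiling model that share the same vertex topology (arrays of shape (n, 3))."""
    disp = np.asarray(cfm_smile, dtype=float) - np.asarray(cfm_rest, dtype=float)
    magnitude = np.linalg.norm(disp, axis=1)                  # amount of soft-tissue change
    direction = disp / np.maximum(magnitude, 1e-12)[:, None]  # unit direction of change
    return disp, magnitude, direction
```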
  • The feature vector extraction means 130 performs a process of extracting, from the case facial morphology model CFM of each case patient, a multidimensional "case morphology feature vector CFV" whose elements are a plurality of preselected "feature variables" (values of feature parameters).
  • the "feature parameter” is a geometric parameter that characteristically represents a morphology such as a human face, and is selected in advance by a specialist, for example, based on his / her experience and knowledge.
  • FIG. 5 shows an example of feature parameters selected in the human face outline.
  • Several inflection points can be recognized in the morphology of the human face. For example, the corner of a boundary line such as that of an eye or the nose, the most prominent position in three dimensions, or the most recessed position can be selected.
  • inflection points are referred to as "landmarks" and are used in the definition of feature parameters.
  • A landmark need not be an inflection point itself, and is not particularly limited as long as it can be geometrically defined, such as the midpoint of a straight line connecting two inflection points.
  • The outline of the face can be extracted as follows. First, the surface normal at each point of the three-dimensional surface data is calculated by an arithmetic program customized for measuring the facial morphology from the frontal image of the face. The angle between the z-axis and the surface normal is then calculated for each coordinate of the facial surface. Each coordinate point at which this angle is, for example, 60 degrees is extracted, and the line connecting these points is used as the outline of the face. The angle defining the outline is preferably between 45 and 90 degrees.
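  • A hedged sketch of this outline extraction follows; the vertex and normal arrays and the tolerance value are assumptions for illustration and are not prescribed by the patent.

```python
import numpy as np

def face_outline_points(vertices, normals, angle_deg=60.0, tol_deg=1.0):
    """Hypothetical sketch: keep the surface points whose normal makes roughly
    the chosen angle with the z-axis (the viewing direction of the frontal image)."""
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    angles = np.degrees(np.arccos(np.clip(n[:, 2], -1.0, 1.0)))  # angle between normal and z-axis
    mask = np.abs(angles - angle_deg) <= tol_deg                 # e.g. points near 60 degrees
    return np.asarray(vertices)[mask]
```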
  • One example of a feature parameter is the distance between landmarks.
  • For example, the feature parameter v1 shown in FIG. 5 is defined as the distance between the outer corners of the eyes Ex. Another example is the distance between a line and a landmark, for example between the line connecting the outermost point Zy' of the face and the chin point Gn, and the cheek point Go'.
  • Another example of a feature parameter is the angle of the line connecting the landmarks.
  • For example, the angle of the feature parameter v4 is determined by the positional relationship among the outermost point Zy' of the face, the cheek point Go', and the cheek.
  • A distance feature parameter may also be made dimensionless and adopted as a feature parameter. Deviations from a plurality of average values and ratios to the average may likewise be considered as feature parameters.
  • In FIGS. 6 to 8, a plurality of feature parameters are selected from cross sections based on three-dimensional data obtained by photographing specific parts of a human face. These cross sections are created by data processing based on anatomical measurement points after determining the three-dimensional coordinate system.
  • FIG. 6 shows, as an example, a yz cross section when the subject's face is cut at a line connecting the outer corner of the eye Ex and the corner point Ch of the mouth.
  • In this cross section, feature parameters such as the angle (v7) of the mouth corner Ch in the z-axis direction with the outer eye corner Ex as the base point, the angle (v8) of the cheek protrusion P(Ex-Ch) in the cross section with the outer eye corner Ex as the base point, the length of the outer curve between the outer eye corner Ex and the mouth corner Ch (v12), and the area enclosed by the outer curve (v13) can be selected.
  • FIG. 7 shows an xz cross section when the subject's face is cut in a horizontal plane passing through the subnasal point Sn.
  • FIG. 8 illustrates an xz cross section when the subject's face is cut in a horizontal plane passing through the most apex point Pm of the nose.
  • In these cross sections, the amount of protrusion of facial parts in the z direction (v14, v18), the angle of the apex (v16, v20), the amount of protrusion at various cross-sectional positions (v17, v22, v23), the angle of the concave point (v21), and the like can be selected as feature parameters.
  • the cross section that characterizes the facial morphology may be a cross section that passes through, for example, the glabellar point Gla, the nose root point N, the upper lip point Ls, the lower lip point Li, and the chin point Sm. Further, a difference or a ratio with respect to the z average value of a specific part may be added to the feature parameter.
  • the feature vector extraction means 130 performs a process of measuring feature variables corresponding to each of a plurality of selected and set feature parameters from the patient's three-dimensional facial morphology data.
  • the feature vector extraction means 130 extracts an n-dimensional "feature vector V" having the measured n feature variables v as vector elements.
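  • The following sketch illustrates, under stated assumptions, how a small feature vector of this kind could be assembled; the landmark names in the dictionary and the exact definition of v4 are illustrative guesses, not taken from the patent figures.

```python
import numpy as np

def example_feature_vector(landmarks):
    """Hypothetical sketch: build a feature vector V from named 3-D landmarks
    (dict of name -> np.array([x, y, z])). v1 and v4 only illustrate the
    distance / angle style of feature parameters described in the text."""
    v1 = np.linalg.norm(landmarks["Ex_right"] - landmarks["Ex_left"])  # distance between outer eye corners
    a = landmarks["Zy'"] - landmarks["Go'"]
    b = landmarks["Gn"] - landmarks["Go'"]
    cos_v4 = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    v4 = np.degrees(np.arccos(np.clip(cos_v4, -1.0, 1.0)))             # angle at the cheek point Go'
    return np.array([v1, v4])
```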
  • Specifically, the feature vector extraction means 130 performs a process of extracting a multidimensional "case morphology feature vector CFV" based on the case facial morphology data CF of each case patient, before and after treatment and both at rest and when the smiling expression is expressed.
  • FIG. 9 shows an example of extracting the case morphology feature vector CFV from the case face morphology model CFM.
  • The feature vector extracted from the pre-treatment resting case facial morphology data CFr-pre is called the "pre-treatment resting case morphology feature vector CFVr-pre", the one extracted from the pre-treatment expression case facial morphology data CFs-pre the "pre-treatment expression case morphology feature vector CFVs-pre", the one extracted from the post-treatment resting case facial morphology data CFr-post the "post-treatment resting case morphology feature vector CFVr-post", and the one extracted from the post-treatment expression case facial morphology data CFs-post the "post-treatment expression case morphology feature vector CFVs-post".
  • the feature vector extraction means 130 can also extract a multidimensional "case morphology change amount feature vector CDV" from the case morphology change amount CD showing the soft tissue change amount at rest and when smiling for each case patient.
  • FIG. 10 shows an example of extracting the case morphology change amount feature vector CDV from the case morphology change amount CD.
  • The feature vector extracted from the pre-treatment case morphology change amount CDpre is called the "pre-treatment case morphology change feature vector CDVpre", and the one extracted from the post-treatment case morphology change amount CDpost the "post-treatment case morphology change feature vector CDVpost".
  • The "case feature vector CV" used as the basis variate when selecting the "approximate case patients" described later is any one feature vector selected from the pre-treatment resting case morphology feature vector CFVr-pre, the pre-treatment expression case morphology feature vector CFVs-pre, and the pre-treatment case morphology change feature vector CDVpre, or an extended feature vector obtained by combining the feature variables of two or more of them, as in the following examples.
  • (Example 1) Pre-treatment resting case morphology feature vector CFVr-pre; (Example 2) pre-treatment expression case morphology feature vector CFVs-pre; (Example 3) pre-treatment case morphology change feature vector CDVpre; (Example 4) CFVr-pre + CFVs-pre; (Example 5) CFVr-pre + CDVpre; (Example 6) CFVs-pre + CDVpre; and so on.
  • The database 20 stores, as knowledge, the set of case feature vectors CV(1), CV(2), CV(3), ..., CV(N) corresponding to the N case patients, extracted from the case morphology feature vectors CFV and/or the case morphology change feature vectors CDV.
  • The set of case feature vectors CV for the plurality of case patients may also be clustered and stored as knowledge in the database 20.
  • For the clustering, a general vector quantization method such as the Lloyd method or the k-means method can be used.
  • the "distance" between the vectors may be either the Euclidean distance or the Manhattan distance.
  • For example, the primary cluster centroid G*(l) at the shortest distance from each case feature vector CV is found, and the group of case feature vectors CV whose shortest-distance centroid is G*(l) is reorganized into a secondary cluster CL**(l). The secondary cluster centroid G**(l) is then obtained for the secondary cluster CL**(l), a tertiary cluster centroid G***(l) is obtained from the group of case feature vectors at the shortest distance, and so on, until the assignments converge.
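  • A minimal sketch of this clustering step is shown below, assuming the case feature vectors are stored as a NumPy array; the use of scikit-learn's k-means is an implementation choice for illustration, not specified by the patent.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_case_vectors(cv_set, n_classes):
    """Hypothetical sketch: classify the case feature vectors CV into case
    classes and return the cluster centres of gravity G, using k-means."""
    cv_set = np.asarray(cv_set, dtype=float)        # shape: (N cases, n feature variables)
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(cv_set)
    return km.labels_, km.cluster_centers_          # case class of each CV, centroids G
```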
  • the process of optimizing the number of clusters may be performed by the following algorithm.
  • For each number of clusters C, the distance from each cluster centroid to every other cluster centroid is calculated, and the minimum distance Dc(l)min is obtained. The inter-cluster distance Dc, which is the average of the minimum distances Dc(l)min over the clusters, is then calculated as in formula (1). The inter-cluster distance Dc is obtained for each candidate number of clusters (for example, D3, D4, ..., D12), and the change ΔDc shown in formula (2) is obtained. The number C + 1, obtained by adding 1 to the C having the maximum ΔDc, can then be determined as the optimum number of clusters.
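  • The sketch below implements one plausible reading of formulas (1) and (2), namely that Dc is the average over clusters of each centroid's distance to its nearest other centroid and that ΔDc is the drop in Dc when one more cluster is used; both this reading and the helper functions are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist

def intercluster_distance(cv_set, n_clusters):
    """Mean over clusters of the distance to the nearest other cluster centroid
    (one plausible reading of formula (1))."""
    g = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(cv_set).cluster_centers_
    d = cdist(g, g)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()              # average of Dc(l)min over the clusters

def optimal_cluster_count(cv_set, c_min=3, c_max=12):
    """Hypothetical sketch: choose C + 1 where the drop ΔDc = Dc(C) - Dc(C+1)
    is largest (one plausible reading of formula (2))."""
    cv_set = np.asarray(cv_set, dtype=float)
    dc = {c: intercluster_distance(cv_set, c) for c in range(c_min, c_max + 1)}
    delta = {c: dc[c] - dc[c + 1] for c in range(c_min, c_max)}
    best_c = max(delta, key=delta.get)
    return best_c + 1
```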
  • The data of the case feature vectors CV classified into each case class CL, together with their cluster centroids G, are stored as knowledge in the database 20.
  • FIG. 11 is a block diagram showing an outline of the face morphology prediction method
  • FIG. 12 is a flowchart thereof.
  • In step S1, the face of a new patient for whom treatment is being considered (the "evaluation target patient") is measured using the three-dimensional measuring device 62 before treatment, and the patient facial morphology data PF are acquired.
  • At least two types of patient facial morphology data PF are acquired: the "resting patient facial morphology data PFr", measured with the face at rest, that is, without an expressed facial expression, and the "expression patient facial morphology data PFs", measured with the face expressing a smile.
  • the morphology modeling means 110 calculates a "patient facial morphology model PFM" normalized by using, for example, the above-mentioned homology model algorithm, based on the patient facial morphology data PF.
  • The facial morphology model constructed from the resting patient facial morphology data PFr is called the "resting patient facial morphology model PFMr", and the one constructed from the expression patient facial morphology data PFs is called the "expression patient facial morphology model PFMs".
  • the morphological change amount calculation means 120 obtains the "patient morphological change amount PD" by calculating the change amount of the resting patient facial morphology model PFMr and the facial expression expression patient facial morphological model PFMs.
  • the feature vector extraction means 130 extracts a multidimensional "patient feature vector PV" having a plurality of feature variables as elements from the patient facial morphology model PFM and / or the patient morphology change amount PD.
  • The feature vector extracted from the resting patient facial morphology model PFMr is called the "resting patient morphology feature vector PFVr", the one extracted from the expression patient facial morphology model PFMs the "expression patient morphology feature vector PFVs", and the one extracted from the patient morphology change amount PD the "patient morphology change feature vector PDV".
  • The patient feature vector PV can be any one feature vector selected from the resting patient morphology feature vector PFVr, the expression patient morphology feature vector PFVs, and the patient morphology change feature vector PDV, or an extended feature vector obtained by combining the feature variables of two or more of them.
  • (Example 1) Resting patient morphology feature vector PFVr; (Example 2) expression patient morphology feature vector PFVs; (Example 3) patient morphology change feature vector PDV; (Example 4) PFVr + PFVs; (Example 5) PFVr + PDV; (Example 6) PFVs + PDV; and so on.
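  • A hedged sketch of how an extended feature vector such as Example 5 could be formed is given below; the feature values in the usage line are made up purely for illustration.

```python
import numpy as np

def extended_patient_vector(pfv_r, pdv):
    """Hypothetical sketch of Example 5: an extended patient feature vector PV
    obtained by concatenating the feature variables of PFVr and PDV."""
    return np.concatenate([np.asarray(pfv_r, dtype=float), np.asarray(pdv, dtype=float)])

# Usage with made-up feature variables (values are purely illustrative):
pv = extended_patient_vector([102.3, 41.7], [1.8, -0.4, 2.6])
```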
  • the case feature vector CV selected here that approximates the patient feature vector PV is referred to as an “approximate case feature vector NCV”.
  • the population of the case feature vector CV is narrowed down to cases in which, for example, gender, age, treatment site, hard tissue (dentition, etc.) are common or similar to the patient to be evaluated.
  • (Example 1-1) For example, when the "resting patient morphology feature vector PFVr" is adopted as the patient feature vector PV, the case feature vector CV is the "pre-treatment resting case morphology feature vector CFVr-pre".
  • Since the morphology change feature vector of Example 1-3 includes, as features, information such as the amount of soft tissue deformation of the face, the direction of deformation, and the softness of the tissue, selecting approximate cases with this vector as the basis variate can increase the accuracy of predicting the facial morphology when the smiling expression of the evaluation target patient is expressed.
  • the approximate case selection means 140 can select the case feature vector CV having a predetermined number of cases k in ascending order of distance from the patient feature vector PV.
  • The number of approximate cases k is determined, for example, by the empirical judgment of a specialist doctor.
  • As described above, the set CV(j) of case feature vectors CV corresponding to the N case patients is stored as knowledge in the database 20.
  • The approximate case selection means 140 selects k approximate cases in ascending order of the distance between CV(j) and the patient feature vector PV.
  • the "distance" between the vectors may be either the Euclidean distance or the Manhattan distance.
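  • A minimal sketch of this nearest-case selection is shown below; the array layout and function name are assumptions for illustration.

```python
import numpy as np

def select_approximate_cases(pv, cv_set, k, metric="euclidean"):
    """Hypothetical sketch: pick the k case feature vectors CV closest to the
    patient feature vector PV, using the Euclidean or Manhattan distance."""
    pv = np.asarray(pv, dtype=float)
    cv_set = np.asarray(cv_set, dtype=float)
    if metric == "euclidean":
        d = np.linalg.norm(cv_set - pv, axis=1)
    else:                                       # "manhattan"
        d = np.abs(cv_set - pv).sum(axis=1)
    nearest = np.argsort(d)[:k]                 # indices of the approximate case patients
    return nearest, cv_set[nearest]             # the selected NCV set
```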
  • Alternatively, the approximate case selection means 140 may select, as the approximate case feature vectors NCV, the case feature vectors CV belonging to the case class CL whose cluster centroid G is closest to the patient feature vector PV; that case class is referred to as the approximate case class NCL.
  • the "distance" in this case may be either the Euclidean distance or the Manhattan distance.
  • the approximate case selection means 140 selects a set of all the case feature vectors CV belonging to the approximate case class NCL as the approximate case feature vector NCV.
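  • A hedged sketch of this class-based selection follows; it assumes the centroids and class labels produced by the clustering sketch above, and the function name is illustrative.

```python
import numpy as np

def select_approximate_class(pv, centroids, case_class, cv_set):
    """Hypothetical sketch: find the cluster centre of gravity G nearest to PV
    and return every case feature vector CV belonging to that case class (NCL)."""
    pv = np.asarray(pv, dtype=float)
    d = np.linalg.norm(np.asarray(centroids, dtype=float) - pv, axis=1)
    ncl = int(np.argmin(d))                              # approximate case class NCL
    members = np.asarray(case_class) == ncl
    return ncl, np.asarray(cv_set, dtype=float)[members] # all CV in the NCL as the NCV set
```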
  • In step S3 of FIG. 12, the prediction model calculation means 150 calculates, based on the patient facial morphology model PFM of the evaluation target patient, the "predicted facial morphology model PFMs-prd" at the time of expressing a smiling expression, predicted after treatment of the evaluation target patient.
  • (Example 3-1) In this example, as shown in FIG. 13, the predicted facial morphology model PFMs-prd at the time of smiling, predicted after treatment, is calculated based on the patient facial morphology model PFMs at the time of smiling of the evaluation target patient.
  • In step S11, a set of "pre-treatment expression approximate-case facial morphology models NCMs-pre", normalized based on the pre-treatment expression case facial morphology data NCFs-pre of each approximate case patient, is calculated. In step S12, a normalized "post-treatment expression approximate-case facial morphology model NCMs-post" is calculated based on the post-treatment expression case facial morphology data NCFs-post of each approximate case patient.
  • In step S13, the vector average of the set of pre-treatment expression approximate-case facial morphology models NCMs-pre is calculated to obtain the "pre-treatment approximate-case vector average NCApre". In step S14, the vector average of the set of post-treatment expression approximate-case facial morphology models NCMs-post is calculated to obtain the "post-treatment approximate-case vector average NCApost".
  • In step S15, the "approximate-case vector average difference NCApost-pre" is calculated by subtracting the pre-treatment approximate-case vector average NCApre from the post-treatment approximate-case vector average NCApost.
  • In step S16, the approximate-case vector average difference NCApost-pre calculated in step S15 is added to the patient facial morphology model PFMs at the time of expression of the evaluation target patient, thereby calculating the predicted facial morphology model PFMs-prd at the time of expressing a smiling expression, predicted after treatment of the evaluation target patient.
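  • Steps S11 to S16 can be sketched as below, again under the assumption that each normalized model is a flat array of homologous vertex coordinates; the function name is illustrative.

```python
import numpy as np

def predict_smile_from_smile(pfm_s, ncm_s_pre_list, ncm_s_post_list):
    """Hypothetical sketch of steps S11-S16 (Example 3-1), assuming every model
    is a flat array of homologous vertex coordinates."""
    nca_pre = np.mean(np.asarray(ncm_s_pre_list), axis=0)    # S13: NCApre
    nca_post = np.mean(np.asarray(ncm_s_post_list), axis=0)  # S14: NCApost
    nca_diff = nca_post - nca_pre                            # S15: NCApost-pre
    return np.asarray(pfm_s) + nca_diff                      # S16: PFMs-prd
```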
  • Alternatively, the prediction model calculation means 150 may calculate the predicted facial morphology model PFMs-prd at the time of smiling, predicted after treatment, based on the resting patient facial morphology model PFMr of the evaluation target patient.
  • In step S21, a set of normalized "pre-treatment resting approximate-case facial morphology models NCMr-pre" is calculated based on the pre-treatment resting case facial morphology data NCFr-pre of each approximate case patient.
  • In step S22, a normalized "post-treatment expression approximate-case facial morphology model NCMs-post" is calculated based on the post-treatment expression case facial morphology data NCFs-post of each approximate case patient.
  • In step S23, the vector average of the set of pre-treatment resting approximate-case facial morphology models NCMr-pre is calculated to obtain the "pre-treatment approximate-case vector average NCApre". In step S24, the vector average of the set of post-treatment expression approximate-case facial morphology models NCMs-post is calculated to obtain the "post-treatment approximate-case vector average NCApost".
  • In step S25, the "approximate-case vector average difference NCApost-pre" is calculated by subtracting the pre-treatment approximate-case vector average NCApre from the post-treatment approximate-case vector average NCApost.
  • In step S26, the approximate-case vector average difference NCApost-pre calculated in step S25 is added to the resting patient facial morphology model PFMr of the evaluation target patient, thereby calculating the predicted facial morphology model PFMs-prd at the time of expressing a smiling expression, predicted after treatment of the evaluation target patient.
  • In step S4 (see FIG. 12) of displaying the predicted patient facial morphology model, it is preferable to output, on the output device 40 such as a display, an image of the tooth alignment after treatment together with the predicted facial morphology model PFMs-prd at the time of expressing a smiling expression.
  • To this end, the arithmetic processing by the computer device 10 can include a step of calculating a normalized pre-orthodontic hard tissue morphology model HMpre based on image data of the hard tissue including the dental skeleton of the evaluation target patient, a step of predicting the post-orthodontic hard tissue morphology model HMpost of the patient based on HMpre, and a step of incorporating the predicted HMpost into the predicted facial morphology model PFMs-prd at the time of expressing a smiling expression after treatment obtained in step S3.
  • The image data of the hard tissue are, for example, three-dimensional image data obtained by measuring the tooth portion of the patient with a three-dimensional measuring device, CT images including the dental skeleton of the occlusal region, cephalometric images, panoramic X-ray photographs, and the like.
  • the HMpre may be constructed by combining these two-dimensional and / or three-dimensional image data.
  • By displaying the tooth alignment predicted after treatment on the image of the predicted facial morphology at the time of the smiling expression, that is, by combining a three-dimensional or two-dimensional dentition prediction image with the mouth portion of the predicted smile, a prediction that includes the appearance of the tooth alignment becomes possible.
  • the three-dimensional facial morphology of a patient after orthodontic treatment can be predicted easily and quantitatively.
  • Since the facial morphology at the time of the smiling expression can be predicted before treatment, the method can contribute to an appropriate judgment as to whether or not a treatment plan will create a "good smile" that leads to aesthetic improvement of the patient's facial expression.
  • The facial morphology prediction system and method according to the present invention can be used not only for orthodontic treatment but also, for example, for the surgical treatment of patients with jaw deformities. They can also be used for prediction when maxillofacial surgery (including oral surgery and plastic surgery) is performed alone or jointly with orthodontic treatment or jaw prosthesis treatment. Furthermore, application to the prediction of age-related changes in facial morphology can be expected.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Epidemiology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Engineering & Computer Science (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

This method for quantitatively predicting the facial morphology of a patient in consideration of an expression after treatment comprises the steps of: acquiring patient morphology models PFM of a resting state and a smiling state of a patient being evaluated; extracting a patient feature vector PV from the patient morphology models PFM; extracting case feature vectors CV from facial morphology models CFM of patients of a plurality of cases for the resting state and the smiling state thereof and before and after treatment; selecting a similar case, the case feature vector CV of which has a small distance to the patient feature vector PV; computing a vector mean difference NCA post-pre of the facial morphology models of the smiling state of the patient of the similar case before and after treatment; and acquiring a predicted morphology model PFMs-prd of a predicted smiling face after treatment by adding the vector mean difference NCA post-pre of the patient of the similar case to the patient morphology models PFM of the resting state or the smiling state of the patient being evaluated.

Description

Method and system for predicting facial morphology when a facial expression is expressed after treatment
 The present invention relates to a method and a system for predicting the facial morphology of a patient after treatment by arithmetic processing. In particular, it relates to a method and a system capable of quantitatively predicting, with high accuracy, the facial morphology when a facial expression such as a smile is expressed after orthodontic treatment.
 The human face strongly influences the psychological satisfaction of feeling socially accepted. Facial expressions also play an important role as a means of nonverbal communication for conveying emotions and thoughts in social life. In modern orthodontic treatment, improving the morphology of the facial soft tissue is therefore recognized, from a socio-psychological standpoint, as one of the important therapeutic purposes.
 For example, when a dentist decides on a treatment policy for a patient with malocclusion, such as whether to extract teeth and whether surgery is necessary or camouflage treatment (treatment without surgery) is sufficient, it is indispensable to objectively evaluate the patient's three-dimensional facial morphology and to predict the prognosis of the facial morphology in order to make an accurate judgment.
 Conventionally, the prediction of facial changes after orthodontic treatment has been based on the profile of the hard tissue (teeth and skeleton) and soft tissue (muscle and skin) of the patient before treatment, as shown on a lateral cephalometric radiograph (also referred to as a "cephalogram" or simply "cephalo"). For example, software that visualizes and simulates the post-treatment profile is in widespread use; on a two-dimensional cephalometric image displayed on a monitor, the hard tissue is moved and the soft tissue is displayed as moving accordingly.
 For example, Patent Document 1 discloses a method of predicting the frontal appearance of the face after surgery from a preoperative frontal cephalometric radiograph and an ordinary facial photograph of the patient, in the surgical orthodontic treatment of a patient with a jaw deformity.
 However, the conventional cephalometric prediction algorithm is built on the premise that the amounts of movement of hard tissue such as teeth and jawbone and of soft tissue such as skin are in a simple proportional relationship, and the proportionality constant is specified based on the subjectivity or experience of a specialist. The predicted facial changes therefore vary among practitioners, and the prediction accuracy is not guaranteed in a quantitative, objective sense.
 Therefore, in order to eliminate as much as possible the bias in facial morphology and function evaluation that has depended on the subjective evaluation of the practitioner, the inventors have turned the measurement information of a large number of past case patients into knowledge and have studied and attempted to establish a more quantitative and objective evaluation method for post-treatment facial morphology (see, for example, Patent Document 2).
Patent Document 1: Japanese Unexamined Patent Publication No. 2014-171702. Patent Document 2: International Publication No. 2017/069231.
 Through a preliminary analysis in that research, the inventors found that there are cases with disease-specific morphological facial distortion (impairment of appearance and facial expression) both at rest and when a facial expression is expressed, and that three-dimensional facial measurement at rest and when a smiling expression is expressed is effective for detecting this distortion. As described above, improving facial expression is recognized as one of the important purposes of orthodontic treatment. Nevertheless, it has been difficult, even for specialized dentists, to predict and examine in advance whether a treatment plan will produce a "good smile".
 An object of the present invention is to provide a technique capable of quantitatively predicting the facial morphology of a patient after treatment in consideration of the patient's facial expression.
 In order to solve the above problems, the present invention is a method of predicting the facial morphology of a patient when a facial expression is expressed after treatment, by arithmetic processing executed by a computer device, the arithmetic processing including: a step of extracting, from case facial morphology data CF acquired with a three-dimensional measuring device from a plurality of treated patients (past patients who underwent the treatment are referred to as "case patients"), a set of multidimensional case feature vectors CV whose elements are a plurality of preselected feature variables; a step of extracting, from patient facial morphology data PF acquired with a three-dimensional measuring device from a new patient for whom treatment is being considered (referred to as the "evaluation target patient"), a multidimensional patient feature vector PV whose elements are the same feature variables; a step of selecting, from the set of case feature vectors CV for the plurality of case patients, a plurality of approximate case feature vectors NCV close to the patient feature vector PV; a step of computing, for each case patient corresponding to the selected approximate case feature vectors NCV (referred to as an "approximate case patient"), a normalized pre-treatment resting approximate-case facial morphology model NCMr-pre from the pre-treatment resting case facial morphology data acquired with a three-dimensional measuring device at rest before treatment; a step of computing, for each approximate case patient, a normalized post-treatment expression approximate-case facial morphology model NCMs-post from the post-treatment expression case facial morphology data acquired with a three-dimensional measuring device when the facial expression was expressed after treatment; a step of computing the vector average of the models NCMr-pre to obtain the pre-treatment approximate-case vector average NCSpre; a step of computing the vector average of the models NCMs-post to obtain the post-treatment approximate-case vector average NCSpost; a step of computing the approximate-case vector average difference NCSpost-pre by subtracting NCSpre from NCSpost; a step of computing a normalized resting patient facial morphology model PFMr from the patient facial morphology data PF of the evaluation target patient; and a step of computing the predicted facial morphology model PFMs-prd at the time of expression, predicted after treatment of the evaluation target patient, by adding the approximate-case vector average difference NCSpost-pre to the resting patient facial morphology model PFMr.
 In the facial morphology prediction method, it is preferable that the case facial morphology data CF for each case patient includes the pre-treatment resting case facial morphology data CFr-pre and the pre-treatment expression case facial morphology data CFs-pre, and that the case feature vector CV is extracted based on the pre-treatment case morphology change amount CDpre, which is the amount of change between CFr-pre and CFs-pre; and that the patient facial morphology data PF for the evaluation target patient includes the resting patient facial morphology data PFr and the expression patient facial morphology data PFs, and that the patient feature vector PV is extracted based on the patient morphology change amount PD, which is the amount of change between PFr and PFs.
 Preferably, in the facial morphology prediction method, the arithmetic processing further includes: a step of computing a normalized pre-correction hard tissue morphology model HMpre based on image data obtained by imaging the hard tissue, including the teeth, of the evaluation target patient; a step of predicting a post-correction hard tissue morphology model HMpost of the evaluation target patient based on the pre-correction hard tissue morphology model HMpre; and a step of incorporating the post-correction hard tissue morphology model HMpost into the predicted facial morphology model PFMs-prd at the time of facial expression and displaying the result.
 Preferably, in the facial morphology prediction method, a predetermined number of approximate case feature vectors NCV are selected in ascending order of distance from the patient feature vector PV.
 In the facial morphology prediction method, the arithmetic processing may further include a step of classifying the case feature vectors CV into a plurality of case classes by clustering and a step of computing a cluster centroid G for each case class; among the cluster centroids G of the classified case classes, the case feature vectors CV belonging to the case class whose cluster centroid is closest to the patient feature vector PV may be selected as the approximate case feature vectors NCV.
 The present invention also provides a system for predicting the facial morphology of a patient at the time of facial expression after treatment, the system including at least feature vector extraction means, approximate case selection means, and prediction model computation means, each realized by arithmetic processing of a computer device. The feature vector extraction means executes a process of extracting a set of multidimensional case feature vectors CV, whose elements are a plurality of preselected feature variables, on the basis of case facial morphology data CF acquired with a three-dimensional measuring device from a plurality of patients who have undergone the treatment (these past treated patients are referred to as "case patients"), and a process of extracting a multidimensional patient feature vector PV, whose elements are the same plurality of feature variables, on the basis of patient facial morphology data PF acquired with a three-dimensional measuring device from a new patient for whom treatment is being considered (this new patient is referred to as the "evaluation target patient"). The approximate case selection means executes a process of selecting, from the set of case feature vectors CV for the plurality of case patients, a plurality of approximate case feature vectors NCV that approximate the patient feature vector PV. The prediction model computation means executes: a process of computing, for each case patient corresponding to the selected approximate case feature vectors NCV (each such selected case patient is referred to as an "approximate case patient"), a normalized pre-treatment resting approximate case facial morphology model NCMr-pre based on pre-treatment resting case facial morphology data acquired with a three-dimensional measuring device at rest before treatment; a process of computing, for each approximate case patient, a normalized post-treatment expression approximate case facial morphology model NCMs-post based on post-treatment expression case facial morphology data acquired with a three-dimensional measuring device at the time of facial expression after treatment; a process of computing the vector average of the pre-treatment resting approximate case facial morphology models NCMr-pre to obtain a pre-treatment approximate case vector average NCSpre; a process of computing the vector average of the post-treatment expression approximate case facial morphology models NCMs-post to obtain a post-treatment approximate case vector average NCSpost; a process of computing an approximate case vector average difference NCSpost-pre by subtracting the pre-treatment approximate case vector average NCSpre from the post-treatment approximate case vector average NCSpost; a process of computing a resting patient facial morphology model PFMr normalized on the basis of the patient facial morphology data PF of the evaluation target patient; and a process of computing a predicted facial morphology model PFMs-prd at the time of facial expression, which is predicted for the evaluation target patient after treatment, by adding the approximate case vector average difference NCSpost-pre to the resting patient facial morphology model PFMr.
 Preferably, in the facial morphology prediction system, the case facial morphology data CF for each case patient includes pre-treatment resting case facial morphology data CFr-pre and pre-treatment expression case facial morphology data CFs-pre of that case patient, and the case feature vector CV is extracted on the basis of a pre-treatment case morphology change amount CDpre, which is the amount of change between the pre-treatment resting case facial morphology data CFr-pre and the pre-treatment expression case facial morphology data CFs-pre; likewise, the patient facial morphology data PF for the evaluation target patient includes resting patient facial morphology data PFr and expression patient facial morphology data PFs of that patient, and the patient feature vector PV is extracted on the basis of a patient morphology change amount PD, which is the amount of change between the resting patient facial morphology data PFr and the expression patient facial morphology data PFs.
 Preferably, in the facial morphology prediction system, the prediction model computation means further executes: a process of computing a normalized pre-correction hard tissue morphology model HMpre based on image data obtained by imaging the hard tissue, including the teeth, of the evaluation target patient; a process of predicting a post-correction hard tissue morphology model HMpost of the evaluation target patient based on the pre-correction hard tissue morphology model HMpre; and a process of incorporating the post-correction hard tissue morphology model HMpost into the predicted facial morphology model PFMs-prd at the time of facial expression and displaying the result.
 Preferably, in the facial morphology prediction system, a predetermined number of approximate case feature vectors NCV are selected in ascending order of distance from the patient feature vector PV.
 In the facial morphology prediction system, the approximate case selection means may execute a process of classifying the case feature vectors CV into a plurality of case classes by clustering and a process of computing a cluster centroid G for each case class; among the cluster centroids G of the classified case classes, the case feature vectors CV belonging to the case class whose cluster centroid is closest to the patient feature vector PV may be selected as the approximate case feature vectors NCV.
 According to the present invention, the facial morphology of a patient at the time of facial expression after treatment can be predicted simply and quantitatively. The invention can therefore contribute to appropriate decisions on treatment policy that take the patient's facial expressions into account.
FIG. 1 is a block diagram illustrating the schematic configuration of the facial morphology prediction system.
FIG. 2 is a diagram illustrating case facial morphology data CF.
FIG. 3 is a diagram illustrating a case facial morphology model CFM.
FIG. 4 is a diagram illustrating a pre-treatment case morphology change amount CDpre.
FIG. 5 is a diagram illustrating feature parameters selected from the front of a human face.
FIG. 6 is a vertical cross-sectional view of a face further showing examples of selected feature parameters.
FIG. 7 is a horizontal cross-sectional view of a face further showing examples of selected feature parameters.
FIG. 8 is a horizontal cross-sectional view of a face further showing examples of selected feature parameters.
FIG. 9 is a diagram illustrating a case morphology feature vector CFV extracted from a case facial morphology model CFM.
FIG. 10 is a diagram illustrating a case morphology change amount feature vector CDV extracted from a case morphology change amount CD.
FIG. 11 is a diagram for explaining an overview of the facial morphology prediction method.
FIG. 12 is a flowchart illustrating an overview of the facial morphology prediction method.
FIG. 13 is a diagram for explaining examples of prediction model construction according to the present invention.
FIG. 14 is a flowchart showing the prediction model construction process of Example 3-1.
FIG. 15 is a flowchart showing the prediction model construction process of Example 3-2.
(Overview of the facial morphology prediction system)
 FIG. 1 illustrates the schematic configuration of a facial morphology prediction system for use after orthodontic treatment. In the facial morphology prediction system, the various processing means described below are realized mainly by arithmetic processing of a computer device 10. A large-capacity database 20, an input device 30, an output device 40, and the like are connected to the computer device 10. The database 20 may be a hard disk or optical disk connected directly to the computer device 10, or may be, for example, a data server or storage within a hospital that is accessible from the computer device 10 via a network. The database 20 may also be provided on a wide-area network, for example in a cloud data center.
 As described later, the database 20 stores case data including primary data such as X-ray photograph data and three-dimensional facial morphology data measured from a plurality of treated patients (past patients who have undergone the treatment are referred to as "case patients"), facial morphology models normalized on the basis of the primary data, intermediate data such as feature vectors, which are multivariate quantities extracted from the features, and evaluation data such as predicted facial morphology models. Access to the case data in the database 20 and elsewhere is preferably restricted to specific persons (for example, the attending physician) who are authorized to share and use the data.
 The computer device 10 uses the case data stored in the database 20 to execute the arithmetic processing for facial morphology prediction described later. The input device 30 includes operation input devices that work through a human interface, such as a keyboard, a mouse, or a touch panel. The input device 30 may also be a device capable of inputting data acquired or processed by another system into the computer device 10 via an information storage medium or a network. The output device 40 includes, for example, an image display that visualizes predicted facial morphology data and models three-dimensionally. The output device 40 may also be a writing device for an information storage medium that makes data available to other systems, or a communication device capable of outputting data externally via a network.
 The facial morphology prediction system is also configured so that facial photograph data and three-dimensional facial morphology data of a patient taken in, for example, a hospital examination room are loaded into the computer device 10 via the database 20 or directly. To this end, the system includes a digital camera 61 and a three-dimensional measuring device 62. The three-dimensional measuring device 62 may be a common optical measuring device, such as a three-dimensional camera, a three-dimensional scanner, or a three-dimensional laser profiler, that measures the whole of the patient's face or a predetermined region such as the occlusal area. These patient facial morphology data may be input from the input device 30 via an information storage medium, or may be input to the system via, for example, a hospital network.
 In the facial morphology prediction system of the present embodiment, X-ray photograph and cephalometric image data may be input to the computer device 10 and/or the database 20 in order to image the patient's hard tissue (dental skeleton). For this purpose, an X-ray examination apparatus 63 may be connected to the system, for example via a hospital network. In addition to the cephalometric images described above, the X-ray examination apparatus 63 may include examination equipment capable of taking panoramic X-ray photographs of the patient's occlusal area, CT image data, and the like.
(Explanation of case data)
 The "case data" accumulated in the database 20 will now be described in detail. The case data includes, as primary data, a set CF(j) (j = 1, 2, ..., N) of "case facial morphology data CF" obtained by measuring the faces of a plurality of case patients with the three-dimensional measuring device 62, where N denotes the number of case patients (that is, the number of cases).
 In detail, the case facial morphology data CF for each case patient includes pre-treatment resting case facial morphology data CFr-pre, pre-treatment expression case facial morphology data CFs-pre, post-treatment resting case facial morphology data CFr-post, and post-treatment expression case facial morphology data CFs-post.
 FIG. 2 shows an example of case facial morphology data CF. In the present embodiment, the face "without facial expression" specifically means the face at rest, and the face "with facial expression" specifically means the face when a smiling expression is shown.
 The pre-treatment resting case facial morphology data CFr-pre are facial morphology data obtained by three-dimensionally measuring the case patient's face at rest, that is, without any facial expression, before treatment.
 The pre-treatment expression case facial morphology data CFs-pre are facial morphology data obtained by three-dimensionally measuring the case patient's face showing, for example, a smiling expression before treatment.
 The post-treatment resting case facial morphology data CFr-post are facial morphology data obtained by three-dimensionally measuring the case patient's resting face, without facial expression, after treatment.
 The post-treatment expression case facial morphology data CFs-post are facial morphology data obtained by three-dimensionally measuring the case patient's face showing, for example, a smiling expression after treatment.
 Three-dimensional facial morphology data differ in the number of acquired data points depending on, for example, the size of each patient's face, and the position of the origin also varies with the patient's standing position at the time of imaging. The facial morphology prediction system of the present embodiment therefore includes morphology modeling means 110 that converts three-dimensional facial morphology data into a normalized facial morphology model, so that the facial morphologies of different patients can be compared quantitatively and processed statistically. The morphology modeling means 110 performs arithmetic processing that constructs a normalized three-dimensional facial morphology model by, for example, extracting predetermined anatomical feature points from the patient's three-dimensional facial morphology data and placing those feature points on polygons having the same number of points and the same topological structure. A morphology model constructed by such a method is generally called a "homologous model"; for example, the HBM (Homologous Body Modeling) program provided by AIST (the National Institute of Advanced Industrial Science and Technology) can be used.
 The morphology modeling means 110 normalizes the case facial morphology data CF described above for each case patient to construct a "case facial morphology model CFM". FIG. 3 shows an example of a case facial morphology model CFM.
 More specifically, the model data obtained by normalizing the pre-treatment resting case facial morphology data CFr-pre are referred to as the "pre-treatment resting case facial morphology model CFMr-pre"; the model data obtained by normalizing the pre-treatment expression case facial morphology data CFs-pre are referred to as the "pre-treatment expression case facial morphology model CFMs-pre"; the model data obtained by normalizing the post-treatment resting case facial morphology data CFr-post are referred to as the "post-treatment resting case facial morphology model CFMr-post"; and the model data obtained by normalizing the post-treatment expression case facial morphology data CFs-post are referred to as the "post-treatment expression case facial morphology model CFMs-post".
 The database 20 accumulates the computed set CFM(j) (j = 1, 2, ..., N) of case facial morphology models CFM corresponding to the N case patients.
 The case data further include "case morphology change amounts CD" for the N case patients. In the present embodiment, the case morphology change amount CD for each case patient includes, in detail, a "pre-treatment case morphology change amount CDpre" and a "post-treatment case morphology change amount CDpost".
 Morphology change amount computation means 120 obtains the "pre-treatment case morphology change amount CDpre" by computing the amount of change between the pre-treatment resting case facial morphology model CFMr-pre and the pre-treatment expression case facial morphology model CFMs-pre. Likewise, the morphology change amount computation means 120 obtains the "post-treatment case morphology change amount CDpost" by computing the amount of change between the post-treatment resting case facial morphology model CFMr-post and the post-treatment expression case facial morphology model CFMs-post. FIG. 4 shows an example in which the case morphology change amount CDpre is obtained from the pre-treatment case facial morphology models CFMr-pre and CFMs-pre.
 In other words, the "morphology change amount" contains information on the amount and direction of soft tissue movement when the patient's face changes from the resting face to the smiling face, and this information can be displayed as three-dimensional image data.
 The database 20 then accumulates the computed set CDpre(j) (j = 1, 2, ..., N) of pre-treatment case morphology change amounts and the set CDpost(j) (j = 1, 2, ..., N) of post-treatment case morphology change amounts corresponding to the N case patients.
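 As an illustration only, one way to realize such a change amount is as the vertex-wise displacement between two normalized models that share the same vertex count and topology. The sketch below assumes the models are available as NumPy arrays of vertex coordinates; the function and variable names are hypothetical and are not part of the described system.

```python
import numpy as np

def morphology_change(model_rest: np.ndarray, model_expr: np.ndarray) -> np.ndarray:
    """Vertex-wise displacement vectors from the resting model to the expression model.
    Both inputs are (n_vertices, 3) arrays with the same vertex correspondence
    after normalization to a homologous model."""
    assert model_rest.shape == model_expr.shape
    return model_expr - model_rest  # (n_vertices, 3): amount and direction per vertex

# Hypothetical usage for one case patient:
# CDpre  = morphology_change(CFMr_pre,  CFMs_pre)   # pre-treatment change amount
# CDpost = morphology_change(CFMr_post, CFMs_post)  # post-treatment change amount
```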
(Explanation of the knowledge base)
 Feature vector extraction means 130 performs a process of extracting, from the case facial morphology model CFM of each case patient, a multidimensional "case morphology feature vector CFV" whose elements are a plurality of preselected "feature variables" (values of feature parameters).
 Here, a "feature parameter" is a geometric parameter that characteristically represents a morphology such as a human face, and is selected in advance by, for example, a specialist based on experience and knowledge. Some explanation of feature parameters and feature variables is given below.
 FIG. 5 shows examples of feature parameters selected on the outline of a human face. As shown in FIG. 5, several inflection points can be recognized in the morphology of a human face. Such inflection points may be chosen at, for example, the corners of boundary lines of the eyes or nose, the most protruding three-dimensional position, or the most recessed position. In this specification, such points are referred to as "landmarks" and are used in defining the feature parameters. A landmark need not be an inflection point; any geometrically definable point, such as the midpoint of a straight line connecting two inflection points, may be used.
 The outline of the face can be extracted as follows. First, a computation program customized for measuring the facial morphology from a frontal image of the face calculates the surface normal at each pixel of the three-dimensional surface data. The angle between the z-axis and the surface normal of the face is also calculated for each coordinate on the face surface. The coordinate points at which this angle is, for example, 60 degrees are extracted, and the line connecting those points is used as the outline of the face. The angle defining the outline of the face is preferably between 45 and 90 degrees.
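 The outline extraction described above can be sketched as follows, assuming the three-dimensional surface data are available as vertex coordinates with unit surface normals; the function name and the tolerance parameter are illustrative assumptions, not part of the described program.

```python
import numpy as np

def outline_points(vertices: np.ndarray, normals: np.ndarray,
                   angle_deg: float = 60.0, tol_deg: float = 1.0) -> np.ndarray:
    """Return the vertices whose surface normal makes approximately `angle_deg`
    degrees with the z-axis. `vertices` and `normals` are (n, 3) arrays;
    the normals are assumed to be unit vectors."""
    z_axis = np.array([0.0, 0.0, 1.0])
    cos_angle = normals @ z_axis                       # cosine of the angle with the z-axis
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    mask = np.abs(angle - angle_deg) < tol_deg         # points close to the chosen angle
    return vertices[mask]                              # connecting these gives the outline
```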
 One example of a feature parameter is the distance between landmarks. For instance, the feature parameter v1 shown in FIG. 5 is defined as the distance between the outer canthi Ex (|Ex-Ex|). A feature parameter may also be, as with v3, the distance between a line connecting landmarks (for example, the line connecting the lateral-most point of the face Zy' and the chin prominence Gn) and another landmark (for example, the cheek prominence Go'). Another example of a feature parameter is the angle of lines connecting landmarks; for example, the angle of the feature parameter v4 is determined by the positional relationship among the lateral-most point of the face Zy', the cheek prominence Go', and the cheek region.
 A distance feature parameter may also be a dimensionless quantity. For example, the mouth-corner width (|Ch-Ch|) normalized by the distance between the outer canthi (|Ex-Ex|), that is, |Ch-Ch|/|Ex-Ex|, can be adopted as a feature parameter. Deviations from, or ratios to, various mean values may also be considered as feature parameters.
 As shown in FIGS. 6 to 8, a plurality of feature parameters are also selected from cross sections based on three-dimensional data obtained by imaging specific regions of the human face. These cross sections are created by data processing based on anatomical measurement points after the three-dimensional coordinate system has been determined. As an example, FIG. 6 shows the yz cross section obtained when the subject's face is cut along the line connecting the outer canthus Ex and the mouth corner Ch. For example, the angle in the z-axis direction of the mouth corner Ch with the outer canthus Ex as the base point (v7), the angle of the cheek prominence P(Ex-Ch) in this cross section with the outer canthus Ex as the base point (v8), the length of the outline curve between the outer canthus Ex and the mouth corner Ch (v12), and the area enclosed by that outline curve (v13) can be selected as feature parameters.
 As a further example, FIG. 7 shows the xz cross section obtained when the subject's face is cut along the horizontal plane passing through the subnasal point Sn. Similarly, FIG. 8 illustrates the xz cross section obtained when the subject's face is cut along the horizontal plane passing through the most prominent point of the nose Pm. As shown in these figures, the amounts by which facial regions protrude in the z direction at various cross-sectional positions (v14, v18), the angles of prominences (v16, v20), protrusion amounts (v17, v22, v23), the angles of concavities (v21), and the like can be selected as feature parameters. Although not illustrated, the cross sections characterizing the facial morphology may also be, for example, cross sections passing through the glabella Gla, the nasion N, the upper lip point Ls, the lower lip point Li, or the chin point Sm. Differences from, or ratios to, the mean z value of a specific region may also be added as feature parameters.
 The feature vector extraction means 130 performs a process of measuring, from a patient's three-dimensional facial morphology data, the feature variables corresponding to each of the plurality of selected feature parameters. The feature vector extraction means 130 then extracts an n-dimensional "feature vector V" whose vector elements are the n measured feature variables v:

 V = (v1, v2, ..., vn)
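 A minimal sketch of assembling such a feature vector from landmark coordinates follows; the landmark keys and the particular choice of three feature variables (|Ex-Ex|, |Ch-Ch|, and their ratio) are illustrative only and do not reproduce the full parameter set described above.

```python
import numpy as np

def feature_vector(landmarks: dict) -> np.ndarray:
    """Assemble an n-dimensional feature vector V = (v1, ..., vn) from 3-D landmark
    coordinates. Landmark names and the three variables below are illustrative."""
    ex_r, ex_l = landmarks["Ex_right"], landmarks["Ex_left"]
    ch_r, ch_l = landmarks["Ch_right"], landmarks["Ch_left"]
    v1 = np.linalg.norm(ex_r - ex_l)   # distance between the outer canthi |Ex-Ex|
    v2 = np.linalg.norm(ch_r - ch_l)   # mouth-corner width |Ch-Ch|
    v3 = v2 / v1                       # dimensionless width normalized by |Ex-Ex|
    return np.array([v1, v2, v3])
```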
 As described above, the feature vector extraction means 130 performs a process of extracting a multidimensional "case morphology feature vector CFV" for each case patient on the basis of the respective case facial morphology data CF before and after treatment, at rest and during the smiling expression. FIG. 9 shows an example of extracting a case morphology feature vector CFV from a case facial morphology model CFM.
 The feature vector extracted on the basis of the pre-treatment resting case facial morphology data CFr-pre is referred to as the "pre-treatment resting case morphology feature vector CFVr-pre"; the feature vector extracted on the basis of the pre-treatment expression case facial morphology data CFs-pre is referred to as the "pre-treatment expression case morphology feature vector CFVs-pre"; the feature vector extracted on the basis of the post-treatment resting case facial morphology data CFr-post is referred to as the "post-treatment resting case morphology feature vector CFVr-post"; and the feature vector extracted on the basis of the post-treatment expression case facial morphology data CFs-post is referred to as the "post-treatment expression case morphology feature vector CFVs-post".
 The feature vector extraction means 130 can also extract a multidimensional "case morphology change amount feature vector CDV" from the case morphology change amount CD, which indicates the amount of soft tissue change between rest and smiling for each case patient.
 FIG. 10 shows an example of extracting a case morphology change amount feature vector CDV from a case morphology change amount CD. The feature vector extracted on the basis of the pre-treatment case morphology change amount CDpre is referred to as the "pre-treatment case morphology change amount feature vector CDVpre", and the feature vector extracted on the basis of the post-treatment case morphology change amount CDpost is referred to as the "post-treatment case morphology change amount feature vector CDVpost".
 The "case feature vector CV" used as the basis variable when selecting the "approximate case patients" described later can be any one feature vector selected from the group consisting of the pre-treatment resting case morphology feature vector CFVr-pre, the pre-treatment expression case morphology feature vector CFVs-pre, and the pre-treatment case morphology change amount feature vector CDVpre, or an extended feature vector combining the feature variables of two or more of these feature vectors.
 Specific examples of the "case feature vector CV" therefore include:
(Example 1) the pre-treatment resting case morphology feature vector CFVr-pre;
(Example 2) the pre-treatment expression case morphology feature vector CFVs-pre;
(Example 3) the pre-treatment case morphology change amount feature vector CDVpre;
(Example 4) the pre-treatment resting case morphology feature vector CFVr-pre + the pre-treatment expression case morphology feature vector CFVs-pre;
(Example 5) the pre-treatment resting case morphology feature vector CFVr-pre + the pre-treatment case morphology change amount feature vector CDVpre;
(Example 6) the pre-treatment expression case morphology feature vector CFVs-pre + the pre-treatment case morphology change amount feature vector CDVpre;
and so on.
 In this way, the database 20 stores, as knowledge, the set of case feature vectors CV(1), CV(2), CV(3), ..., CV(N) corresponding to the N case patients, extracted from the case morphology feature vectors CFV and/or the case morphology change amount feature vectors CDV. Hereinafter, the set of case feature vectors CV is denoted CV(j) (j = 1, 2, ..., N) or simply CV(j).
 The set of case feature vectors CV for the plurality of case patients may also be clustered and stored as knowledge in the database 20. A general vector quantization technique such as the Lloyd algorithm or the k-means method can be used for the clustering process.
 For example, with the k-means method, the case feature vectors CV can be clustered as follows. First, the number of primary clusters C1 is set arbitrarily, and provisional clusters CL*(l) (l = 1, 2, ..., C1) are assigned in the n-dimensional vector space (where n is the number of feature variables v). Next, the average of the case feature vectors CV belonging to each primary cluster CL*(l) is computed to obtain the primary cluster centroids G*(l) (l = 1, 2, ..., C1). The distances D*(l, i) = |G*(l) - CV(i)| between each of the C1 centroids G*(l) and all of the case feature vectors CV are then computed. Here, the "distance" between vectors may be either the Euclidean distance or the Manhattan distance.
 Next, for each case feature vector CV, the nearest primary cluster centroid G*(l) is found, and secondary clusters CL**(l) are re-formed, each consisting of the group of case feature vectors CV that share the same nearest centroid G*(l). Secondary cluster centroids G**(l) are then computed for the secondary clusters CL**(l), and tertiary cluster centroids G***(l) are computed from the groups of case feature vectors CV nearest to them. By repeating this cycle of cluster reorganization until convergence, the case feature vectors CV of the case patients can be classified into C1 clusters (case classes) CL(l) (l = 1, 2, ..., C1).
 The number of clusters (that is, of case classes) may then be optimized by the following algorithm.
 First, several candidate numbers of clusters C are set within a plausible range, for example 3, 4, ..., 12, and the centroid Gc(l) (l = 1, 2, ..., C) of each cluster obtained by classifying with each candidate number is computed. For each centroid Gc(l), the distances Dc(l, j) between that centroid and the case feature vectors CV(j) (j = 1, 2, ..., N) belonging to its cluster are computed, and the minimum of these distances, Dc(l)min, is obtained.
 The inter-cluster distance Dc, defined as the average of the minimum distances Dc(l)min over the clusters, is obtained from formula (1):

 Dc = (1/C) Σ_{l=1}^{C} Dc(l)min   (1)
 For each candidate number of clusters C (for example, 3, 4, ..., 12), the inter-cluster distance Dc (for example, D3, D4, ..., D12) is computed, and C + 1, obtained by adding 1 to the candidate C at which the change ΔDc expressed by formula (2) is largest, can be determined as the optimal number of clusters.
 By performing such a clustering process on the case feature vectors CV(j) (j = 1, 2, ..., N) of the case patients, case class classification means 180 computes C cluster centroids G(l) (l = 1, 2, ..., C) and classifies the vectors into C case classes CL(l) (l = 1, 2, ..., C). The case feature vectors CV classified into each case class CL and the data of the corresponding cluster centroids G are stored as knowledge in the database 20.
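 The clustering and cluster-count evaluation described above can be sketched as follows. The sketch uses scikit-learn's KMeans in place of a hand-written Lloyd iteration and computes Dc as in formula (1); formula (2) itself is not reproduced in the available text, so the closing comment only notes one plausible reading. All names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_cases(cv_set: np.ndarray, n_clusters: int) -> tuple:
    """Cluster the case feature vectors CV (shape (N, n)) into `n_clusters` case
    classes and return (labels, centroids, Dc), where Dc is the mean over clusters
    of the minimum centroid-to-member distance, as in formula (1)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(cv_set)
    d_min = []
    for l in range(n_clusters):
        members = cv_set[km.labels_ == l]
        dists = np.linalg.norm(members - km.cluster_centers_[l], axis=1)
        d_min.append(dists.min())                 # Dc(l)min for cluster l
    return km.labels_, km.cluster_centers_, float(np.mean(d_min))

# The optimal cluster count is chosen from the change in Dc over candidate counts.
# One plausible reading (an assumption here) is ΔDc(C) = Dc(C) - Dc(C + 1),
# with C + 1 kept for the C giving the largest drop.
```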
(Explanation of the facial morphology prediction method)
 By using the knowledge base of case data described above in the facial morphology prediction system, in the manner described below, the facial morphology of a patient at the time of facial expression after treatment can be predicted quantitatively. FIG. 11 is a block diagram showing an overview of the facial morphology prediction method, and FIG. 12 is its flowchart.
 First, in step S1, the face of a new patient for whom treatment is being considered (the new patient is referred to as the "evaluation target patient") is measured before treatment with the three-dimensional measuring device 62 to acquire patient facial morphology data PF. At this time, at least two kinds of patient facial morphology data PF are acquired: "resting patient facial morphology data PFr", measured while the patient shows no facial expression, that is, with the face at rest, and "expression patient facial morphology data PFs", measured while the patient shows a smiling expression.
 The morphology modeling means 110 computes a "patient facial morphology model PFM" normalized on the basis of the patient facial morphology data PF, for example using the homologous model algorithm described above. The facial morphology model constructed on the basis of the resting patient facial morphology data PFr is referred to as the "resting patient facial morphology model PFMr", and the facial morphology model constructed on the basis of the expression patient facial morphology data PFs is referred to as the "expression patient facial morphology model PFMs".
 The morphology change amount computation means 120 also obtains a "patient morphology change amount PD" by computing the amount of change between the resting patient facial morphology model PFMr and the expression patient facial morphology model PFMs.
 Next, the feature vector extraction means 130 extracts, from the patient facial morphology model PFM and/or the patient morphology change amount PD, a multidimensional "patient feature vector PV" whose elements are the plurality of feature variables.
 The feature vector extracted on the basis of the resting patient facial morphology model PFMr is referred to as the "resting patient morphology feature vector PFVr", and the feature vector extracted on the basis of the expression patient facial morphology model PFMs is referred to as the "expression patient morphology feature vector PFVs". The feature vector extracted on the basis of the patient morphology change amount PD is referred to as the "patient morphology change amount feature vector PDV".
 The "patient feature vector PV" described above can be any one feature vector selected from the group consisting of the resting patient morphology feature vector PFVr, the expression patient morphology feature vector PFVs, and the patient morphology change amount feature vector PDV, or an extended feature vector combining the feature variables of two or more of these feature vectors.
 Specific examples of the "patient feature vector PV" therefore include:
(Example 1) the resting patient morphology feature vector PFVr;
(Example 2) the expression patient morphology feature vector PFVs;
(Example 3) the patient morphology change amount feature vector PDV;
(Example 4) the resting patient morphology feature vector PFVr + the expression patient morphology feature vector PFVs;
(Example 5) the resting patient morphology feature vector PFVr + the patient morphology change amount feature vector PDV;
(Example 6) the expression patient morphology feature vector PFVs + the patient morphology change amount feature vector PDV;
and so on.
 Next, in step S2, approximate case selection means 140 performs a process of selecting approximate cases using the case feature vectors CV of the plurality of case patients as basis variables. Specifically, a plurality of case feature vectors CV that approximate the patient feature vector PV are selected from the set CV(j) (j = 1, 2, ..., N) of case feature vectors. A case feature vector CV selected here as approximating the patient feature vector PV is referred to as an "approximate case feature vector NCV".
 The population of case feature vectors CV is preferably narrowed to cases in which, for example, sex, age group, treatment site, and hard tissue characteristics (such as tooth alignment) are the same as or similar to those of the evaluation target patient.
 The patient feature vector PV and the case feature vector CV whose distance from it is compared are of the same kind. Examples of such combinations are given below.
 (Example 1-1)
 For example, when the "resting patient morphology feature vector PFVr" is adopted as the patient feature vector PV, the case feature vector CV is the "pre-treatment resting case morphology feature vector CFVr-pre".
 (Example 1-2)
 When the "expression patient morphology feature vector PFVs" is adopted as the patient feature vector PV, the case feature vector CV is the "pre-treatment expression case morphology feature vector CFVs-pre".
 (Example 1-3)
 When the "patient morphology change amount feature vector PDV" is adopted as the patient feature vector PV, the case feature vector CV is the "pre-treatment case morphology change amount feature vector CDVpre".
 In particular, the morphology change amount feature vector of Example 1-3 contains, as feature quantities, information such as the amount and direction of deformation of the facial soft tissue and the softness of the tissue, so selecting approximate cases with this vector as the basis variable can improve the accuracy of predicting the facial morphology of the evaluation target patient when the smiling expression is shown.
 Specific examples of the process of selecting the approximate case feature vectors NCV in step S2 of FIG. 12 will now be described.
 (Example 2-1)
 The approximate case selection means 140 can select a predetermined number k of case feature vectors CV in ascending order of distance from the patient feature vector PV. The number of approximate cases k is determined by the empirical judgment of a specialist physician or the like.
 As described above, the set CV(j) of case feature vectors CV corresponding to the N case patients is stored as knowledge in the database 20. The approximate case selection means 140 selects k approximate cases in ascending order of the distance (|CV(j) - PV|) between each case feature vector CV(j) and the extracted patient feature vector PV. Here, the "distance" between vectors may be either the Euclidean distance or the Manhattan distance.
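 A minimal sketch of the selection in Example 2-1, assuming the case feature vectors are stacked row-wise in a NumPy array and Euclidean distance is used; the names are illustrative.

```python
import numpy as np

def select_nearest_cases(cv_set: np.ndarray, pv: np.ndarray, k: int) -> np.ndarray:
    """Return the indices of the k case feature vectors CV(j) closest to the
    patient feature vector PV (Euclidean distance; Manhattan would also do)."""
    distances = np.linalg.norm(cv_set - pv, axis=1)   # |CV(j) - PV| for j = 1..N
    return np.argsort(distances)[:k]                  # indices of the k nearest cases
```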
 (Example 2-2)
 Alternatively, the approximate case selection means 140 may select, as the approximate case feature vectors NCV, the case feature vectors CV belonging to the case class CL whose cluster centroid G is closest to the patient feature vector PV.
 As described above, the case feature vectors CV classified into each case class CL and the data of their cluster centroids G are stored as knowledge in the database 20. Among the centroids G(l) (l = 1, 2, ..., C) of the case feature vectors CV belonging to the case classes CL(l) (l = 1, 2, ..., C), the approximate case selection means 140 selects the approximate case class NCL having the centroid G whose distance (|G(l) - PV|) to the patient feature vector PV is smallest. The "distance" in this case may be either the Euclidean distance or the Manhattan distance.
 The approximate case selection means 140 then selects the set of all case feature vectors CV belonging to the approximate case class NCL as the approximate case feature vectors NCV.
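 A minimal sketch of the selection in Example 2-2, assuming the centroids and labels produced by the clustering step above are available; the names are illustrative.

```python
import numpy as np

def select_approximate_class(centroids: np.ndarray, labels: np.ndarray,
                             cv_set: np.ndarray, pv: np.ndarray) -> np.ndarray:
    """Pick the case class whose centroid G(l) is closest to PV and return all
    case feature vectors belonging to that class as the NCV set."""
    nearest_class = int(np.argmin(np.linalg.norm(centroids - pv, axis=1)))
    return cv_set[labels == nearest_class]
```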
 Next, in step S3 of FIG. 12, prediction model computation means 150 computes, on the basis of the patient facial morphology model PFM of the evaluation target patient, a "predicted facial morphology model PFMs-prd" of the smiling expression predicted for that patient after treatment. Examples of this prediction model construction process are described below with reference to FIGS. 13 to 15.
 (Example 3-1)
 According to Example 3-1, as shown in FIG. 13, the predicted smiling facial morphology model PFMs-prd after treatment is computed on the basis of the smiling patient facial morphology model PFMs of the evaluation target patient.
 The prediction model construction process of this example is described in more detail with reference to the flowchart of FIG. 14. First, in step S11, a normalized set of "pre-treatment expression approximate case facial morphology models NCMs-pre" is computed on the basis of the pre-treatment expression case facial morphology data NCFs-pre of each approximate case patient.
 In step S12, a normalized set of "post-treatment expression approximate case facial morphology models NCMs-post" is computed on the basis of the post-treatment expression case facial morphology data NCFs-post of each approximate case patient.
 In step S13, the vector average of the set of pre-treatment expression approximate case facial morphology models NCMs-pre is computed to obtain a "pre-treatment approximate case vector average NCApre". In step S14, the vector average of the set of post-treatment expression approximate case facial morphology models NCMs-post is computed to obtain a "post-treatment approximate case vector average NCApost".
 In step S15, an "approximate case vector average difference NCApost-pre" is computed by subtracting the pre-treatment approximate case vector average NCApre from the post-treatment approximate case vector average NCApost.
 In step S16, the approximate case facial morphology vector average difference NCApost-pre computed in step S15 is added to the expression patient facial morphology model PFMs of the evaluation target patient. This yields the predicted facial morphology model PFMs-prd of the smiling expression predicted for the evaluation target patient after treatment.
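 A minimal sketch of steps S11 to S16, assuming the normalized models are available as arrays of vertex coordinates with a shared topology; the names and shapes are illustrative.

```python
import numpy as np

def predict_smile_after_treatment(pfm_s: np.ndarray,
                                  ncm_s_pre: np.ndarray,
                                  ncm_s_post: np.ndarray) -> np.ndarray:
    """Sketch of Example 3-1.
    pfm_s      : (n_vertices, 3) smiling patient model PFMs
    ncm_s_pre  : (k, n_vertices, 3) pre-treatment smiling models of the k approximate cases
    ncm_s_post : (k, n_vertices, 3) post-treatment smiling models of the same cases"""
    nca_pre = ncm_s_pre.mean(axis=0)      # pre-treatment approximate case vector average
    nca_post = ncm_s_post.mean(axis=0)    # post-treatment approximate case vector average
    return pfm_s + (nca_post - nca_pre)   # predicted model PFMs-prd

# Example 3-2 differs only in the inputs: the resting patient model PFMr and the
# resting pre-treatment case models NCMr-pre are used in place of the smiling ones.
```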
(Example 3-2)
Alternatively, as shown in FIG. 13, the prediction model calculation means 150 may calculate the predicted facial morphology model PFMs-prd at the time of smiling, predicted after treatment, based on the patient facial morphology model PFMr at rest of the evaluation target patient.
The prediction model construction process according to this example will be described in more detail with reference to the flowchart of FIG. 15. First, in step S21, a set of normalized "pretreatment resting approximate case facial morphology models NCMr-pre" is calculated based on the pretreatment resting case facial morphology data NCFr-pre of each approximate case patient.
Next, in step S22, a set of normalized "post-treatment facial-expression approximate case facial morphology models NCMs-post" is calculated based on the post-treatment facial-expression case facial morphology data NCFs-post of each approximate case patient.
In step S23, the vector average of the set of pretreatment resting approximate case facial morphology models NCMr-pre is calculated to obtain the "pretreatment approximate case vector average NCApre." In step S24, the vector average of the set of post-treatment facial-expression approximate case facial morphology models NCMs-post is calculated to obtain the "post-treatment approximate case vector average NCApost."
In step S25, the "approximate case vector average difference NCApost-pre" is calculated by subtracting the pretreatment approximate case vector average NCApre from the post-treatment approximate case vector average NCApost.
In step S26, the approximate case vector average difference NCApost-pre calculated in step S25 is added to the resting patient facial morphology model PFMr of the evaluation target patient. As a result, the predicted facial morphology model PFMs-prd at the time of expressing a smiling expression, which is predicted after treatment of that patient, is obtained.
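Under the same assumptions as the sketch above, Example 3-2 only swaps the pretreatment baseline: resting models replace the smiling ones. The variable names ncm_r_pre and pfm_r are again hypothetical.

    # Example 3-2: same averaging logic, resting baseline instead of smiling.
    pfm_s_prd = predict_smile_after_treatment(ncm_r_pre, ncm_s_post, pfm_r)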
Next, in step S4 of displaying the predicted patient facial morphology model (see FIG. 12), it is preferable to display an image of the post-treatment tooth alignment and the like on an output device 40 such as a display, together with the predicted facial morphology model PFMs-prd at the time of expressing a smiling expression.
In that case, the arithmetic processing by the computer device 10 can include a step of calculating a normalized pre-orthodontic hard tissue morphology model HMpre based on image data obtained by imaging the hard tissue, including the dental skeleton, of the evaluation target patient; a step of predicting a post-orthodontic hard tissue morphology model HMpost of that patient based on the pre-orthodontic hard tissue morphology model HMpre; and a step of incorporating the predicted post-orthodontic hard tissue morphology model HMpost into the predicted facial morphology model PFMs-prd at the time of expressing a smiling expression after treatment, calculated in step S3.
The image data of the hard tissue are, for example, three-dimensional image data obtained by measuring the patient's teeth with a three-dimensional measuring device, a CT image including the dental skeleton of the occlusal region, a cephalometric image, a panoramic X-ray photograph, and the like, and the hard tissue morphology model HMpre may be constructed by combining such two-dimensional and/or three-dimensional image data.
If the prediction model construction process described above is executed using only case data that share the hard tissue structure for which orthodontic treatment is planned, the tooth alignment predicted after treatment can be displayed within the image of the predicted facial morphology at the time of expressing a smiling expression. For example, by combining a three-dimensional or two-dimensional dentition prediction image with the mouth region of the predicted smile, a prediction that includes the appearance of the tooth alignment becomes possible. It is also possible to move the hard tissue on the image and visualize, as a simulation, how the post-treatment smile improves in response.
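The publication does not specify how the dentition prediction image is combined with the predicted face; as a purely illustrative two-dimensional sketch, a mouth-region mask can be used to blend a rendered dentition image into the predicted smile image. All names below are hypothetical.

    import numpy as np

    def composite_dentition(smile_image, dentition_image, mouth_mask):
        """Blend a rendered dentition prediction into the mouth region.

        smile_image, dentition_image : (H, W, 3) float arrays in [0, 1]
        mouth_mask                   : (H, W) float array, 1 inside the mouth
        """
        mask = mouth_mask[..., None]        # broadcast the mask over RGB
        return (1.0 - mask) * smile_image + mask * dentition_image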
According to the facial morphology prediction system and facial morphology prediction method described above, the three-dimensional facial morphology of a patient after orthodontic treatment can be predicted simply and quantitatively. In particular, in the present embodiment, the facial morphology at the time of expressing a smiling expression can be predicted before treatment, which contributes to an appropriate judgment of whether a treatment plan will produce a "good smile" that leads, for the patient as well, to an aesthetic improvement of facial expression.
The facial morphology prediction system and facial morphology prediction method according to the present invention can be used not only for orthodontic treatment but also, for example, for the surgical treatment of patients with jaw deformities. They can also be used for prediction when maxillofacial surgery (including oral surgery and plastic surgery) is performed alone or in combination with orthodontic treatment or jaw prosthetic treatment. Furthermore, application to the prediction of age-related changes in facial morphology can also be expected.
10 Computer device          20 Database
30 Input device             40 Output device
62 Three-dimensional measuring device    63 X-ray inspection device
110 Morphological modeling means         120 Morphology change amount calculation means
130 Feature vector extraction means      140 Approximate case selection means
150 Prediction model calculation means
CF Case facial morphology data           CFM Case facial morphology model
CD Case morphology change amount         CV Case feature vector
CFV Case morphology feature vector       CDV Case morphology change amount feature vector
PF Patient facial morphology data        PFM Patient facial morphology model
PFMs-prd Predicted facial morphology model at the time of facial expression
PD Patient morphology change amount
PV Patient feature vector                PDV Patient morphology change amount feature vector
NCV Approximate case feature vector
NCApre Pretreatment approximate case vector average
NCApost Post-treatment approximate case vector average
NCApost-pre Approximate case vector average difference
N Number of case patients (number of cases)    CL Case class
NCL Approximate case class               G Cluster centroid

Claims (10)

1.  A method of predicting the facial morphology of a patient at the time of facial expression after treatment by arithmetic processing executed by a computer device, wherein the arithmetic processing comprises:
    a step of extracting a set of multidimensional case feature vectors CV, whose elements are a plurality of preselected feature variables, based on case facial morphology data CF acquired with a three-dimensional measuring device from a plurality of treated patients (a past patient who underwent the treatment is referred to as a "case patient");
    a step of extracting a multidimensional patient feature vector PV, whose elements are the plurality of feature variables, based on patient facial morphology data PF acquired with a three-dimensional measuring device from a new patient for whom treatment is being considered (the new patient is referred to as the "evaluation target patient");
    a step of selecting, from the set of case feature vectors CV for the plurality of case patients, a plurality of approximate case feature vectors NCV that approximate the patient feature vector PV;
    a step of calculating, for each case patient corresponding to the selected plurality of approximate case feature vectors NCV (the selected case patients are referred to as "approximate case patients"), a normalized pretreatment resting approximate case facial morphology model NCMr-pre based on pretreatment resting case facial morphology data acquired with a three-dimensional measuring device at rest before treatment;
    a step of calculating, for each approximate case patient, a normalized post-treatment facial-expression approximate case facial morphology model NCMs-post based on post-treatment facial-expression case facial morphology data acquired with a three-dimensional measuring device at the time of facial expression after treatment;
    a step of calculating the vector average of the pretreatment resting approximate case facial morphology models NCMr-pre to obtain a pretreatment approximate case vector average NCSpre;
    a step of calculating the vector average of the post-treatment facial-expression approximate case facial morphology models NCMs-post to obtain a post-treatment approximate case vector average NCSpost;
    a step of calculating an approximate case vector average difference NCSpost-pre by subtracting the pretreatment approximate case vector average NCSpre from the post-treatment approximate case vector average NCSpost;
    a step of calculating a normalized resting patient facial morphology model PFMr based on the patient facial morphology data PF of the evaluation target patient; and
    a step of calculating a predicted facial morphology model PFMs-prd at the time of facial expression, predicted after treatment of the evaluation target patient, by adding the approximate case vector average difference NCSpost-pre to the resting patient facial morphology model PFMr.
2.  The method of predicting facial morphology at the time of facial expression after treatment according to claim 1, wherein
    the case facial morphology data CF for each case patient includes pretreatment resting case facial morphology data CFr-pre and pretreatment facial-expression case facial morphology data CFs-pre of that case patient,
    the case feature vector CV is extracted based on a pretreatment case morphology change amount CDpre, which is the amount of change between the pretreatment resting case facial morphology data CFr-pre and the pretreatment facial-expression case facial morphology data CFs-pre,
    the patient facial morphology data PF for the evaluation target patient includes resting patient facial morphology data PFr and facial-expression patient facial morphology data PFs of the evaluation target patient, and
    the patient feature vector PV is extracted based on a patient morphology change amount PD, which is the amount of change between the resting patient facial morphology data PFr and the facial-expression patient facial morphology data PFs.
3.  The method of predicting facial morphology at the time of facial expression after treatment according to claim 1 or 2, wherein the arithmetic processing further comprises:
    a step of calculating a normalized pre-orthodontic hard tissue morphology model HMpre based on image data obtained by imaging the hard tissue, including the teeth, of the evaluation target patient;
    a step of predicting a post-orthodontic hard tissue morphology model HMpost of the evaluation target patient based on the pre-orthodontic hard tissue morphology model HMpre; and
    a step of incorporating the post-orthodontic hard tissue morphology model HMpost into the predicted facial morphology model PFMs-prd at the time of facial expression and displaying the result.
4.  The method of predicting facial morphology at the time of facial expression after treatment according to claim 1 or 2, wherein a predetermined number of approximate case feature vectors NCV are selected in order of increasing distance from the patient feature vector PV.
5.  The method of predicting facial morphology at the time of facial expression after treatment according to claim 1 or 2, wherein the arithmetic processing comprises:
    a step of classifying the case feature vectors CV into a plurality of case classes by performing clustering processing; and
    a step of calculating a cluster centroid G for each of the case classes,
    and wherein, among the cluster centroids G of the classified case classes, the case feature vectors CV belonging to the case class whose cluster centroid is closest to the patient feature vector PV are selected as the approximate case feature vectors NCV.
6.  A system for predicting the facial morphology of a patient at the time of facial expression after treatment, the system comprising at least feature vector extraction means, approximate case selection means, and prediction model calculation means realized by arithmetic processing of a computer device, wherein
    the feature vector extraction means executes:
    a process of extracting a set of multidimensional case feature vectors CV, whose elements are a plurality of preselected feature variables, based on case facial morphology data CF acquired with a three-dimensional measuring device from a plurality of treated patients (a past patient who underwent the treatment is referred to as a "case patient"); and
    a process of extracting a multidimensional patient feature vector PV, whose elements are the plurality of feature variables, based on patient facial morphology data PF acquired with a three-dimensional measuring device from a new patient for whom treatment is being considered (the new patient is referred to as the "evaluation target patient");
    the approximate case selection means executes a process of selecting, from the set of case feature vectors CV for the plurality of case patients, a plurality of approximate case feature vectors NCV that approximate the patient feature vector PV; and
    the prediction model calculation means executes:
    a process of calculating, for each case patient corresponding to the selected plurality of approximate case feature vectors NCV (the selected case patients are referred to as "approximate case patients"), a normalized pretreatment resting approximate case facial morphology model NCMr-pre based on pretreatment resting case facial morphology data acquired with a three-dimensional measuring device at rest before treatment;
    a process of calculating, for each approximate case patient, a normalized post-treatment facial-expression approximate case facial morphology model NCMs-post based on post-treatment facial-expression case facial morphology data acquired with a three-dimensional measuring device at the time of facial expression after treatment;
    a process of calculating the vector average of the pretreatment resting approximate case facial morphology models NCMr-pre to obtain a pretreatment approximate case vector average NCSpre;
    a process of calculating the vector average of the post-treatment facial-expression approximate case facial morphology models NCMs-post to obtain a post-treatment approximate case vector average NCSpost;
    a process of calculating an approximate case vector average difference NCSpost-pre by subtracting the pretreatment approximate case vector average NCSpre from the post-treatment approximate case vector average NCSpost;
    a process of calculating a normalized resting patient facial morphology model PFMr based on the patient facial morphology data PF of the evaluation target patient; and
    a process of calculating a predicted facial morphology model PFMs-prd at the time of facial expression, predicted after treatment of the evaluation target patient, by adding the approximate case vector average difference NCSpost-pre to the resting patient facial morphology model PFMr.
7.  The system for predicting facial morphology at the time of facial expression after treatment according to claim 6, wherein
    the case facial morphology data CF for each case patient includes pretreatment resting case facial morphology data CFr-pre and pretreatment facial-expression case facial morphology data CFs-pre of that case patient,
    the case feature vector CV is extracted based on a pretreatment case morphology change amount CDpre, which is the amount of change between the pretreatment resting case facial morphology data CFr-pre and the pretreatment facial-expression case facial morphology data CFs-pre,
    the patient facial morphology data PF for the evaluation target patient includes resting patient facial morphology data PFr and facial-expression patient facial morphology data PFs of the evaluation target patient, and
    the patient feature vector PV is extracted based on a patient morphology change amount PD, which is the amount of change between the resting patient facial morphology data PFr and the facial-expression patient facial morphology data PFs.
8.  The system for predicting facial morphology at the time of facial expression after treatment according to claim 6 or 7, wherein the prediction model calculation means further executes:
    a process of calculating a normalized pre-orthodontic hard tissue morphology model HMpre based on image data obtained by imaging the hard tissue, including the teeth, of the evaluation target patient;
    a process of predicting a post-orthodontic hard tissue morphology model HMpost of the evaluation target patient based on the pre-orthodontic hard tissue morphology model HMpre; and
    a process of incorporating the post-orthodontic hard tissue morphology model HMpost into the predicted facial morphology model PFMs-prd at the time of facial expression and displaying the result.
9.  The system for predicting facial morphology at the time of facial expression after treatment according to claim 6 or 7, wherein a predetermined number of approximate case feature vectors NCV are selected in order of increasing distance from the patient feature vector PV.
10.  The system for predicting facial morphology at the time of facial expression after treatment according to claim 6 or 7, wherein the approximate case selection means executes:
    a process of classifying the case feature vectors CV into a plurality of case classes by performing clustering processing; and
    a process of calculating a cluster centroid G for each of the case classes,
    and wherein, among the cluster centroids G of the classified case classes, the case feature vectors CV belonging to the case class whose cluster centroid is closest to the patient feature vector PV are selected as the approximate case feature vectors NCV.
PCT/JP2020/008895 2019-03-13 2020-03-03 Method and system for predicting facial morphology in facial expression after treatment WO2020184288A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021504950A JPWO2020184288A1 (en) 2019-03-13 2020-03-03

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-046443 2019-03-13
JP2019046443 2019-03-13

Publications (1)

Publication Number Publication Date
WO2020184288A1 (en) 2020-09-17

Family

ID=72427823

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/008895 WO2020184288A1 (en) 2019-03-13 2020-03-03 Method and system for predicting facial morphology in facial expression after treatment

Country Status (2)

Country Link
JP (1) JPWO2020184288A1 (en)
WO (1) WO2020184288A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040029068A1 (en) * 2001-04-13 2004-02-12 Orametrix, Inc. Method and system for integrated orthodontic treatment planning using unified workstation
JP2011086266A (en) * 2009-10-19 2011-04-28 Canon Inc Feature point positioning device, image recognition device, and processing method and program therefor
JP2014513824A (en) * 2011-02-22 2014-06-05 モルフェウス カンパニー リミテッド Facial correction image providing method and system
WO2017069231A1 (en) * 2015-10-23 2017-04-27 国立大学法人大阪大学 Method and system for predicting shape of human body after treatment

Also Published As

Publication number Publication date
JPWO2020184288A1 (en) 2020-09-17

Similar Documents

Publication Publication Date Title
US11617633B2 (en) Method and system for predicting shape of human body after treatment
US9788917B2 (en) Methods and systems for employing artificial intelligence in automated orthodontic diagnosis and treatment planning
Guyomarc'h et al. Anthropological facial approximation in three dimensions (AFA 3D): Computer‐assisted estimation of the facial morphology using geometric morphometrics
Moss et al. Three‐dimensional assessment of treatment outcomes on the face
US10568716B2 (en) Methods and systems for employing artificial intelligence in automated orthodontic diagnosis and treatment planning
US9517111B2 (en) Methods and systems for employing artificial intelligence in automated orthodontic diagnosis and treatment planning
KR20190020756A (en) Method for estimating at least one of shape, position and orientation of a dental restoration
US20220296344A1 (en) Method, system and devices for instant automated design of a customized dental object
US11357604B2 (en) Artificial intelligence platform for determining dental readiness
US10007988B2 (en) Systems and methods for approximating the soft tissue profile of the skull of an unknown subject
JP2016085490A (en) System and method for evaluating face form
Rasteau et al. Three-dimensional acquisition technologies for facial soft tissues–Applications and prospects in orthognathic surgery
CN110575178B (en) Diagnosis and monitoring integrated medical system for judging motion state and judging method thereof
Park et al. A three-dimensional parametric adult head model with representation of scalp shape variability under hair
WO2020184288A1 (en) Method and system for predicting facial morphology in facial expression after treatment
Hayes A geometric morphometric evaluation of the Belanglo ‘Angel' facial approximation
Alagha et al. Objective grading facial paralysis severity using a dynamic 3D stereo photogrammetry imaging system
KR101715567B1 (en) Method for facial analysis for correction of anthroposcopic errors from Sasang constitutional specialists
Wu et al. Three-dimensional statistical model for gingival contour reconstruction
Basamtabar et al. Relationship of anteroposterior position of maxillary central incisors with the forehead in an adult Iranian subpopulation: a cross-sectional study
D'Alessio et al. Measure and comparison of facial attractiveness indices through photogrammetry and statistical analysis
Romeiro et al. Forensic facial reconstruction using mesh template deformation with detail transfer over HRBF
Amirkhanov et al. Visual analytics in dental aesthetics
Marcolin Miscellaneous expertise of 3D facial landmarks in recent literature
Tanikawa et al. Knowledge-dependent pattern classification of human nasal profiles

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20770797

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021504950

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20770797

Country of ref document: EP

Kind code of ref document: A1