KR101744079B1 - Face model generation method for dental procedure simulation - Google Patents

Face model generation method for dental procedure simulation

Info

Publication number
KR101744079B1
Authority
KR
South Korea
Prior art keywords
face
model
facial
data
image data
Prior art date
Application number
KR1020140084001A
Other languages
Korean (ko)
Other versions
KR20160004865A (en)
Inventor
지헌주
조헌제
임용현
Original Assignee
Insbio Co., Ltd. (주식회사 인스바이오)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Insbio Co., Ltd. (주식회사 인스바이오)
Priority to KR1020140084001A
Priority to PCT/KR2015/006976
Publication of KR20160004865A
Application granted
Publication of KR101744079B1


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C: DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C7/00: Orthodontics, i.e. obtaining or maintaining the desired position of teeth, e.g. by straightening, evening, regulating, separating, or by correcting malocclusions
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C: DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C19/00: Dental auxiliary appliances
    • A61C19/04: Measuring instruments specially adapted for dentistry
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/04: Texture mapping

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Dentistry (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Epidemiology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Generation (AREA)

Abstract

The present invention relates to a method of generating a face model that reflects facial changes resulting from a tooth movement simulation in orthodontic dental procedures. More particularly, it relates to a technique for generating a model closely resembling the actual face, either by applying a 3D model restoration technique and a texture mapping technique to 2D facial photograph data and head tomographic image data, or by registering the output data of a 3D color scanner to a 3D facial surface model. That is, the present invention relates to a face model generation method for dental procedure simulation that can reflect changes of facial shape according to a tooth movement simulation in real time.
According to the present invention, a 3D CAD model-based process is provided. When the facial surface contained in a 3D head image is distinguished to generate the 3D facial surface model, unnecessary parts such as the nasal cavity and the sinuses, which occupy the empty space inside the nostrils, can be wrongly recognized as skin surface; by solving this problem, the invention provides data on which precise and accurate simulation can be performed. It also provides a texture mapping technique that resolves the texture mapping errors arising from the differing outputs of the two devices, caused by differences in equipment characteristics, patient movement, and the angle between the camera lens and the patient during tomographic imaging (CT, etc.) and facial photography. It further provides a 3D face model constructed using only the images with the best visibility among a plurality of images and, by registering the output of a 3D color scanner to the 3D facial surface model, an aesthetic 3D face model free of the errors that can occur during the texture mapping process. The resulting 3D face model allows dental procedures to be simulated and reflected in real time, and simulation can be performed with data based on the doctor's diagnosis and simulation results reflected in the face model.

Description

Technical Field [0001] The present invention relates to a face model generation method for dental procedure simulation.

The present invention relates to a method of generating a face model that reflects facial changes resulting from a tooth movement simulation in orthodontic dental procedures. More particularly, it relates to a technique for generating a model closely resembling the actual face, either by applying a 3D model restoration technique and a texture mapping technique to 2D facial photograph data and head tomographic image data, or by registering the output data of a 3D color scanner to a 3D facial surface model. That is, the present invention relates to a face model generation method for dental procedure simulation that can reflect changes of facial shape according to a tooth movement simulation in real time.

To obtain data for use in medical consultation, 2D bitmap image data is acquired by photographing with a digital camera, mobile phone camera, or the like, and a plurality of 2D tomographic image data are collected through computed tomography (CT), magnetic resonance imaging (MRI), and similar scanning. Since each kind of image data is generated on a different principle, the characteristics of the data obtainable from each differ, as do the methods used and the regions they cover.

Consequently, a volume model created from 2D bitmap image data or 2D tomographic image data, being composed of pixels or voxels, cannot be used in a 3D CAD model-based process such as creating a new model or running a simulation.

To solve this problem, a 3D CAD model has been created by segmentation and reconstruction of 2D tomographic image data, but the result was a single-color model without a realistic facial appearance. A texture mapping technique can be used to apply the patient's facial photograph to such a single-color model; however, because the tomographic scan and the facial photograph come from different equipment, the texture mapping errors arising from the differing outputs of the two devices, due to equipment characteristics, patient movement, and the angle between the camera lens and the patient, can be large. Moreover, while the 3D CAD model is created from the 2D tomographic image data, the nasal cavity and sinus portions of the nose are generated as part of the model under the conditions of automatic segmentation, and they subsequently interfere with texture mapping and simulation.

KR 10-1099732 B1

SUMMARY OF THE INVENTION It is an object of the present invention to provide a process based on a 3D CAD model.

It is another object of the present invention to provide data on which precise simulation can be performed, by solving the problem that, when the facial surface contained in a 3D head image is distinguished to generate the 3D facial surface model, unnecessary parts such as the nasal cavity and the sinuses, which occupy the empty space inside the nostrils, are recognized as skin surface.

It is another object of the present invention to provide a texture mapping technique that can resolve the errors arising from the differing outputs of the two devices, caused by differences in equipment characteristics, patient movement, and the angle between the camera lens and the patient during tomographic imaging (CT, etc.) and facial photography.

It is another object of the present invention to provide a 3D face model constructed using only the images having the best visibility among a plurality of images.

It is another object of the present invention to provide an aesthetic 3D face model, free of the errors that can occur during the texture mapping process, by registering the output of a 3D color scanner to the 3D facial surface model.

It is another object of the present invention to provide a 3D face model on which dental procedures can be simulated and the results reflected in real time.

It is another object of the present invention to enable simulation to be performed with data based on the doctor's diagnosis and simulation results reflected in the 3D face model.

According to an aspect of the present invention, there is provided a face model generation method for facial change simulation as follows.

In a method for generating a face model that reflects facial deformation according to a tooth movement simulation, the method comprises: a first step of acquiring 2D facial photograph data and a 3D head image and restoring a 3D facial surface model from the acquired 3D head image; a second step of displaying marking points on the acquired 2D facial photograph data and displaying corresponding marking points on the 3D facial surface model; a third step of generating a transformed 3D facial surface model that matches the facial appearance in the 2D facial photograph data, by moving, rotating, and scaling the 3D facial surface model on the basis of the displayed marking points so that its position and size fit the 2D facial photograph data; a fourth step of generating a planar model by projecting the transformed 3D facial surface model onto a texture plane based on the 2D facial photograph data, and performing a parameterization process on the 2D facial photograph data and the transformed 3D facial surface model to generate a texture mesh (Texture-Mesh); a fifth step of performing a visibility rank process that assigns visibility priorities by comparing normal vectors of the 3D facial surface model with normal vectors of the texture mesh; and a sixth step of performing texture mapping by collecting only the best-visibility textures identified by the visibility rank process, using the 2D facial photograph data, the 3D facial surface model, and the texture mesh generated through the parameterization process.

In the fifth step, the visibility rank process compares, for each of one or more 2D facial photograph data, the angle formed between the normal vector of a predetermined region for texture mapping on the 3D facial surface model and the normal vector of the texture mesh; a visibility check is performed and a visibility rank is assigned to each region. The 2D image with the best visibility for a given region can thereby be identified, so that texture mapping is performed using the data having the best visibility.


According to the present invention, a 3D CAD model-based process is provided. By solving the problem that, in distinguishing the facial surface contained in a 3D head image to generate the 3D facial surface model, unnecessary parts such as the nasal cavity and the sinuses, which occupy the empty space inside the nostrils, are recognized as skin surface, data enabling precise and accurate simulation can be provided.

The invention also provides a texture mapping technique that can resolve the errors arising from the differing outputs of the two devices, caused by differences in equipment characteristics, patient movement, and the angle between the camera lens and the patient during tomographic imaging (CT, etc.) and facial photography.

By providing a 3D face model composed only of the images with the best visibility among a plurality of images, and by registering the output of a 3D color scanner to the 3D facial surface model, the invention has the effect of providing an aesthetic 3D face model free of the errors that can occur during the texture mapping process.

The present invention provides a 3D face model on which dental procedures can be simulated and reflected in real time, and enables simulation to be performed with data based on the doctor's diagnosis and simulation results reflected in the 3D face model.

FIG. 1 is a diagram showing marking points displayed on 2D facial photograph data.
FIG. 2 is a diagram showing reconstruction of a 3D facial surface model by composing a 3D head image having a volume from head tomographic image data.
FIG. 3 is a diagram showing the process of removing the sinuses and nasal cavity included in the reconstructed 3D facial surface model.
FIG. 4 is a diagram showing marking points displayed on the restored 3D facial surface model.
FIG. 5 is a diagram showing 3D landmark-based registration, in which rotation, translation, and scaling about an arbitrary axis are performed so that the restored 3D facial surface model can be mapped to the 2D facial photograph data, creating a transformed 3D facial surface model.
FIG. 6 is a diagram showing the texture mesh generated by parameterizing the transformed 3D facial surface model.
FIG. 7 is a diagram showing the textured 3D face model generated by the texture mapping technique.
FIG. 8 is a flowchart showing the face model generation method for dental procedure simulation.

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. The terms and words used in this specification and the claims should not be construed as limited to their ordinary or dictionary meanings; on the principle that an inventor may appropriately define terms to describe his or her own invention in the best way, they should be interpreted with meanings and concepts consistent with the technical idea of the present invention. The embodiments described in this specification and the configurations shown in the drawings are therefore merely the most preferred embodiments of the present invention and do not represent all of its technical ideas, and it should be understood that various equivalents and modifications are possible.

FIG. 1 is a diagram showing marking points (10, 2D Landmarks) displayed on 2D facial photograph data (2D Face Picture Image).

As shown in FIG. 1, marking points (10, 2D Landmarks) are displayed on the 2D facial photograph data (2D Face Picture Image). For example, for a frontal face, two marking points can be placed at the outer corners of the two eyes and two at the corners of the mouth. These marking points are later used to perform the 3D affine landmark-transform and texture coordinate mapping. In other words, they serve to correct the texture mapping errors that arise from the differing outputs of the two devices, due to differences in equipment characteristics, patient movement, and the angle between the camera lens and the patient during tomographic imaging (CT, etc.) and facial photography; a detailed description is given later.

Referring to FIG. 1, a 2D face picture image in a file format such as BMP, JPEG, or GIF is acquired using a digital camera, mobile phone camera, or the like. The data obtained in this way is 2D bitmap image data, which is two-dimensional only and cannot be viewed three-dimensionally. Moreover, 2D face picture data (2D Face Picture Image) by itself cannot be used to create a new model or to perform a simulation. The present invention therefore takes 2D facial photograph data (2D Face Picture Image) and head tomographic image data (DICOM series) as input data and applies 3D CAD model restoration and texture mapping techniques to generate a textured 3D facial surface CAD model, overcoming the above limitations. That is, in medical practice, a 3D CAD model-based process, such as creating a new model or performing a simulation, becomes possible starting from image data composed of pixels or voxels. Here, the 3D CAD model comprises points, lines, and faces. After an image composed of pixels or voxels is reconstructed into a 3D CAD model, a texture mapping technique is applied to create a textured face surface, which is then used to create new models or run simulations.
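
To make the data structures concrete, the following is a minimal sketch (an illustration under assumed conventions, not the patent's implementation) of a 3D CAD surface model made of points and triangular faces, as opposed to pixel/voxel image data. All snippets in this description use Python with NumPy.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class FaceSurfaceModel:
    """Minimal 3D CAD surface model: points and the faces built on them.

    Unlike pixel/voxel image data, this representation supports
    model-based operations such as transformation and simulation.
    """
    vertices: np.ndarray  # (N, 3) float point coordinates, e.g. in mm
    faces: np.ndarray     # (M, 3) int vertex indices per triangle

    def face_normals(self):
        # Directionality of each face: normalized edge cross product.
        v0, v1, v2 = (self.vertices[self.faces[:, i]] for i in range(3))
        n = np.cross(v1 - v0, v2 - v0)
        return n / np.linalg.norm(n, axis=1, keepdims=True)
```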

However, the method of generating a textured face surface is not limited to the above. For example, a 3D color scanner (textured 3D scanner), or any other device or software capable of producing a 3D surface model, may be used. In that case, point cloud data is formed from the 3D face surface and from the data scanned with the 3D color scanner, the acquired data sets are registered together, and the 3D color scanner output is mapped onto the 3D face surface to create the 3D face model (textured face surface). Since a 3D color scanner scans the face itself and acquires data with the texture already embedded, no separate texture application is needed; the errors that can occur during the texture mapping process therefore do not arise, and the data closely resembles the actual appearance. However, because the result consists of a relatively small number of polyhedra (polygonal surfaces), it is difficult to express precise detail, and to perform further correction, that is, simulation, an additional step of registering the 3D color scanner output to the 3D facial surface model is required. A further disadvantage is that a 3D color scanner must be provided in addition to the CBCT equipment commonly available in dental clinics.

Accordingly, the following description takes 2D facial photograph data (2D Face Picture Image) and head tomographic image data (DICOM series) as input data and applies the 3D CAD model restoration technique and the texture mapping technique to generate the 3D face model (Textured Face Surface).

FIG. 2 shows how a 3D head image having a volume is composed from the head tomographic image data (DICOM series), the facial skin region is segmented, and the 3D facial surface model (3D Face Surface) is restored by reconstruction based on the segmentation.

Head tomographic image data (DICOM series) can be obtained from medical imaging equipment such as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasonic diagnostic devices. Acquisition of the tomographic image data (DICOM series) is not limited to these, and can be performed by various other methods such as positron emission tomography (PET).

Then, the acquired head tomographic image data (DICOM series) are combined to create a 3D head image having a volume.

Then, the facial skin region is segmented and reconstructed in 3D to restore the 3D facial surface model (3D Face Surface).

Here, segmentation of the facial skin region means selecting the desired region by forming a boundary line. For example, a 3D head image is formed by stacking a plurality of slice images in layers; the boundary of the facial surface is found where the pixel values change abruptly at the interface between the air layer and the skin, and drawing this boundary corresponds to segmentation. In other words, the user distinguishes the parts to be used as data, such as skin, jaw bone, crowns, and roots, from the CT or MRI images. Reconstruction is then performed on the 3D head image, based on the segmented information, using a technique such as the marching cubes algorithm to generate the 3D facial surface model (3D Face Surface, a 3D facial surface CAD model).
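
As an illustration of this step (a minimal sketch, not the patented implementation: the use of scikit-image and the -300 HU iso-level for the air/skin boundary are assumptions), segmentation by thresholding plus marching cubes reconstruction can be written as:

```python
import numpy as np
from skimage import measure

def reconstruct_face_surface(ct_volume, iso_level=-300.0, spacing=(1.0, 1.0, 1.0)):
    """Restore a surface mesh from a stacked CT volume (DICOM series).

    ct_volume : 3D array of Hounsfield units, slices stacked along axis 0.
    iso_level : assumed threshold separating the air layer from skin.
    spacing   : voxel size in mm, as read from the DICOM headers.
    """
    # Marching cubes extracts the iso-surface where voxel values cross
    # iso_level, i.e. the abrupt change at the air/skin boundary found
    # during segmentation.
    verts, faces, normals, _ = measure.marching_cubes(
        ct_volume, level=iso_level, spacing=spacing)
    return verts, faces, normals
```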

As a result, the 3D facial surface model created by segmentation and reconstruction of the 3D head image is data whose interior is empty, that is, it represents only the facial skin. Nevertheless, such a 3D facial surface model is a 3D CAD model with points, lines, and faces and has directionality, so CAD model-based application processes, such as creating a new model or performing a simulation with it, become possible.

FIG. 3 is a diagram illustrating the process of removing the sinus and nasal cavity portions included in the restored 3D facial surface model (3D Face Surface). FIG. 4 is a diagram showing marking points (20, 3D Landmarks) displayed on the restored 3D facial surface model. FIG. 5 illustrates the 3D affine landmark-transform, which rotates, translates, and scales the restored 3D facial surface model so that it can be mapped to the 2D facial photograph data, creating a transformed 3D face surface aligned with the 2D face picture image.

Referring to FIG. 3, the 3D facial surface model created through the above process includes the sinuses, nasal cavities, and similar structures that occupy the empty space inside the nostrils. These can interfere with data transformation when texture mapping or simulation is performed. Therefore, the unnecessary parts of the 3D face surface are removed through a Boolean operation with a quadric surface, which is relatively easy to compute, over the polyhedron of interest (POI), leaving the facial skin that is directly used for new model creation or simulation.
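
The patent performs this removal as a Boolean operation between the polyhedron of interest and a quadric surface. As a simplified stand-in (the ellipsoidal cut region and its parameters are assumptions for illustration), faces falling inside a bounding ellipsoid around the nasal region can be discarded:

```python
import numpy as np

def remove_interior_region(verts, faces, center, radii):
    """Drop mesh faces whose vertices lie inside an ellipsoidal region.

    center, radii : hypothetical, patient-specific parameters bounding
    the nasal cavity / sinus volume to be cut away.
    """
    # Normalized ellipsoid test: inside if sum(((x - c) / r)^2) < 1.
    d = (verts - center) / radii
    inside = np.sum(d * d, axis=1) < 1.0
    # Keep only the faces that have no vertex inside the cut region.
    keep = ~inside[faces].any(axis=1)
    return faces[keep]
```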

As shown in FIG. 4, marking points (20, 3D Landmarks) are then displayed on the generated 3D facial surface model at positions corresponding to the marking points (10, 2D Landmarks) displayed on the 2D face picture image.

Then, as shown in FIG. 5, in order to fit the 3D facial surface model carrying marking points (20, 3D Landmarks) to the 2D facial photograph data carrying marking points (10, 2D Landmarks), the order and shape of the marking points on the 3D face surface are matched with the order and shape of the marking points on the 2D face picture image, and the model is rotated, translated, and scaled to fit. The set of marking points on the 2D face picture image and the set on the 3D face surface thereby come to share coordinates (coordinates mapping, shared local coordinates).

Applying these operations to the 3D face surface aligns the 2D face picture data and the 3D face surface to the same orientation and shape, producing a transformed 3D face surface. This process is called the 3D affine landmark-transform.
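
A minimal sketch of such a landmark-driven transform, under the assumption that the 2D landmarks are first lifted onto the texture plane (z = 0) and that a general 3D affine matrix is fitted by least squares:

```python
import numpy as np

def affine_landmark_transform(src_pts, dst_pts):
    """Fit a 3D affine transform carrying source marking points onto
    destination marking points in a least-squares sense.

    src_pts, dst_pts : (N, 3) arrays of corresponding landmarks, N >= 4.
    For alignment to a photograph, the 2D landmarks can be lifted onto
    the texture plane (z = 0) beforehand (an assumption of this sketch).
    """
    n = src_pts.shape[0]
    src_h = np.hstack([src_pts, np.ones((n, 1))])  # homogeneous coords
    # Solve src_h @ M = dst_pts for the 4x3 affine matrix M.
    M, *_ = np.linalg.lstsq(src_h, dst_pts, rcond=None)
    return M

def apply_transform(verts, M):
    """Rotate, translate, and scale every mesh vertex with the fit."""
    vh = np.hstack([verts, np.ones((len(verts), 1))])
    return vh @ M
```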

FIG. 6 shows the texture mesh (Texture-Mesh) created by parameterizing the transformed 3D face surface obtained through the 3D affine landmark-transform.

The parameterization process is as follows. Starting from the transformed 3D face surface of FIG. 5, a texture plane based on the 2D facial photograph data is generated, and the transformed 3D face surface is projected onto it (projection-plane generation).

The coordinates of the 2D facial photograph data (2D Face Picture Image) and of the transformed 3D face surface are normalized and synchronized by adjusting axis values, aligning their extents, and adjusting sizes (texture parameter space normalization). In this way, the transformed 3D facial surface model is converted into 2D with the same coordinates as the 2D facial photograph data; the result is called texture-mesh data.

Here, the texture-mesh data is a 3D CAD model composed of points, lines, and faces; it is made up of polyhedra (polygonal surfaces) that have directionality and a distinct inside and outside. In the present invention, the surface constituting the texture mesh is formed of tens to hundreds of thousands of triangles, so that positions and related quantities can easily be computed in software.
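
As an illustrative sketch of the projection and normalization (the orthographic projection along the viewing axis is an assumption; the patent states only that a photograph-based texture plane is generated and the coordinate spaces are normalized and synchronized):

```python
import numpy as np

def parameterize_to_texture_mesh(transformed_verts, image_width, image_height):
    """Project transformed 3D face-surface vertices onto the texture
    plane and normalize them into the photograph's coordinate space."""
    # Drop depth: orthographic projection onto the texture plane.
    uv = transformed_verts[:, :2].astype(float)
    # Texture parameter space normalization: adjust axis values and
    # extents so both data sets share one parameter space.
    uv -= uv.min(axis=0)
    uv /= uv.max(axis=0)
    # Scale into the pixel coordinates of the 2D face picture image.
    uv *= np.array([image_width - 1, image_height - 1])
    return uv  # per-vertex texture coordinates
```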

When texture mapping is performed, photographs taken from several directions can be used to generate a face model closer to the real face. In such a case, the transformed 3D facial surface models are used. A 'visibility rank' process performs a region visibility check using the normal vectors of the texture mesh (Texture-Mesh) generated through the parameterization process and the normal vectors of the 3D facial surface model, and then selects the part with the best visibility. This reflects, for example, that the visibility of the nose, eyes, cheeks, and so on varies with the direction from which the photograph was taken. Specifically, for at least one polygonal region of the 3D facial surface model (the first region), the first normal vector is determined, and for the corresponding predetermined region of the texture mesh data (the second region), the second normal vector is determined. The closer the angle formed by the first and second normal vectors is to 180 degrees, the better the visibility of the data is judged to be, and priorities are assigned through the visibility rank process. Using these priorities, the textures with better visibility can be selected and mapped during texture mapping.
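
A minimal sketch of this visibility check, scoring each region by the angle between the first normal vector (3D face surface) and the second normal vector (texture mesh); angles nearer 180 degrees rank as better visibility:

```python
import numpy as np

def visibility_rank(surface_normals, mesh_normals):
    """Return a per-region visibility score from paired normal vectors.

    surface_normals : (R, 3) first normal vectors on the 3D face surface.
    mesh_normals    : (R, 3) second normal vectors on the texture mesh.
    """
    a = surface_normals / np.linalg.norm(surface_normals, axis=1, keepdims=True)
    b = mesh_normals / np.linalg.norm(mesh_normals, axis=1, keepdims=True)
    cos = np.clip(np.sum(a * b, axis=1), -1.0, 1.0)
    # Larger angle (closer to 180 degrees) means better visibility.
    return np.degrees(np.arccos(cos))
```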

FIG. 7 is a diagram showing the textured face model (Textured Face Surface) generated by the texture mapping technique.

Texture mapping uses the 2D face picture data, the 3D face surface, and the texture-mesh data. For each unit region constituting the texture mesh, a texture is extracted from the corresponding region of the 2D face picture image, and the texture is placed at the corresponding 3D coordinates of the 3D face surface, which match the coordinates of the texture mesh, to obtain the textured face surface data.

At this time, when a plurality of 2D facial photograph data (2D Face Picture Images) and the corresponding plurality of transformed 3D facial surface models (Transformed 3D Face Surfaces) are available, texture mapping according to the priority order of the visibility rank described above yields a 3D face model with better visibility.

Texture mapping is a technique that applies a texture, such as a desired pattern or color, to a surface to enhance the realism of the rendered image or object. Applying a 2D bitmap image to a 3D surface model in this way yields a realistic and refined model from the 2D and 3D image data.
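
For illustration, nearest-neighbour extraction of per-vertex texture colors from one photograph at the texture-mesh coordinates might look like this (a sketch following the conventions of the earlier snippets, not the patented procedure):

```python
import numpy as np

def sample_texture(image, uv):
    """Extract per-vertex colors from a 2D face picture image.

    image : HxWx3 bitmap (e.g. a decoded BMP/JPEG as an array).
    uv    : (N, 2) pixel-space texture coordinates per vertex.
    """
    cols = np.clip(np.round(uv[:, 0]).astype(int), 0, image.shape[1] - 1)
    rows = np.clip(np.round(uv[:, 1]).astype(int), 0, image.shape[0] - 1)
    # Nearest-neighbour sampling; a production pipeline would interpolate.
    return image[rows, cols]
```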

FIG. 8 is a flowchart showing the face model generation method for dental procedure simulation.

First, DICOM series data is acquired using CT, MRI, or the like, a 3D head image is composed from the acquired data, and segmentation and reconstruction are performed to obtain the 3D face surface. Then the sinuses, nasal cavities, and other structures that act as obstacles to texture mapping or simulation are removed, creating a 3D facial surface model suited to texture mapping (Texture-Mapping) and simulation.

Second, 2D face picture data (2D Face Picture Image) is acquired and marking points (10, 2D Landmarks) are displayed on it.

Third, marking points (20, 3D Landmarks) are displayed on the 3D facial surface model at positions corresponding to the marking points (10, 2D Landmarks) displayed on the 2D face picture data, and a 3D affine landmark-transform is performed to obtain the transformed 3D face surface. In this way the facial orientation of the 2D facial photograph data (2D Face Picture Image) and that of the 3D facial surface model are matched, resolving the texture mapping errors caused by misalignment between the 2D facial photograph data and the 3D face surface.

Fourth, the parameterization process is performed on the transformed 3D face surface to obtain texture-mesh data. That is, the transformed 3D facial surface model is converted into 2D so as to fit the 2D facial photograph data (2D Face Picture Image); the generated texture mesh (Texture-Mesh) has the same coordinates as the 2D face picture data, which makes texture mapping possible.

At this time, one or more 2D face picture data may be provided as input data. When a plurality of 2D face picture images taken from various directions are input, the visibility priorities are compared between the texture mesh and 3D face surface generated from previously input images and the texture mesh and 3D face surface model generated from the currently input image, and for each region the highest visibility priority is retained as the visibility rank, preparing the data for texture mapping.
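
Continuing the illustrative sketch, retaining the highest visibility priority per region as additional photographs are input can build directly on the visibility_rank scores above; the running records would be initialized with, for example, np.full(num_regions, -np.inf):

```python
import numpy as np

def update_best_visibility(best_scores, best_image_ids, new_scores, image_id):
    """Keep, per region, the photograph with the best visibility rank.

    best_scores, best_image_ids : running per-region records.
    new_scores : visibility_rank() output for the newly input image.
    image_id   : index of that image among the input photographs.
    """
    better = new_scores > best_scores
    best_scores[better] = new_scores[better]
    best_image_ids[better] = image_id
    return best_scores, best_image_ids
```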

Fifth, in performing the texture mapping process with the one or more 2D face picture data and the texture meshes corresponding to those images obtained through the above process, textures with high visibility priority are selected using the visibility rank and mapped onto the 3D face surface model to obtain the textured face surface.

As a result, textured face surface data as shown in FIG. 7 is obtained. Because it uses the 3D face surface data generated in the preceding steps, the textured model shares the coordinate system of the 3D head image on which the 3D facial surface model is based. It can therefore be used for every task that relies on the coordinate system of the 3D head image, including face change simulation that reflects facial shape changes due to tooth movement in real time. In other words, a realistic and three-dimensional 3D face model resembling the actual face is created for dental procedure simulation.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to the disclosed embodiments, and it will be understood that various modifications and changes may be made without departing from the scope of the appended claims.

10: Marking point displayed on the 2D facial photograph data
20: Marking point displayed on the 3D facial surface model

Claims (4)

A face model generation method that reflects facial deformation according to a tooth movement simulation, the method comprising:
a first step of acquiring 2D facial photograph data and a 3D head image and restoring a 3D facial surface model from the acquired 3D head image;
a second step of displaying marking points on the acquired 2D facial photograph data and displaying corresponding marking points on the 3D facial surface model;
a third step of generating a transformed 3D facial surface model matching the facial appearance in the 2D facial photograph data, by moving, rotating, and scaling the 3D facial surface model on the basis of the displayed marking points so that its position and size fit the 2D facial photograph data;
a fourth step of generating a planar model by projecting the transformed 3D facial surface model onto a texture plane based on the 2D facial photograph data, and performing a parameterization process on the 2D facial photograph data and the transformed 3D facial surface model to generate a texture mesh (Texture-Mesh);
a fifth step of performing a visibility rank process that assigns visibility priorities by comparing normal vectors of the 3D facial surface model with normal vectors of the texture mesh; and
a sixth step of performing texture mapping by collecting only the best-visibility textures identified by the visibility rank process, using the 2D facial photograph data, the 3D facial surface model, and the texture mesh generated through the parameterization process,
thereby generating a face model for the dental procedure simulation.
The method according to claim 1,
wherein, in the fifth step, the visibility rank process compares, for each of one or more 2D facial photograph data, the angle formed between the normal vector of a predetermined region for texture mapping on the 3D facial surface model and the normal vector of the texture mesh, performs a visibility check, and assigns a visibility rank to each region, so that the 2D facial photograph data having the best visibility for that region can be identified and texture mapping can be performed using the data with the best visibility.
3. (Deleted)
4. (Deleted)
KR1020140084001A 2014-07-04 2014-07-04 The face model generation method for the Dental procedure simulation KR101744079B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020140084001A KR101744079B1 (en) 2014-07-04 2014-07-04 The face model generation method for the Dental procedure simulation
PCT/KR2015/006976 WO2016003258A1 (en) 2014-07-04 2015-07-06 Face model generation method for dental procedure simulation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020140084001A KR101744079B1 (en) 2014-07-04 2014-07-04 The face model generation method for the Dental procedure simulation

Publications (2)

Publication Number Publication Date
KR20160004865A KR20160004865A (en) 2016-01-13
KR101744079B1 (en) 2017-06-09

Family

ID=55019687

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020140084001A KR101744079B1 (en) 2014-07-04 2014-07-04 The face model generation method for the Dental procedure simulation

Country Status (2)

Country Link
KR (1) KR101744079B1 (en)
WO (1) WO2016003258A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20230137779A (en) 2022-03-22 2023-10-05 주식회사 레이 method of producing 3-dimensional face scan data

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127122A (en) * 2016-06-16 2016-11-16 厦门道拓科技有限公司 Head portrait detection method based on face action identification, system and intelligent terminal
CN108876886B (en) * 2017-05-09 2021-07-27 腾讯科技(深圳)有限公司 Image processing method and device and computer equipment
CN107748869B (en) * 2017-10-26 2021-01-22 奥比中光科技集团股份有限公司 3D face identity authentication method and device
CN108717730B (en) * 2018-04-10 2023-01-10 福建天泉教育科技有限公司 3D character reconstruction method and terminal
KR102289610B1 (en) * 2019-05-09 2021-08-17 오스템임플란트 주식회사 Method and apparatus for providing additional information of teeth
CN110428491B (en) * 2019-06-24 2021-05-04 北京大学 Three-dimensional face reconstruction method, device, equipment and medium based on single-frame image
CN112819741B (en) * 2021-02-03 2024-03-08 四川大学 Image fusion method and device, electronic equipment and storage medium
WO2022191575A1 (en) * 2021-03-09 2022-09-15 고려대학교 산학협력단 Simulation device and method based on face image matching
CN113112617B (en) * 2021-04-13 2023-04-14 四川大学 Three-dimensional image processing method and device, electronic equipment and storage medium
CN115120372B (en) * 2022-05-25 2023-04-14 北京大学口腔医学院 Three-dimensional prosthesis form and position determining method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000008415A1 (en) 1998-08-05 2000-02-17 Cadent Ltd. Imaging a three-dimensional structure by confocal focussing an array of light beams
JP2011004796A (en) 2009-06-23 2011-01-13 Akita Univ Jaw oral cavity model using optical molding technique and method for manufacturing the same

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100654396B1 (en) * 2005-12-15 2006-12-06 제양수 3d conversion apparatus for 2d face image and simulation apparatus for hair style using computer
KR20090092473A (en) * 2008-02-27 2009-09-01 오리엔탈종합전자(주) 3D Face Modeling Method based on 3D Morphable Shape Model
KR100942026B1 (en) * 2008-04-04 2010-02-11 세종대학교산학협력단 Makeup system and method for virtual 3D face based on multiple sensation interface
JP2011039869A (en) * 2009-08-13 2011-02-24 Nippon Hoso Kyokai <Nhk> Face image processing apparatus and computer program
KR101397476B1 (en) * 2012-11-28 2014-05-20 주식회사 에스하이텍 Virtual cosmetic surgery method by 3d virtual cosmetic surgery device


Also Published As

Publication number Publication date
WO2016003258A1 (en) 2016-01-07
KR20160004865A (en) 2016-01-13

Similar Documents

Publication Publication Date Title
KR101744079B1 (en) The face model generation method for the Dental procedure simulation
US11735306B2 (en) Method, system and computer readable storage media for creating three-dimensional dental restorations from two dimensional sketches
JP4979682B2 (en) Method and system for pre-operative prediction
CN107403463B (en) Human body representation with non-rigid parts in an imaging system
KR101744080B1 (en) Teeth-model generation method for Dental procedure simulation
EP3591616A1 (en) Automated determination of a canonical pose of a 3d dental structure and superimposition of 3d dental structures using deep learning
KR101560508B1 (en) Method and arrangement for 3-dimensional image model adaptation
KR101556992B1 (en) 3d scanning system using facial plastic surgery simulation
US20130107003A1 (en) Apparatus and method for reconstructing outward appearance of dynamic object and automatically skinning dynamic object
WO2018075053A1 (en) Object pose based on matching 2.5d depth information to 3d information
CN101339670A (en) Computer auxiliary three-dimensional craniofacial rejuvenation method
CN112562082A (en) Three-dimensional face reconstruction method and system
CN113628327A (en) Head three-dimensional reconstruction method and equipment
Dicko et al. Anatomy transfer
CN114450719A (en) Human body model reconstruction method, reconstruction system and storage medium
Li et al. ImTooth: Neural Implicit Tooth for Dental Augmented Reality
CN116912433B (en) Three-dimensional model skeleton binding method, device, equipment and storage medium
Noh et al. Retouch transfer for 3D printed face replica with automatic alignment
JP7251003B2 (en) Face mesh deformation with fine wrinkles
Catherwood et al. 3D stereophotogrammetry: post-processing and surface integration
Takács et al. Facial modeling for plastic surgery using magnetic resonance imagery and 3D surface data
JP2022139243A (en) Generation device, generation method, and program
de Sousa Azevedo 3d object reconstruction using computer vision: Reconstruction and characterization applications for external human anatomical structures
Ali et al. Hair Shape Modeling from Video Captured Images and CT Data
Wójcicka et al. Structure from Motion in Three–Dimensional Modeling of Human Head

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right