KR101744079B1 - The face model generation method for the Dental procedure simulation
- Publication number
- KR101744079B1
- Authority
- KR
- South Korea
- Prior art keywords
- face
- model
- facial
- data
- image data
- Prior art date
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61C—DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
- A61C7/00—Orthodontics, i.e. obtaining or maintaining the desired position of teeth, e.g. by straightening, evening, regulating, separating, or by correcting malocclusions
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61C—DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
- A61C19/00—Dental auxiliary appliances
- A61C19/04—Measuring instruments specially adapted for dentistry
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Veterinary Medicine (AREA)
- Epidemiology (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Dentistry (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
- Image Generation (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
The present invention relates to a method of generating a face model that reflects facial changes resulting from a tooth movement simulation in an orthodontic dental procedure. More particularly, the present invention relates to a technique for generating a model similar to the actual face from 2D facial photograph data and head tomographic image data, using a 3D facial surface model restoration technique together with either a texture mapping technique or a technique of matching the output of a 3D color scanner to the 3D facial surface model. That is, the present invention relates to a method for generating a face model for dental procedure simulation that can reflect changes of facial shape according to a tooth movement simulation in real time.
According to the present invention, a 3D CAD model-based process is provided. In distinguishing the facial surface contained in a 3D head image when generating the 3D facial surface model, the problem of unnecessary parts such as the nasal cavity and sinuses, which occupy the empty space inside the nostrils, being recognized as skin surface is solved, providing data on which precise and accurate simulation can be performed. A texture mapping technique is also provided that can resolve texture mapping errors arising from the differing results of the two devices used for tomography (CT, etc.) and facial photography, caused by the differing characteristics of the equipment, patient movement, and the angle between the camera lens and the patient. Further, by providing a 3D face model constructed using only the images with the best visibility among a plurality of images, and by matching the results of a 3D color scanner to the 3D facial surface model, the invention provides an aesthetic 3D face model free of the errors that can occur during texture mapping. The invention also provides a 3D face model in which a dental procedure can be simulated and reflected in real time, and enables simulation to be performed with data based on the doctor's diagnosis and simulation results reflected in the face model.
Description
The present invention relates to a method of generating a face model that reflects facial changes resulting from a tooth movement simulation in an orthodontic dental procedure. More particularly, the present invention relates to a technique for generating a model similar to the actual face from 2D facial photograph data and head tomographic image data, using a 3D facial surface model restoration technique together with either a texture mapping technique or a technique of matching the output of a 3D color scanner to the 3D facial surface model. That is, the present invention relates to a method for generating a face model for dental procedure simulation that can reflect changes of facial shape according to a tooth movement simulation in real time.
To obtain data used for medical consultation, 2D bitmap image data is acquired by photographing with a digital camera, a mobile phone camera, or the like, and a plurality of 2D tomographic image data is collected through computed tomography (CT), magnetic resonance imaging (MRI), and similar scans. Because each type of image data is generated on a different principle, the characteristics of the data obtainable from each differ, as do the methods and areas in which the data can be used.
Therefore, a volume model created from 2D bitmap image data or from 2D tomographic image data composed of pixels or voxels cannot be used in a 3D CAD model-based process.
To address this, a 3D CAD model has been created by segmentation and reconstruction using 2D tomographic image data, but the result was a single-color model without a realistic facial appearance. A texture mapping technique can be used to apply the patient's facial photograph to such a single-color model. However, there is a problem that texture mapping errors arising from the differing results of the two devices, caused by the differing characteristics of the equipment, patient movement, and the angle between the camera lens and the patient, can be large. In addition, while creating the 3D CAD model from the 2D tomographic image data, the nasal cavity and sinus portions inside the nose are generated as part of the model under the conditions of automatic segmentation, and these subsequently interfere with texture mapping and simulation.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide a process based on a 3D CAD model.
It is a further object to solve the problem that, in distinguishing the facial surface contained in the 3D head image when generating the 3D facial surface model, unnecessary parts such as the nasal cavity and sinuses, which occupy the empty space inside the nostrils, are recognized as skin surface, thereby providing data on which precise and accurate simulation can be performed.
It is another object to provide a texture mapping technique that can resolve the errors arising from the differing results of the two devices, caused by the differing characteristics of the equipment, patient movement, and the angle between the camera lens and the patient.
It is another object of the present invention to provide a 3D face model constructed using only the images having the best visibility from among a plurality of images.
It is another object of the present invention to provide an aesthetic 3D face model, free of the errors that may occur in the texture mapping process, by matching the results of a 3D color scanner with the 3D facial surface model.
It is another object of the present invention to provide a 3D face model in which dental procedures can be simulated and reflected in real time.
It is another object of the present invention to make it possible to carry out simulation with data based on the doctor's diagnosis and simulation results reflected in the 3D face model.
According to one aspect of the present invention, there is provided a method for generating a face model reflecting facial deformation according to a tooth movement simulation, the method comprising: a first step of acquiring 2D facial photograph data and a 3D head image, and restoring a 3D facial surface model using the obtained 3D head image; a second step of displaying mark points on the obtained 2D facial photograph data and displaying mark points on the 3D facial surface model; a third step of generating a transformed 3D facial surface model matching the facial appearance of the 2D facial photograph data, by moving, rotating, and scaling the 3D facial surface model based on the displayed mark points so that its position and size fit the 2D facial photograph data; a fourth step of generating a plane model by projecting the transformed 3D facial surface model onto a texture plane based on the 2D facial photograph data, and performing a parameterization process on the 2D facial photograph data and the transformed 3D facial surface model to generate a texture mesh; a fifth step of performing a visibility rank process that determines visibility priorities by comparing the normal vectors of the 3D facial surface model with the normal vectors of the texture mesh; and a sixth step of performing texture mapping by collecting only the best-visibility textures identified by the visibility rank process, using the 2D facial photograph data, the 3D facial surface model, and the texture mesh generated through the parameterization process.
In the fifth step, the visibility rank process compares, for each of one or more 2D facial photograph data, the angle formed by the normal vector of a predetermined region for texture mapping on the 3D facial surface model and the normal vector of the texture mesh, thereby performing a visibility check, and assigns a visibility rank to each region. The 2D image having the highest visibility in the corresponding region can thus be identified, so that texture mapping using the data with the best visibility can be performed.
According to the present invention, a 3D CAD model-based process is provided. By solving the problem that, in distinguishing the facial surface contained in the 3D head image when generating the 3D facial surface model, unnecessary parts such as the nasal cavity and sinuses occupying the empty space inside the nostrils are recognized as skin surface, data on which precise and accurate simulation can be performed is provided.
A texture mapping technique is provided that can resolve errors arising from the differing results of the two devices, caused by the differing characteristics of the equipment, patient movement, and the angle between the camera lens and the patient when performing tomographic imaging (CT, etc.) and facial photography.
By providing a 3D face model composed only of the images with the best visibility among a plurality of images, and by matching the results of a 3D color scanner with the 3D facial surface model, the invention has the effect of providing an aesthetic 3D face model free of errors that may occur during texture mapping.
The present invention also provides a 3D face model in which a dental procedure can be simulated and reflected in real time, and enables simulation to be performed with data based on the doctor's diagnosis and simulation results reflected in the 3D face model.
FIG. 1 is a diagram showing mark points displayed on 2D facial photograph data.
FIG. 2 is a diagram showing the reconstruction of a 3D facial surface model by composing a 3D head image having a volume from head tomographic image data.
FIG. 3 is a view showing the process of removing the nasal cavity and sinuses included in the reconstructed 3D facial surface model.
FIG. 4 is a view showing mark points displayed on the restored 3D facial surface model.
FIG. 5 is a diagram showing the landmark-based registration (3D Affine Landmark-Transform) that performs rotation, translation, and scaling about an arbitrary axis so that the restored 3D facial surface model can be mapped to the 2D facial photograph data, creating a transformed 3D facial surface model.
FIG. 6 is a view showing the texture mesh generated by parameterizing the transformed 3D facial surface model after the landmark-based registration.
FIG. 7 is a diagram showing the textured 3D face model generated by the texture mapping technique.
FIG. 8 is a flowchart showing a method of generating a face model for dental procedure simulation.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. Before doing so, it should be noted that the terms and words used in this specification and claims are not to be construed as limited to their ordinary or dictionary meanings; on the principle that an inventor may appropriately define terms to best describe his or her own invention, they are to be interpreted with meanings and concepts that accord with the technical idea of the present invention. Accordingly, the embodiments described in this specification and the configurations shown in the drawings are merely the most preferred embodiments of the present invention and do not represent all of its technical ideas, and it should be understood that various equivalents and modifications are possible.
FIG. 1 is a diagram showing marking points (10, 2D Landmarks) on 2D face picture data.
As shown in FIG. 1, mark points (10, 2D Landmarks) are displayed on the 2D face picture data (2D Face Picture Image). For example, in the case of a frontal face, two mark points can be placed at the corners of the eyes and two at the corners of the mouth. These are later used in the 3D Affine Landmark-Transform and in texture mapping (texture coordinate mapping). In other words, they serve to correct the texture mapping errors arising from the differing results of the two devices, caused by the differing characteristics of the equipment, patient movement, and the angle between the camera lens and the patient during tomography (CT, etc.) and facial photography; a detailed description is given later.
Referring to FIG. 1, a 2D face picture image having a file format such as BMP, JPEG, or GIF is acquired using a digital camera, a mobile phone camera, or the like. The data thus obtained is 2D bitmap image data, which is only two-dimensional and cannot be viewed stereoscopically. Moreover, 2D face picture data cannot be used directly for creating a new model or performing a simulation. The present invention therefore generates a model similar to the actual face using the 2D face picture data together with head tomographic image data.
However, the method for generating a textured face surface is not limited to this; for example, a method of matching the results of a 3D color scanner to the 3D face surface model may also be used.
Accordingly, the following description proceeds with the method of generating a 3D face model from 2D face picture data and a 3D head image.
FIG. 2 shows how a 3D head image having a volume is composed from head tomographic image data (DICOM series), the facial skin region is segmented, and a 3D face surface model (3D Face Surface) is restored by reconstruction based on the segmentation.
Head tomographic image data (DICOM series) can be obtained from equipment such as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasonic diagnostic imaging. Acquisition of the tomographic image data is not limited to these, however, and can be performed by various other methods such as positron emission tomography (PET).
Then, the acquired head tomographic image data (DICOM series) are combined to create a 3D head image having a volume.
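By way of illustration, this slice-stacking step can be sketched as follows. This is a minimal sketch, not the patented implementation: it assumes a directory of axial CT slices readable with pydicom, and the directory name and the choice of sorting key are illustrative assumptions.

```python
from pathlib import Path

import numpy as np
import pydicom

def load_head_volume(dicom_dir: str) -> np.ndarray:
    """Stack a DICOM series into one 3D voxel volume."""
    slices = [pydicom.dcmread(p) for p in Path(dicom_dir).glob("*.dcm")]
    # Order slices along the scan axis so the stack is geometrically consistent.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    return np.stack([s.pixel_array for s in slices], axis=0)

volume = load_head_volume("head_ct")  # shape: (slices, rows, cols)
```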
Then, the facial skin area is segmented and reconstructed into 3D to restore the 3D face surface.
Here, segmentation of the facial skin region means the task of selecting a desired region by forming a boundary line. For example, a 3D head image is formed by stacking a plurality of images in layers; a boundary of the facial surface can therefore be formed by finding the sudden change in pixel value where the air layer and the facial surface meet, and this operation corresponds to segmentation. That is, the user uses the data obtained from the CT or MRI images to distinguish the parts to be used as data, such as the skin, the jaw bone, the tooth crowns, and the tooth roots. Thereafter, reconstruction is performed on the 3D head image, based on the segmented information, using a technique such as the marching cubes algorithm, to generate 3D face surface model (3D facial surface CAD model) data.
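The segmentation-plus-reconstruction step just described can be sketched as below. This is a minimal sketch assuming the voxel volume from the previous sketch; the skin threshold value is an illustrative assumption, and scikit-image's marching cubes stands in for whatever reconstruction implementation the patented method actually uses.

```python
import numpy as np
from skimage import measure

SKIN_LEVEL = -300.0  # assumed CT value at the air/skin boundary

# Marching cubes extracts the isosurface where voxel values cross SKIN_LEVEL,
# i.e. the boundary where the air layer and the facial surface meet.
verts, faces, normals, _ = measure.marching_cubes(
    volume.astype(np.float32), level=SKIN_LEVEL)
# `verts` and `faces` form the 3D face surface model: a triangle mesh with
# points, lines, and faces, usable in a CAD-model-based process.
```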
The 3D face surface model created by segmentation and reconstruction of the 3D head image is thus data whose interior is empty, that is, only the facial skin. Such a 3D face surface model is nevertheless a 3D CAD model having points, lines, and faces, with directionality, so a CAD model-based application process such as creating a new model or performing a simulation with it is possible.
FIG. 3 illustrates the process of removing the sinus and nasal cavity portions included in the restored 3D face surface model (3D Face Surface). FIG. 4 shows mark points (20, 3D Landmarks) displayed on the restored 3D face surface model. FIG. 5 illustrates the 3D Affine Landmark-Transform, which rotates, translates, and scales the restored 3D face surface model so that it can be mapped to the 2D face picture data, creating a transformed 3D face surface model (Transformed 3D Face Surface).
Referring to FIG. 3, the 3D face surface model created through the above process includes structures such as the sinuses and nasal cavity, which occupy the empty space inside the nostrils. The sinuses, nasal cavity, and the like can interfere with data transformation when performing texture mapping or simulation. These unnecessary parts of the 3D face surface model are therefore removed through a Boolean operation with a quadric surface, which is relatively easy to compute, applied to the polyhedron of interest (POI), so that the facial skin directly used for new model creation or simulation is obtained.
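A simple stand-in for this removal step is sketched below. The patent performs a Boolean operation with a quadric surface over a polyhedron of interest; this sketch only approximates that idea by discarding triangles whose centroids fall inside a hand-placed ellipsoid, and the center and radii values are illustrative assumptions.

```python
import numpy as np

def remove_region(verts: np.ndarray, faces: np.ndarray,
                  center: np.ndarray, radii: np.ndarray) -> np.ndarray:
    """Drop triangles whose centroid lies inside the given ellipsoid."""
    centroids = verts[faces].mean(axis=1)          # one centroid per triangle
    inside = (((centroids - center) / radii) ** 2).sum(axis=1) < 1.0
    return faces[~inside]

# Illustrative placement over the nasal region (voxel coordinates assumed).
faces = remove_region(verts, faces,
                      center=np.array([120.0, 90.0, 80.0]),
                      radii=np.array([25.0, 20.0, 20.0]))
```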
As shown in FIG. 4, mark points (20, 3D Landmarks) are then displayed on the generated 3D face surface model at positions corresponding to the mark points (10, 2D Landmarks) displayed on the 2D face picture image.
Then, as shown in FIG. 5, in order to fit the mark points (20, 3D Landmarks) on the 3D face surface model to the mark points (10, 2D Landmarks) displayed on the 2D face picture data, the order and shape of the mark point set on the 3D Face Surface are matched to those of the mark point set on the 2D Face Picture Image, and the model is rotated (Rotate), translated (Translate), and scaled (Scale) to fit. The set of mark points on the 2D Face Picture Image and the set of mark points on the 3D Face Surface thereby come to share the same coordinates (Coordinates Mapping, Shared Local Coordinates).
Applying these operations to the 3D Face Surface brings the 2D Face Picture Data and the 3D Face Surface into the same orientation and shape, creating a transformed 3D face surface model (Transformed 3D Face Surface). This process is called the 3D Affine Landmark-Transform.
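One common way to realize such a landmark-driven rotation, translation, and scaling is a least-squares similarity fit (Umeyama-style), sketched below. This is a minimal sketch, not the patent's exact transform; the input names `landmarks_3d` and `landmarks_2d_lifted` (the 2D mark points lifted onto the texture plane at z = 0) are assumptions.

```python
import numpy as np

def similarity_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares s, R, t such that dst ~= s * R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)   # 3x3 cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.array([1.0, 1.0, d])
    R = (U * D) @ Vt                            # rotation fitting src to dst
    s = (S * D).sum() / (src_c ** 2).sum()      # uniform scale
    t = mu_d - s * (R @ mu_s)                   # translation
    return s, R, t

# Assumed inputs: corresponding 3D and (lifted) 2D mark point arrays.
s, R, t = similarity_transform(landmarks_3d, landmarks_2d_lifted)
transformed_verts = s * verts @ R.T + t         # the Transformed 3D Face Surface
```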
FIG. 6 shows the texture mesh (Texture-Mesh) created by parameterizing the transformed 3D face surface model obtained through the 3D Affine Landmark-Transform.
Hereinafter, the parameterization process is described. The parameterization process begins from the transformed 3D face surface model: a texture plane based on the 2D face picture data is generated, and the transformed 3D face surface model is projected onto this plane (image-based texture-plane projection and plane generation).
The coordinates of the 2D face picture data and of the projected, transformed 3D face surface model are then normalized and synchronized by adjusting axis values, synchronizing ranges, adjusting sizes, and so on (Texture Parameter Space Normalization). In this way, the transformed 3D face surface model is converted into 2D with the same coordinates as the 2D face picture data; the result is called texture mesh (Texture-Mesh) data.
Here, the texture mesh data is a 3D CAD model composed of points, lines, and faces, made up of polyhedra and polygonal surfaces that have directionality and distinguish inside from outside. In the present invention, the surface constituting the texture mesh is formed of tens to hundreds of thousands of triangles so that positions and the like can easily be computed in the program.
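The projection and normalization just described can be sketched as follows. This is a minimal sketch assuming an orthographic projection onto the x-y plane; the choice of projection axis and the unit-square normalization are illustrative assumptions.

```python
import numpy as np

def parameterize(transformed_verts: np.ndarray) -> np.ndarray:
    """Project vertices onto the x-y texture plane and normalize to UVs."""
    uv = transformed_verts[:, :2].astype(float)  # orthographic: drop depth
    uv = uv - uv.min(axis=0)                     # shift into positive range
    uv = uv / uv.max(axis=0)                     # scale into the unit square
    return uv

texture_mesh_uv = parameterize(transformed_verts)  # one (u, v) per vertex
```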
When performing texture mapping, photographs taken from various directions can be used to generate a face model closer to the real face. In such a case, a visibility check (Regions Visibility Check) is performed for each transformed 3D face surface model, using the normal vectors of the texture mesh (Texture-Mesh) generated through the parameterization process and the normal vectors of the 3D face surface model, and the part with the best visibility is then selected through a 'visibility rank' process. This reflects, for example, that the visibility of the nose, eyes, cheeks, and so on may vary depending on the direction from which the photograph was taken. Specifically, a first normal vector is determined for at least one polygonal area (first area), which is a predetermined area of the 3D face surface model, and a second normal vector is determined for the matched predetermined area (second area) of the texture mesh data. The closer the angle formed by the first and second normal vectors is to 180 degrees, the better the visibility of the data is judged to be, and priorities are assigned through the visibility rank process. Using these priorities, the textures with the best visibility can be selected and mapped during texture mapping.
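A minimal sketch of this visibility-rank idea follows. It assumes per-photograph aligned vertex arrays (`verts_per_image`) and a single flat texture-mesh normal along +z; those names and the sign convention are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np

def face_normals(verts: np.ndarray, faces: np.ndarray) -> np.ndarray:
    """Unit normal of every triangle in the surface model."""
    a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def visibility_rank(verts_per_image, faces,
                    mesh_normal=np.array([0.0, 0.0, 1.0])):
    """For each triangle, index of the photograph with the best visibility.

    The flat texture mesh is assumed to have one normal along +z, so a
    triangle facing that photograph's camera has a surface normal near the
    opposite direction, i.e. an angle near 180 degrees.
    """
    angles = []
    for tv in verts_per_image:            # vertices aligned to each photo
        cos_a = face_normals(tv, faces) @ mesh_normal
        angles.append(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
    return np.argmax(np.stack(angles), axis=0)   # angle nearest 180 wins
```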
FIG. 7 is a view showing the textured face model generated by the texture mapping technique.
For texture mapping, the 2D face picture data, the 3D face surface model, and the texture mesh data are used. A texture is extracted from the area of the 2D face picture image corresponding to each unit area constituting the texture mesh, and is made to correspond to the 3D coordinates of the 3D face surface model matching that area of the texture mesh, yielding textured face surface (Textured Face Surface) data.
At this time, when a plurality of 2D face picture data and the corresponding plurality of transformed 3D face surface models are available, textures can be selected and mapped according to the priorities of the visibility rank process described above, creating a 3D face model with better visibility.
Texture mapping is a technique of applying a texture, such as a desired pattern or color, to a surface to enhance the realism of the image or object being represented. Applying a 2D bitmap image to the 3D surface model in this way yields a realistic and sophisticated model.
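The per-vertex texture lookup this describes can be sketched as below: each mesh vertex fetches its color from the photograph via the UV coordinates produced by the parameterization sketch. Nearest-pixel sampling, the row flip for image coordinates, and the names `face_picture` and `texture_mesh_uv` are assumptions.

```python
import numpy as np

def sample_texture(image: np.ndarray, uv: np.ndarray) -> np.ndarray:
    """Fetch one color per vertex from the 2D face picture (nearest pixel)."""
    h, w = image.shape[:2]
    cols = np.clip(np.rint(uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    # Image rows grow downward while v grows upward, hence the flip.
    rows = np.clip(np.rint((1.0 - uv[:, 1]) * (h - 1)).astype(int), 0, h - 1)
    return image[rows, cols]

vertex_colors = sample_texture(face_picture, texture_mesh_uv)
```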
FIG. 8 is a flowchart showing the method of generating a face model for dental procedure simulation.
First, head tomographic image data (DICOM series) is acquired using CT, MRI, or the like, a 3D head image is composed from this data, and segmentation and reconstruction are performed to obtain a 3D face surface model (3D Face Surface). Thereafter, the sinuses, nasal cavity, and other parts that would act as obstacles in performing texture mapping or simulation are removed, creating a 3D face surface model suited to texture mapping (Texture-Mapping) or simulation.
Second, 2D face picture data (2D Face Picture Image) is obtained and mark points (10, 2D Landmarks) are displayed on it.
Third, mark points (20, 3D Landmarks) are displayed on the 3D face surface model at positions corresponding to the mark points (10, 2D Landmarks) displayed on the 2D face picture data, and a 3D Affine Landmark-Transform is performed to obtain a transformed 3D face surface model. In this way, the facial orientation of the 2D face picture data and that of the 3D face surface model are matched, which resolves the texture mapping errors caused by mismatch between the 2D face picture data and the 3D face surface model.
Fourth, a parameterization process is performed on the transformed 3D face surface model to obtain texture mesh data. That is, the transformed 3D face surface model is converted into 2D to fit the 2D face picture data, and the generated texture mesh (Texture-Mesh) has the same coordinates as the 2D face picture data, thereby enabling texture mapping.
At this time, one or more 2D face picture data may be provided as input. When a plurality of 2D face picture data taken from various directions are input, the visibility priorities between the texture mesh created from each previously input image and the 3D face surface model, and between the texture mesh created from the currently input image and the 3D face surface model, are compared, and the highest visibility priority is maintained as the visibility rank to prepare the data for texture mapping.
In performing the texture mapping process with the one or more 2D face picture data and the corresponding texture meshes obtained through the above process, textures with high visibility priority are selected using the visibility rank and mapped onto the 3D face surface model to obtain a textured face surface (Textured Face Surface).
As a result, textured face surface data as shown in FIG. 7 is obtained. Because it uses the 3D face surface data generated in the preceding process, it is based on the 3D head image and shares its coordinate system. It can therefore be used for all tasks based on the coordinate system of the 3D head image, including face change simulation that reflects, in real time, changes of facial shape due to tooth movement. That is, a realistic, stereoscopic 3D face model similar to the actual face is created for dental procedure simulation.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. It will be understood that various modifications and changes may be made without departing from the scope of the appended claims.
10: Mark points displayed on the 2D face photograph data
20: Mark points displayed on the 3D face surface model
Claims (4)
A first step of acquiring 2D facial photograph data and a 3D head image, and restoring a 3D facial surface model using the obtained 3D head image;
A second step of displaying mark points on the obtained 2D facial photograph data and displaying mark points on the 3D facial surface model;
A third step of generating a transformed 3D facial surface model matching the facial appearance of the 2D facial photograph data, by moving, rotating, and scaling the 3D facial surface model based on the displayed mark points so that its position and size fit the 2D facial photograph data;
A fourth step of generating a plane model by projecting the transformed 3D facial surface model onto a texture plane based on the 2D facial photograph data, and performing a parameterization process on the 2D facial photograph data and the transformed 3D facial surface model to generate a texture mesh (Texture-Mesh);
A fifth step of performing a visibility rank process that determines visibility priorities by comparing the normal vectors of the 3D facial surface model with the normal vectors of the texture mesh; and
A sixth step of performing texture mapping by collecting only the best-visibility textures identified by the visibility rank process, using the 2D facial photograph data, the 3D facial surface model, and the texture mesh generated through the parameterization process;
whereby a face model for the dental procedure simulation is generated.
In the fifth step, the visibility rank process performs a visibility check for each of one or more 2D facial photograph data by comparing the angle formed between the normal vector of a predetermined region for texture mapping on the 3D facial surface model and the normal vector of the texture mesh, and assigns a visibility rank to each region, so that the 2D facial photograph data having the highest visibility in the corresponding region can be identified and texture mapping using the data with the best visibility can be performed.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020140084001A KR101744079B1 (en) | 2014-07-04 | 2014-07-04 | The face model generation method for the Dental procedure simulation |
PCT/KR2015/006976 WO2016003258A1 (en) | 2014-07-04 | 2015-07-06 | Face model generation method for dental procedure simulation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020140084001A KR101744079B1 (en) | 2014-07-04 | 2014-07-04 | The face model generation method for the Dental procedure simulation |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20160004865A (en) | 2016-01-13
KR101744079B1 (en) | 2017-06-09
Family
ID=55019687
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020140084001A KR101744079B1 (en) | 2014-07-04 | 2014-07-04 | The face model generation method for the Dental procedure simulation |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR101744079B1 (en) |
WO (1) | WO2016003258A1 (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106127122A (en) * | 2016-06-16 | 2016-11-16 | 厦门道拓科技有限公司 | Head portrait detection method based on face action identification, system and intelligent terminal |
CN108876886B (en) * | 2017-05-09 | 2021-07-27 | 腾讯科技(深圳)有限公司 | Image processing method and device and computer equipment |
CN107748869B (en) * | 2017-10-26 | 2021-01-22 | 奥比中光科技集团股份有限公司 | 3D face identity authentication method and device |
CN108717730B (en) * | 2018-04-10 | 2023-01-10 | 福建天泉教育科技有限公司 | 3D character reconstruction method and terminal |
KR102289610B1 (en) * | 2019-05-09 | 2021-08-17 | 오스템임플란트 주식회사 | Method and apparatus for providing additional information of teeth |
CN110428491B (en) * | 2019-06-24 | 2021-05-04 | 北京大学 | Three-dimensional face reconstruction method, device, equipment and medium based on single-frame image |
CN112819741B (en) * | 2021-02-03 | 2024-03-08 | 四川大学 | Image fusion method and device, electronic equipment and storage medium |
WO2022191575A1 (en) * | 2021-03-09 | 2022-09-15 | 고려대학교 산학협력단 | Simulation device and method based on face image matching |
CN113112617B (en) * | 2021-04-13 | 2023-04-14 | 四川大学 | Three-dimensional image processing method and device, electronic equipment and storage medium |
CN115120372B (en) * | 2022-05-25 | 2023-04-14 | 北京大学口腔医学院 | Three-dimensional prosthesis form and position determining method |
CN118334294B (en) * | 2024-06-13 | 2024-08-20 | 四川大学 | Parameter domain interpolation human face deformation measurement method based on quasi-conformal mapping |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100654396B1 (en) * | 2005-12-15 | 2006-12-06 | 제양수 | 3d conversion apparatus for 2d face image and simulation apparatus for hair style using computer |
KR20090092473A (en) * | 2008-02-27 | 2009-09-01 | 오리엔탈종합전자(주) | 3D Face Modeling Method based on 3D Morphable Shape Model |
KR100942026B1 (en) * | 2008-04-04 | 2010-02-11 | 세종대학교산학협력단 | Makeup system and method for virtual 3D face based on multiple sensation interface |
JP2011039869A (en) * | 2009-08-13 | 2011-02-24 | Nippon Hoso Kyokai <Nhk> | Face image processing apparatus and computer program |
KR101397476B1 (en) * | 2012-11-28 | 2014-05-20 | 주식회사 에스하이텍 | Virtual cosmetic surgery method by 3d virtual cosmetic surgery device |
2014
- 2014-07-04 KR KR1020140084001A patent/KR101744079B1/en active IP Right Grant
2015
- 2015-07-06 WO PCT/KR2015/006976 patent/WO2016003258A1/en active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2000008415A1 (en) | 1998-08-05 | 2000-02-17 | Cadent Ltd. | Imaging a three-dimensional structure by confocal focussing an array of light beams |
JP2011004796A (en) | 2009-06-23 | 2011-01-13 | Akita Univ | Jaw oral cavity model using optical molding technique and method for manufacturing the same |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20230137779A (en) | 2022-03-22 | 2023-10-05 | 주식회사 레이 | method of producing 3-dimensional face scan data |
KR20240109416A (en) | 2023-01-04 | 2024-07-11 | 주식회사 레이 | simulation method for prosthodontics |
Also Published As
Publication number | Publication date |
---|---|
WO2016003258A1 (en) | 2016-01-07 |
KR20160004865A (en) | 2016-01-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101744079B1 (en) | The face model generation method for the Dental procedure simulation | |
US11735306B2 (en) | Method, system and computer readable storage media for creating three-dimensional dental restorations from two dimensional sketches | |
EP3818500B1 (en) | Automated determination of a canonical pose of a 3d objects and superimposition of 3d objects using deep learning | |
CN106023288B (en) | A kind of dynamic scapegoat's building method based on image | |
JP4979682B2 (en) | Method and system for pre-operative prediction | |
KR101744080B1 (en) | Teeth-model generation method for Dental procedure simulation | |
KR101560508B1 (en) | Method and arrangement for 3-dimensional image model adaptation | |
US20130107003A1 (en) | Apparatus and method for reconstructing outward appearance of dynamic object and automatically skinning dynamic object | |
CN102663818A (en) | Method and device for establishing three-dimensional craniomaxillofacial morphology model | |
WO2018075053A1 (en) | Object pose based on matching 2.5d depth information to 3d information | |
CN101339670A (en) | Computer auxiliary three-dimensional craniofacial rejuvenation method | |
CN114450719A (en) | Human body model reconstruction method, reconstruction system and storage medium | |
CN113628327A (en) | Head three-dimensional reconstruction method and equipment | |
CN113989434A (en) | Human body three-dimensional reconstruction method and device | |
Li et al. | ImTooth: Neural Implicit Tooth for Dental Augmented Reality | |
JP2023505615A (en) | Face mesh deformation with fine wrinkles | |
CN116912433B (en) | Three-dimensional model skeleton binding method, device, equipment and storage medium | |
Venkateswaran et al. | 3D design of orthotic casts and braces in medical applications | |
Noh et al. | Retouch transfer for 3D printed face replica with automatic alignment | |
KR102705610B1 (en) | Stereoscopic image capture device and method for multi-joint object based on multi-view camera | |
Catherwood et al. | 3D stereophotogrammetry: post-processing and surface integration | |
Takács et al. | Facial modeling for plastic surgery using magnetic resonance imagery and 3D surface data | |
JP2022139243A (en) | Generation device, generation method, and program | |
de Sousa Azevedo | 3d object reconstruction using computer vision: Reconstruction and characterization applications for external human anatomical structures | |
Ali et al. | Hair Shape Modeling from Video Captured Images and CT Data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E701 | Decision to grant or registration of patent right |