WO2016200201A1 - Device and method for ear reconstruction and necessity for reconstruction - Google Patents

Device and method for ear reconstruction and necessity for reconstruction

Info

Publication number
WO2016200201A1
WO2016200201A1 (PCT/KR2016/006178)
Authority
WO
WIPO (PCT)
Prior art keywords
ear
information
image
modeling data
angle
Prior art date
Application number
PCT/KR2016/006178
Other languages
English (en)
Korean (ko)
Inventor
최태현
김성완
김희찬
김석화
이치원
김명준
전병준
박우정
Original Assignee
서울대학교산학협력단
Priority date
Filing date
Publication date
Application filed by 서울대학교산학협력단 filed Critical 서울대학교산학협력단
Publication of WO2016200201A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61F FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F2/00 Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
    • A61F2/50 Prostheses not implantable in the body
    • A61F2/5044 Designing or manufacturing processes
    • A61F2/5046 Designing or manufacturing processes for designing or making customized prostheses, e.g. using templates, finite-element analysis or CAD-CAM techniques
    • A61F2002/5047 Designing or manufacturing processes for designing or making customized prostheses using mathematical models
    • A61F2002/5049 Computer aided shaping, e.g. rapid prototyping
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B29 WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
    • B29C SHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
    • B29C67/00 Shaping techniques not covered by groups B29C39/00 - B29C65/00, B29C70/00 or B29C73/00

Definitions

  • the present invention relates to an apparatus and method for precisely scanning each part of the ear to generate 3D modeling data and to generate artificial ears based thereon.
  • Microtia is an abnormality in which one or both ears are much smaller than normal and deformed. As a cause of microtia, the importance of environmental factors has already been identified, but genetic factors are also expected to be significant. Microtia occurs in about one out of every 7,000 to 8,000 births; about 95% of cases appear in only one ear and about 5% in both ears.
  • Ear scanning and ear reconstruction techniques are essential to the treatment of such microtia.
  • since the structure of the fossa, the concha, and the external auditory meatus is hidden by the helix, the antihelix, and the tragus, there is a problem that these internal structures are not scanned correctly.
  • the present invention is intended to solve the above-described problems of the prior art by providing an apparatus and method that can generate 3D modeling data for the ear by accurately photographing and scanning the internal structure of the ear.
  • an object of the present invention is to provide an apparatus and method capable of increasing the accuracy of the 3D modeling data required for artificial ear generation by accurately controlling the position and Euler angle used to scan each part of the ear, obtaining a 2D image of each part through photographing and scanning, and converting the 2D image into a 3D image using depth information calculated based on the shape information obtained through the scanning.
  • another object of the present invention is to provide an apparatus and method that increase the accuracy of matching the data for each part of the ear by collecting Cartesian coordinates and Euler angles for the corresponding parts during the photographing and scanning of each part of the ear and, based on these, matching the 3D images to generate 3D modeling data corresponding to the ear model.
  • another object of the present invention is to provide an apparatus and method for producing artificial ears suited to a patient by 3D printing the 3D modeling data using a mixture of biocompatible materials and soluble crystals, such as salts, for forming porous structures.
  • An object of the present disclosure is to provide an apparatus and method for producing an artificial ear of a desired shape by providing an editing interface capable of editing 3D modeling data for artificial ear generation.
  • the artificial ear generating device may include an information acquisition unit for photographing and scanning each part of the ear; a position adjusting unit, connected to the information acquisition unit, for changing the photographing and scanning position and angle and outputting information on the position and angle; a control unit provided with the information on the position and angle, the shape information on each part of the ear generated through the scanning, and the 2D images generated through the photographing; and an image processing unit for generating 3D images based on the shape information, the position and angle information, and the 2D images, and merging the 3D images based on the position and angle information.
  • an artificial ear generating apparatus includes an information acquisition unit for photographing and scanning each part of the ear; a position adjusting unit connected to the information acquisition unit to change the photographing and scanning position and angle for each part of the ear and to output information on the position and angle of the information acquisition unit; a controller configured to apply an operation signal to the position adjusting unit to adjust the photographing and scanning position and angle of the information acquisition unit, and to receive the position and angle information for each part of the ear, the shape information on each part of the ear generated by the scanning of the information acquisition unit, and the 2D images generated by the photographing of the information acquisition unit; and an image processing unit configured to generate 3D images based on the shape information on each part of the ear, the position and angle information, and the 2D image corresponding to each part of the ear, and to generate 3D modeling data by merging the 3D images based on the position and angle information.
  • the position adjusting unit may include a linear stage, which provides the Cartesian coordinates and the Euler angle as the information on the position and the angle, and an actuator to whose end the information acquisition unit is connected.
  • the position adjusting unit may be a six-axis robot arm to whose distal end the information acquisition unit is connected.
  • the control unit may include a feature point extraction module for extracting feature points of the photographed part from the 2D image acquired through the photographing of the information acquisition unit; and an operation signal generation module configured to generate an operation signal for adjusting the position and angle of the information acquisition unit by operating the position adjusting unit based on a comparison between the extracted feature points and previously stored feature point information for each part of the ear.
  • a marker is attached to each part of the ear, and the feature point extracting module may extract the feature point by extracting the marker from the photographed 2D image.
  • each part of the ear may be at least one of, for example, the fossa, concha, external auditory meatus, helix, antihelix and tragus.
  • the image processing unit may include a depth information calculation module that calculates depth information of the 2D image based on the shape information of the 2D image; a conversion module for converting the 2D image into a 3D image based on the depth information of the 2D image; and a matching module for generating 3D modeling data by matching the 3D images based on the position and angle information.
  • when the 3D modeling data is for the healthy ear, the image processing unit may further include a transformation module for converting the 3D modeling data for the healthy ear into 3D modeling data corresponding to the affected ear by applying a mirroring transformation technique.
  • the image processing unit may further include an editing module that provides an editing interface for editing the 3D modeling data and converts the 3D modeling data based on information input through the editing interface.
  • the artificial ear generating device may further include an output unit configured to output an ear corresponding to the 3D modeling data through 3D printing using a predetermined biocompatible material.
  • the output unit may output an artificial ear corresponding to the 3D modeling data through 3D printing using a non-biocompatible material.
  • the output unit may produce a mold for the output artificial ear, and output the artificial ear based on the produced mold.
  • the mold may be manufactured based on the 3D modeling data.
  • the output unit may output an artificial ear through a plastic molding method for a predetermined biocompatible material.
  • the biocompatible material may be a mixture of at least one of polycaprolactone (PCL), polyglycolic acid (PGA), polylactic acid (PLA), poly(D,L-lactic-co-glycolic acid) (PLGA), poly(alkyl cyanoacrylate) (PAC), hyaluronic acid (HA), hydrogel, titanium, tricalcium phosphate, hydroxylapatite, silicone, acrylates, collagen, gelatin, chitosan, high-density polyethylene (HDPE), low-density polyethylene (LDPE), polyethylene (PE), linear low-density polyethylene (LLDPE), medium-density polyethylene (MDPE), ultra-high-molecular-weight polyethylene (UHMWPE), polymethyl methacrylate (PMMA), polytetrafluoroethylene (PTFE), polydimethylsiloxane (PDMS) and fibrinogen.
  • an artificial ear generating method, using an information acquisition unit that provides 2D images and shape information through the photographing and scanning of each part of the ear and a position adjusting unit connected to the information acquisition unit, may include: controlling the position adjusting unit for photographing and scanning; collecting the 2D image of each part of the ear, the information on the position and angle, and the shape information obtained through the scanning; generating a 3D image of each part of the ear based on the obtained 2D image, the position and angle information, and the shape information; and generating 3D modeling data by matching the 3D images based on the position and angle information.
  • the position and angle information may be generated through the Cartesian coordinates and Euler angles, or through a coordinate system generated based on the Cartesian coordinates and Euler angles.
  • the controlling may include extracting feature points of the photographed part from the 2D image acquired through the photographing of the information acquisition unit; and adjusting the position and angle of the information acquisition unit by operating the position adjusting unit based on a comparison of the extracted feature points with the feature point information for each part of the ear.
  • a marker is attached to each part of the ear, and the extracting of the feature point may extract the feature point by extracting the marker from the photographed 2D image.
  • the generating of the 3D modeling data may include calculating depth information of the 2D image based on shape information of the 2D image; Converting the 2D image into a 3D image based on depth information of the 2D image; And generating 3D modeling data by matching the 3D image based on the position and angle information.
  • the artificial ear generating method may further include converting the 3D modeling data for the healthy ear into 3D modeling data corresponding to the affected ear by applying a mirroring transformation technique when the 3D modeling data is for the healthy ear.
  • the artificial ear generating method may include providing an editing interface capable of editing the 3D modeling data; and converting the 3D modeling data based on the information input through the editing interface.
  • the artificial ear generating method may include outputting an ear corresponding to the 3D modeling data through 3D printing using a predetermined biocompatible material.
  • the artificial ear generating method may include outputting an artificial ear corresponding to the 3D modeling data through 3D printing using a non-biocompatible material.
  • the outputting step may produce a mold for the output artificial ear, and output the artificial ear based on the manufactured mold.
  • the outputting step may output the artificial ear through a plastic molding method for a predetermined biocompatible material.
  • according to any one of the above-described problem-solving means of the present invention, the accuracy of the 3D modeling data required for artificial ear generation may be increased by precisely controlling the position and Euler angle used to scan each part of the ear, obtaining a 2D image of each part through photographing and scanning, and converting the 2D image into a 3D image using depth information calculated based on the shape information.
  • in addition, the present invention increases accuracy in matching the data for each part of the ear by collecting Cartesian coordinates and Euler angles for the corresponding parts during the photographing and scanning of each part of the ear and, based on these, matching the 3D images to generate 3D modeling data corresponding to the ear model.
  • the present application can produce artificial ears suited to a patient by 3D printing the 3D modeling data using a biocompatible material.
  • the present application provides an editing interface that can edit the 3D modeling data for artificial ear generation, so that an artificial ear of the shape desired by the user can be produced.
  • FIG. 1 is a view showing an artificial ear generating device according to an embodiment of the present application.
  • FIG. 2 is a block diagram showing the detailed configuration of the artificial ear generating device according to an embodiment of the present application.
  • Figure 3 is an exemplary view of an attached marker as applied to the artificial ear generating device according to an embodiment of the present application.
  • FIGS. 4A and 4B are diagrams for describing a mirroring transformation technique applied to an artificial ear generating apparatus according to an embodiment of the present application.
  • FIG. 5 is a flowchart schematically illustrating an artificial ear generation process according to an exemplary embodiment of the present application.
  • FIG. 6 is a flowchart illustrating an artificial ear generation process in detail according to an exemplary embodiment of the present application.
  • FIG. 7 is a flowchart illustrating an artificial ear generation process according to another exemplary embodiment of the present application.
  • FIG. 8 is a flowchart illustrating an artificial ear generation process according to another exemplary embodiment of the present application.
  • FIG. 9 is a flowchart illustrating an artificial ear generation process according to another exemplary embodiment of the present application.
  • FIG. 1 is a view showing an artificial ear generating device according to an embodiment of the present application
  • Figure 2 is a block diagram showing a detailed configuration of the artificial ear generating device according to an embodiment of the present application.
  • the artificial ear generating apparatus may include an information obtaining unit 100, a position adjusting unit 110, a control unit 120, an image processing unit 130, and an output unit 140.
  • the information acquisition unit 100 may include an image capturing means, for example, a camera 102 and a 3D scanner 104, to photograph or scan an ear and output the same through the control unit 120.
  • the information acquisition unit 100 according to an embodiment of the present application is physically connected to the position adjusting unit 110 and is moved in the X, Y and Z axis directions by the position adjusting unit 110; it may photograph and scan the corresponding part of the target ear to provide shape information and an image of the ear to the controller 120.
  • the camera 102 of the information acquisition unit 100 may photograph a portion of the ear to be scanned and provide the captured image to the controller 120.
  • the 3D scanner 104 may obtain the shape information of the object by projecting laser, infrared, or white light onto the ear, and may provide the same to the controller 120.
  • here, the ear may be the actual ear of a living person.
  • alternatively, the ear may be a plaster cast modeled after the real ear, and the information acquisition unit 100 may photograph and scan each part of such a cast.
  • the plaster cast can be made by covering the area around the ear with cotton wool, fixing a rectangular acrylic frame around the ear, pouring in alginate and waiting until it hardens, then separating the alginate, which carries the engraved (negative) shape of the ear, and injecting plaster into the engraved alginate.
  • the method of shaping a living ear is not limited to the plaster cast shown in the example, and the ear model can be made in various ways.
  • a cast of a living ear can be made directly by an expert, or it can be made using a mechanical device.
  • the position adjusting unit 110 may move the physically connected information acquisition unit 100 in the X, Y, and Z axis directions or change its Euler angles (yaw, pitch, roll) under the control of the controller 120.
  • the position controller 110 may provide the controller 120 with position information of the information acquirer 100, that is, information about a position and an angle at which the information acquirer 100 captures and scans.
  • the position controller 110 may provide the controller 120 with Cartesian coordinates and an Euler angle with respect to the ear region to be photographed and scanned by the information acquirer 100.
  • the position adjuster 110 has a structure in which an actuator 114, for example a hip joint actuator or ball joint actuator, is connected to the XYZ linear stage 112; the actuator 114 may be physically connected to the information acquisition unit 100 to move it to various angles and positions, thereby photographing and scanning the target ear.
  • that is, the position adjusting unit 110 changes the position of the information acquisition unit 100 by moving the XYZ linear stage 112 under the control of the control unit 120, and changes the photographing and scanning angle of the information acquisition unit 100 through the driving of the actuator 114 so that photographing and scanning may be performed at various angles.
  • the position adjusting unit 110 obtains the Cartesian coordinates of the photographed and scanned area using the XYZ linear stage 112, obtains information about the Euler angles through the actuator 114, and may provide the Cartesian coordinate information and Euler angles to the controller 120.
  • Euler angle means roll, pitch, yaw, etc.
  • the Cartesian coordinates are the relative X, Y, Z coordinates from the origin, that is, the position information of the camera 102 and the 3D scanner 104.
  • here, the pitch is the angle of rotation about an axis in the horizontal plane perpendicular to the direction of movement of the XYZ linear stage 112, the roll is the angle of rotation about an axis in the horizontal plane parallel to the direction of movement, and the yaw is the angle of rotation about an axis in the vertical plane perpendicular to the direction of movement.
  • a new coordinate system may be defined using Cartesian coordinates and Euler angles.
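The patent does not fix a particular Euler convention; purely as an illustrative sketch, the following assumes a Z-Y-X (yaw-pitch-roll) convention and shows how the recorded Cartesian coordinates and Euler angles could define such a coordinate system as a homogeneous transform (function names are hypothetical):

```python
import numpy as np

def euler_to_rotation(yaw, pitch, roll):
    """Build a rotation matrix from Z-Y-X (yaw-pitch-roll) Euler angles in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about z
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    return Rz @ Ry @ Rx

def pose_to_frame(position, yaw, pitch, roll):
    """Combine the Cartesian coordinates and Euler angles of the information
    acquisition unit into a 4x4 homogeneous transform (the 'new coordinate system')."""
    T = np.eye(4)
    T[:3, :3] = euler_to_rotation(yaw, pitch, roll)
    T[:3, 3] = position
    return T
```

A pose recorded by the position adjusting unit would then map scanner-frame points into the common model frame by multiplying with this transform.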
  • the position adjusting unit 110 is implemented using a combination of the XYZ linear stage 112 and the actuator 114, but may be implemented using a 6-axis robot arm. That is, by attaching the information acquisition unit 100 to the end-effector of the six-axis robot arm to obtain Cartesian coordinates and Euler angles, the camera 102 and the 3D scanner 104 of the information acquisition unit 100 It is also possible to change the shooting and scanning position and angle.
  • the controller 120 may calculate depth information of the image photographed by the camera 102 (hereinafter, the '2D image') based on the Cartesian coordinates and Euler angles provided from the position controller 110 and the shape information obtained through the scanning of the 3D scanner 104. As described above, a new type of coordinate system defined by the Cartesian coordinates and Euler angles, or defined by other parameters, may be used instead of the Cartesian coordinates and Euler angles themselves.
  • the control unit 120 may define a new coordinate system using the Cartesian coordinates and Euler angles provided from the position adjusting unit 110, and may obtain depth information of the photographed image using the new coordinate system, the Cartesian coordinates, the Euler angles, and the shape information from the 3D scanner 104.
  • the controller 120 may provide an operation signal for adjusting the position controller 110 by comparing the 2D image provided from the camera 102 with the feature point information for each ear region stored in the storage medium 122. Based on the operation signal, the XYZ linear stage 112 and the actuator 114 may be operated to adjust the photographing and scanning positions and angles of the information acquisition unit 100.
  • controller 120 may store the 2D image, the shape information, the Cartesian coordinates, and the Euler angle in the storage medium 122.
  • the controller 120 may include a storage medium 122, a feature point extraction module 124, an operation signal generation module 126, a control module 128, and the like.
  • the storage medium 122 stores comparison images and feature point information about the internal structure of each part of the ear, such as the fossa, concha, external auditory meatus, helix, antihelix, and tragus.
  • the comparison image and the feature point information may be used for comparison with an image provided from the camera 102 of the information acquisition unit 100.
  • the feature point extraction module 124 may extract a feature point of a portion photographed from the 2D image received from the camera 102 of the information acquisition unit 100, and provide the extracted feature point to the operation signal generation module 126.
  • each major part of the ear may be, for example, the fossa, concha, external auditory meatus, helix, antihelix, or tragus.
  • the feature point extraction module 124 may extract feature points for each part of the ear through marker extraction from the 2D image.
  • the marker may be attached to each part of the ear in the form of a predetermined recognition character, for example a number, or may have a circular shape recognizable when an image is taken by the camera 102, but is not limited thereto.
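The patent does not specify a detection algorithm. As a hedged illustration only, for the simplest case of a single bright circular marker against a darker background, extraction could reduce to a threshold-and-centroid step (`extract_marker_centroid` is a hypothetical name; real marker or character recognition would need a fuller vision pipeline):

```python
import numpy as np

def extract_marker_centroid(image, threshold=200):
    """Return the (row, col) centroid of pixels at least as bright as `threshold`,
    treated as the feature point of one attached marker; None if no marker pixels."""
    rows, cols = np.nonzero(image >= threshold)
    if rows.size == 0:
        return None
    return float(rows.mean()), float(cols.mean())
```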
  • the operation signal generation module 126 generates an operation signal for adjusting the position and angle of the information acquisition unit 100 by comparing the extracted feature points with the feature point information for each part of the ear stored in the storage medium 122, and may provide the operation signal to the control module 128.
  • the control module 128 may output a control signal corresponding to the operation signal to the position controller 110 to control the XYZ linear stage 112 and the actuator 114 of the position controller 110. Accordingly, the position adjusting unit 110 may drive the XYZ linear stage 112 and the actuator 114 to adjust the photographing and scanning position and angle of the information acquisition unit 100.
  • the control module 128 may associate the Cartesian coordinates and Euler angles input from the position controller 110 with the 2D image and shape information input from the information acquisition unit 100, and store them in the storage medium 122.
  • the image processing unit 130 calculates depth information of each 2D image based on the data stored in the storage medium 122, that is, the Cartesian coordinates, Euler angles and shape information, and may convert the 2D image into a 3D image based on the calculated depth information.
  • the image processing unit 130 may generate 3D modeling data for the ear by matching each of the converted 3D images based on Cartesian coordinates and Euler angles.
  • the image processing unit 130 may display 3D modeling data and an editing interface capable of editing the same on a display unit (not shown) to convert the 3D modeling data according to a user's editing interface manipulation.
  • the image processing unit 130 as described above may include a depth information calculation module 132, a conversion module 134, a matching module 135, a transformation module 136, and an editing module 138.
  • the depth information calculation module 132 may calculate depth information for each 2D image using the Cartesian coordinates, Euler angles, and shape information matched with the 2D image, and may provide the calculated depth information to the conversion module 134.
  • the conversion module 134 may convert each 2D image into a 3D image based on the depth information.
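A minimal sketch of such a 2D-to-3D conversion, assuming a pinhole camera model with intrinsics fx, fy, cx, cy (the patent does not specify how the depth information is parameterized; this is illustrative only):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an HxW depth map into camera-frame 3D points (pinhole model).

    Depths are measured along the optical axis; zero entries (no data) are skipped.
    Returns an (N, 3) array of [x, y, z] points.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))  # pixel column/row grids
    valid = depth > 0
    z = depth[valid]
    x = (us[valid] - cx) * z / fx
    y = (vs[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```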
  • the matching module 135 may generate 3D modeling data corresponding to the ear photographed and scanned by the information acquisition unit 100 by matching the 3D images with one another based on the Cartesian coordinates and the Euler angles.
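The matching step can be sketched as rigidly transforming each part's points into a common model frame using the pose recorded for that part, then concatenating the results. This is a stand-in for full registration (names and the pose representation are assumptions, not the patent's specification):

```python
import numpy as np

def merge_scans(scans):
    """Merge per-part point clouds into one model frame.

    `scans` is a list of (points, R, t): `points` is an (N, 3) array in the
    scanner frame, and R (3x3 rotation) and t (translation) give the scanner
    pose for that part, derived from the recorded Cartesian coordinates and
    Euler angles. Returns a single stacked (sum(N), 3) array.
    """
    merged = [pts @ R.T + t for pts, R, t in scans]
    return np.vstack(merged)
```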
  • the transformation module 136 may generate 3D modeling data for the affected ear by mirroring the 3D modeling data for the healthy ear.
  • the transformation module 136 may generate the 3D modeling data of the affected ear by applying a mirroring transformation technique based on a preset reference point, for example, the center of the face of the ear scanning subject.
  • that is, the transformation module 136 may use the mirroring transformation technique to transform the 3D modeling data for the healthy ear into 3D modeling data for the affected ear as shown in FIG. 4B.
  • for example, the healthy ear may be the left ear and the affected ear the right ear, or the healthy ear may be the right ear and the affected ear the left ear.
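The mirroring transformation itself reduces to a reflection across the sagittal plane through the preset reference point; a minimal sketch, assuming that plane is x = x0 in the model frame (the axis and function name are illustrative assumptions):

```python
import numpy as np

def mirror_ear(points, x0=0.0):
    """Mirror a healthy-ear point cloud across the sagittal plane x = x0
    (e.g. through the mid-face reference point) to model the affected side."""
    mirrored = points.copy()
    mirrored[:, 0] = 2.0 * x0 - mirrored[:, 0]  # reflect the x coordinate
    return mirrored
```

For a triangle mesh rather than a bare point cloud, the face winding order would also need to be flipped after such a reflection to keep outward-facing normals.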
  • the editing module 138 provides the 3D modeling data, such as the 3D modeling data transformed by the transformation module 136 or the 3D modeling data generated through the matching of the matching module 135, together with an editing interface for editing it, and may convert and provide the 3D modeling data based on the information input through the editing interface.
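The patent leaves the concrete edits open; as an illustrative stand-in only, an edit coming from the interface could be a simple parametric transform of the modeling data, such as scaling about the centroid plus a translation (hypothetical function and parameters):

```python
import numpy as np

def apply_edit(points, scale=1.0, offset=(0.0, 0.0, 0.0)):
    """Apply a simple user edit to (N, 3) modeling-data points: uniform scaling
    about the centroid plus a translation. A placeholder for richer per-part edits."""
    centroid = points.mean(axis=0)
    return (points - centroid) * scale + centroid + np.asarray(offset)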
  • here, the editing interface may mean an interface through which the internal structure of each part of the ear, such as the fossa, concha, external auditory meatus, helix, antihelix, and tragus, can be changed.
  • the output unit 140 may receive the 3D modeling data and output an ear corresponding to the 3D modeling data through 3D printing.
  • for example, the output unit 140 may shape the ear from liquid or powder biocompatible materials based on the 3D modeling data using an additive (laminated) manufacturing method, a photocurable resin molding method, a laser sintering method, a powder jetting method, or the like; various methods may be adopted without being limited thereto.
  • the biocompatible materials used in the output unit 140 may be, but are not limited to, a mixture of at least one of the materials for generating a cellular scaffold, such as polycaprolactone (PCL), polyglycolic acid (PGA), polylactic acid (PLA), poly(D,L-lactic-co-glycolic acid) (PLGA), poly(alkyl cyanoacrylate) (PAC), hyaluronic acid (HA), hydrogel, titanium, tricalcium phosphate, hydroxylapatite, silicone, acrylates, collagen, gelatin, chitosan, high-density polyethylene (HDPE), low-density polyethylene (LDPE), polyethylene (PE), linear low-density polyethylene (LLDPE), medium-density polyethylene (MDPE), ultra-high-molecular-weight polyethylene (UHMWPE), polymethyl methacrylate (PMMA), polytetrafluoroethylene (PTFE), polydimethylsiloxane (PDMS) and fibrinogen.
  • the output unit 140 has been described as included in the artificial ear generating device, but the output unit 140 may also be connected to the artificial ear generating device through a network, in which case the 3D modeling data can be sent over the network and output there.
  • the ear corresponding to the 3D modeling data output through 3D printing may be used as a reference model when carving an ear from the patient's rib cartilage.
  • the sample ear may be output by 3D printing using a general molding material rather than a biocompatible material.
  • since an ear output through 3D printing using a biocompatible material can serve as a scaffold for the affected ear, the ear can be reconstructed without using the patient's rib cartilage.
  • the output unit 140 may output an artificial ear that looks like an actual ear using a predetermined non-biocompatible material, and may manufacture a mold from it.
  • the produced mold may be based on 3D modeling data.
  • the non-biocompatible material may be a mixture of at least one of non-biocompatible sand, plastic, gypsum, lead, alloys, and metals.
  • the output unit 140 may output an artificial ear that looks like an actual ear using a biocompatible material, and may manufacture a mold from it.
  • the manufacturing method of the mold may be one of a bottom molding method, a hybrid molding method, an assembly molding method, a rotation molding method, an even mold molding method, and a three-dimensional printing method.
  • in the bottom molding method, for example, the printed artificial ear may be fixed between two sand-filled frames, the two frames may be overlapped, and a mold for the ear model may then be manufactured.
  • a three-dimensional printing technique may also be used to print the mold for the artificial ear directly. Since an artificial ear requires a porous structure on the scale of hundreds of micrometers, a high-resolution 3D printer can be used to realize it.
  • the material of the mold is not limited to a specific material.
  • for example, the ear output by 3D printing may be placed in a suitable container, a mixture of polydimethylsiloxane (PDMS) base and curing agent in a set ratio may be poured over it, air bubbles may be removed in a desiccator, and the PDMS may then be cured by incubating at 70 °C for a set time.
  • the PDMS mold can then be completed by cutting or puncturing the PDMS to remove the 3D-printed ear, and by applying an oxygen plasma treatment to the cut surface of the intaglio-shaped PDMS.
  • the mold may be made of a rigid material, but according to various embodiments of the present invention, it may be a flexible mold.
  • alternatively, an artificial ear printed with a water-soluble filament such as PVA may be dissolved in water to form the intaglio directly, without first making a mold.
  • a material that is soluble in a specific solvent, such as PLA, may also be used, and various methods may be employed to remove the printed artificial ear from the mold.
  • the output unit 140 may manufacture an artificial ear corresponding to 3D modeling data through a plastic molding method for a predetermined biocompatible material.
  • when manufacturing an artificial ear by melting a biocompatible material into a pre-made mold, the output unit 140 may grind a material that is soluble only in water or a specific solvent, such as salt crystals (in this case, a material with a very high melting point), to several hundred micrometers, mix it with the biocompatible material, inject the mixture into the mold, and then dissolve the crystals in the solvent to produce an artificial ear with a porous structure.
  • alternatively, a porous structure can be manufactured by generating many bubbles in the biocompatible material immediately before it is injected into the mold and then hardening it.
  • a porous structure may be essential.
  • for this, a pre-formed material made by adding the soluble crystals to the biocompatible material may be used, or the soluble crystals may be added to the molten biocompatible material just before it is put into the mold.
  • a method of generating bubbles in the molten state of the biocompatible material and rapidly solidifying the same may be used.
  • the plastic molding method may be at least one of compression molding, transfer molding, injection molding, extrusion molding, laminated molding, blow molding, vacuum molding, and rotational molding, but is not limited thereto.
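The salt-leaching approach above sets the scaffold's porosity through the volume fraction of the porogen before it is dissolved out. A minimal sketch of that relationship, assuming NaCl and PCL densities (the numeric values are illustrative, not from the application):

```python
# Porosity of a salt-leached scaffold: the volume fraction occupied by
# the porogen before it is dissolved out. Densities are illustrative
# assumptions (NaCl ~2.165 g/cm^3, PCL ~1.145 g/cm^3), not values from
# the application.

def leached_porosity(salt_mass, polymer_mass,
                     salt_density=2.165, polymer_density=1.145):
    """Return the pore volume fraction after the salt is dissolved."""
    v_salt = salt_mass / salt_density            # porogen volume
    v_polymer = polymer_mass / polymer_density   # remaining scaffold volume
    return v_salt / (v_salt + v_polymer)

# Example: a 70/30 salt-to-polymer mix by mass
print(round(leached_porosity(70.0, 30.0), 3))  # -> 0.552
```

Because salt is denser than the polymer, reaching high porosity requires a salt mass fraction well above the target pore fraction.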
  • FIG. 5 is a flowchart schematically illustrating an artificial ear generation process according to an exemplary embodiment of the present application.
  • the process may include a control step S210, a data collection step S220, a 3D modeling data generation step S230, and an output step S240.
  • the control step S210 controls the position and angle of the camera 102 and the 3D scanner 104 of the information acquisition unit 100 for photographing and scanning.
  • in the control step S210, the position and angle of the camera 102 and the 3D scanner 104 are controlled so that the internal structure of each part of the ear, such as the fossa, concha, external auditory meatus, helix, antihelix, and tragus, can be read. Specifically, the control step S210 determines, on the basis of the 2D image captured by the camera 102, whether the part of the ear to be photographed and scanned is well displayed, and changes the position and angle of the camera 102 and the 3D scanner 104 through control of the position adjusting unit 110.
  • in the data collection step S220, when the internal structure of the part to be photographed and scanned is readable, the Cartesian coordinates and Euler angles corresponding to that position and angle are received from the position adjusting unit 110.
  • the 2D image and the shape information output from the camera 102 and the 3D scanner 104 of the information acquisition unit 100 are collected and stored.
  • the 3D modeling data generation step S230 calculates depth information based on the data collected in the data collection step S220 (the 2D images, shape information, Cartesian coordinates, and Euler angles), converts the 2D images into 3D images using the calculated depth information, and registers them to generate 3D modeling data.
  • the 3D modeling data may be deformed (mirror deformation) or edited according to a user's request.
  • the output step S240 may output an artificial ear through 3D printing using a biocompatible material based on the 3D modeling data generated in the 3D modeling data generation step S230, or the artificial ear may be created by carving the patient's rib cartilage.
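The depth-based 2D-to-3D conversion of step S230 can be sketched as back-projecting each pixel with its depth through the camera pose reported by the position adjusting unit. The pinhole camera model, the intrinsic parameters, and the Z-Y-X Euler convention are assumptions, since the application does not specify them:

```python
import numpy as np

def euler_zyx_to_matrix(yaw, pitch, roll):
    """Rotation matrix from Z-Y-X Euler angles in radians (an assumed
    convention; the application does not fix one)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def pixel_to_world(u, v, depth, fx, fy, cx, cy, R, t):
    """Back-project pixel (u, v) with known depth into world coordinates
    using a pinhole model and the camera pose (R, t) derived from the
    stage's Euler angles and Cartesian coordinates."""
    x_cam = np.array([(u - cx) * depth / fx,
                      (v - cy) * depth / fy,
                      depth])
    return R @ x_cam + t

# A pixel at the principal point, 0.1 m in front of an unrotated camera,
# maps to a point 0.1 m along the world z axis.
R = euler_zyx_to_matrix(0.0, 0.0, 0.0)
t = np.zeros(3)
print(pixel_to_world(320, 240, 0.1, 600, 600, 320, 240, R, t))
```

Applying this to every pixel of a view yields that view's 3D image; the per-view poses then place all views in one world frame.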
  • FIG. 6 is a flowchart illustrating an artificial ear generation process in detail according to an exemplary embodiment of the present application.
  • in FIG. 6, the ear being photographed and scanned means the healthy ear
  • a process of generating an ear model to be attached to the affected side from 3D modeling data for the healthy ear will be described as an example.
  • the control unit 120 receives a 2D image from the camera 102 of the information acquisition unit 100 and receives information on Cartesian coordinates and Euler angles from the position adjusting unit 110 (S302).
  • the controller 120 extracts the feature points from the 2D image (S304), and determines whether the information acquisition unit 100 is disposed at the correct position and angle by comparing the extracted feature points with the feature point information stored in the storage medium 122. (S306).
  • if disposed at the correct position and angle, the control unit 120 matches the 2D image and shape information input from the information acquisition unit 100 with the Cartesian coordinates and Euler angles input from the position adjusting unit 110, and stores them in the storage medium 122 (S308).
  • when not correctly disposed, the control unit 120 generates, based on the Cartesian coordinates and Euler angles input from the position adjusting unit 110, a control signal for operating the XYZ linear stage 112 and the actuator 114 of the position adjusting unit 110, thereby operating the position adjusting unit 110 to change the position and angle of the information acquisition unit 100 (S310). The control unit 120 then returns to S302 and performs the subsequent steps.
  • by repeating this process, the control unit 120 photographs and scans the fossa, concha, external auditory meatus, helix, antihelix, tragus, and the like at various angles and positions, collects the shape information, 2D images, Cartesian coordinates, and Euler angles for each part, and stores them in the storage medium 122.
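The per-view record matched and stored in S308 can be sketched as a simple structure pairing each frame with the pose that produced it; all field names and types here are illustrative assumptions, not from the application:

```python
from dataclasses import dataclass
from typing import Any, Tuple

@dataclass
class CapturedView:
    """One record stored in the storage medium 122: a 2D image and the
    scanner's shape information, matched with the stage pose (Cartesian
    coordinates and Euler angles) that produced them. Field names are
    illustrative assumptions."""
    part: str                               # e.g. "fossa", "concha", "helix"
    image_2d: Any                           # camera frame for this view
    shape_info: Any                         # 3D scanner output for this view
    cartesian: Tuple[float, float, float]   # stage position (x, y, z)
    euler: Tuple[float, float, float]       # stage angles (yaw, pitch, roll)

# Example record for one captured view of the concha
view = CapturedView("concha", image_2d=None, shape_info=None,
                    cartesian=(10.0, 0.0, 5.0), euler=(0.0, 30.0, 0.0))
print(view.part)  # -> concha
```

Storing the pose alongside each frame is what lets the later steps compute depth and register the views without re-measuring the camera.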
  • the image processing unit 130 calculates depth information for each 2D image based on the shape information, the Cartesian coordinates, and the Euler angle for each 2D image (S312).
  • the image processing unit 130 converts each 2D image into a 3D image based on its depth information (S314) and stores the 3D images in the storage medium 122.
  • the image processing unit 130 generates 3D modeling data for the healthy ear by registering the 3D images based on the Cartesian coordinates and Euler angles of each 3D image (S316).
  • the image processing unit 130 generates 3D modeling data corresponding to the affected ear by applying a mirroring transformation to the 3D modeling data (S318).
  • the image processing unit 130 provides the 3D modeling data generated in S318 to the output unit 140, or to an output unit connected through a network (not shown), and requests output (S320).
  • the output unit 140 outputs the ear model by 3D printing the 3D modeling data using the biocompatible material (S322).
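The mirror deformation of S318 amounts to reflecting the healthy ear's geometry across the head's sagittal plane so it fits the opposite side. A minimal sketch, assuming the plane x = 0 and an N x 3 point array (the application does not fix a coordinate convention):

```python
import numpy as np

def mirror_points(points, plane_x=0.0):
    """Reflect an (N, 3) point cloud across the sagittal plane
    x = plane_x, turning healthy-ear geometry into a template for the
    affected side. If the data carries triangle faces, their winding
    order must also be flipped to keep outward-facing normals."""
    mirrored = points.copy()
    mirrored[:, 0] = 2.0 * plane_x - mirrored[:, 0]
    return mirrored

pts = np.array([[1.0, 2.0, 3.0],
                [4.0, 5.0, 6.0]])
print(mirror_points(pts))
```

Mirroring twice returns the original cloud, which makes the transform easy to sanity-check before printing.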
  • FIG. 7 is a flowchart illustrating an artificial ear generation process according to another exemplary embodiment of the present application.
  • the control unit 120 receives a 2D image from the camera 102 of the information acquisition unit 100 and receives information on Cartesian coordinates and Euler angles from the position adjusting unit 110 (S402).
  • the control unit 120 determines whether a marker can be extracted from the 2D image (S404). When the marker is extracted, the control unit 120 matches the 2D image and shape information input from the information acquisition unit 100 with the Cartesian coordinates and Euler angles input from the position adjusting unit 110, and stores them in the storage medium 122 (S406).
  • when the marker is not extracted, the control unit 120 generates, based on the Cartesian coordinates and Euler angles input from the position adjusting unit 110, a control signal for operating the XYZ linear stage 112 and the actuator 114 of the position adjusting unit 110, thereby operating the position adjusting unit 110 to change the position and angle of the information acquisition unit 100 (S408). The control unit 120 then proceeds to S402 and performs the subsequent steps.
  • the ear is scanned at various angles and positions and stored in the storage medium 122.
  • by repeating this process, the control unit 120 photographs and scans the fossa, concha, external auditory meatus, helix, antihelix, tragus, and the like at various angles and positions, collects the shape information, 2D images, Cartesian coordinates, and Euler angles for each part, and stores them in the storage medium 122.
  • since steps S410 to S420 are the same as steps S312 to S322 described with reference to FIG. 6, a description thereof will be omitted.
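The pose-based registration of the per-view 3D images (S316, and equally S414 here) can be sketched as transforming each view's points into a common world frame using its recorded pose and concatenating the results; the (points, R, t) tuple layout is an assumption for illustration:

```python
import numpy as np

def register_scans(scans):
    """Merge per-view point clouds into one cloud in the world frame
    using the pose recorded for each view. Each scan is a tuple
    (points, R, t): an (N, 3) array in camera coordinates, the rotation
    built from the view's Euler angles, and the translation from its
    Cartesian coordinates."""
    merged = [pts @ R.T + t for pts, R, t in scans]
    return np.vstack(merged)

# Two single-point views: one at the origin pose, one shifted +10 along x
eye = np.eye(3)
scan_a = (np.array([[0.0, 0.0, 1.0]]), eye, np.zeros(3))
scan_b = (np.array([[0.0, 0.0, 1.0]]), eye, np.array([10.0, 0.0, 0.0]))
print(register_scans([scan_a, scan_b]))
```

Because the stage reports its pose directly, this coarse alignment needs no feature matching; a refinement step could still be layered on top if the mechanics introduce error.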
  • in step S801, a cast taken from the living ear may be produced. The cast may be made of gypsum, but is not limited thereto. In step S802, 3D modeling data corresponding to the cast ear may be generated. Step S802 is the same as the artificial ear generation process described with reference to FIGS. 6 to 7, and thus a detailed description thereof will be omitted.
  • step S803 a mold may be manufactured using the biocompatible material.
  • for example, a mixture of polydimethylsiloxane (PDMS) base and curing agent in a set ratio may be poured and cured, the PDMS may be cut or punctured to remove the 3D-printed ear, and the cut surface of the intaglio-shaped PDMS may be treated with oxygen plasma; the PDMS mold produced in this way can then be used with a biocompatible material.
  • the artificial ear corresponding to the 3D modeling data may be manufactured by plastic molding using the biocompatible material in the mold made in advance.
  • for a porous result, the artificial ear may be manufactured by grinding a substance that dissolves only in a specific solvent, such as salt crystals, to several hundred micrometers, putting the mixture into the mold, and then dissolving the crystals out.
  • FIG. 9 is a flowchart illustrating an artificial ear generation process according to another exemplary embodiment of the present application.
  • in step S901, a cast taken from the living ear may be produced. Since step S901 is the same as step S801, a detailed description is omitted.
  • in step S902, 3D modeling data corresponding to the cast ear may be generated.
  • Step S902 is the same as the artificial ear generation process described with reference to FIGS. 6 to 7, and thus a detailed description thereof will be omitted.
  • a mold can be made using the non-biocompatible material.
  • for example, a mold for the ear model can be made by fixing the previously printed artificial ear between two sand-filled frames, overlapping the two frames, and then extracting the artificial ear; the mold may also be output directly using a three-dimensional printing technique.
  • the artificial ear corresponding to the 3D modeling data may be manufactured by plastic molding using the biocompatible material in the mold previously manufactured.
  • the artificial ear generating method as described above may also be implemented in the form of a recording medium including instructions executable by a computer, such as a program module executed by a computer.
  • Computer readable media can be any available media that can be accessed by a computer and includes both volatile and nonvolatile media, removable and non-removable media.
  • Computer readable media may include both computer storage media and communication media.
  • Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Communication media typically includes computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave, or other transmission mechanism, and includes any information delivery media.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Vascular Medicine (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Transplantation (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Cardiology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Mechanical Engineering (AREA)
  • Manufacturing & Machinery (AREA)
  • Prostheses (AREA)

Abstract

The invention relates to a device for regenerating an artificial ear, comprising: an information acquisition unit for imaging and scanning various parts of an ear; a position adjusting unit which is connected to the information acquisition unit, changes the imaging and scanning position and angle, and outputs information on the position and angle; a control unit which receives the position and angle information, shape information on the various parts of the ear generated by the scanning, and a 2D image generated by the imaging; an image processing unit which generates a 3D image on the basis of the shape information, the position and angle information, and the 2D image, and generates 3D modeling data by combining the 3D images on the basis of the position and angle data; and an output unit which uses 3D printing to produce an artificial ear corresponding to the 3D modeling data.
PCT/KR2016/006178 2015-06-12 2016-06-10 Dispositif et procédé de reconstruction de l'oreille et nécessité de reconstruction WO2016200201A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20150083096 2015-06-12
KR10-2015-0083096 2015-06-12

Publications (1)

Publication Number Publication Date
WO2016200201A1 true WO2016200201A1 (fr) 2016-12-15

Family

ID=57504924

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2016/006178 WO2016200201A1 (fr) 2015-06-12 2016-06-10 Dispositif et procédé de reconstruction de l'oreille et nécessité de reconstruction

Country Status (2)

Country Link
KR (1) KR101818007B1 (fr)
WO (1) WO2016200201A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021061559A1 (fr) * 2019-09-23 2021-04-01 The Johns Hopkins University Formation sous vide d'échafaudages bioabsorbables thermoplastiques destinés à être utilisés dans la reconstruction auriculaire
CN112686884A (zh) * 2021-01-12 2021-04-20 李成龙 一种影像学标记特征自动建模系统及方法

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102227735B1 (ko) 2020-01-03 2021-03-15 국립암센터 장기의 3d 모델링 방법 및 3d 장기 모델
KR102658754B1 (ko) 2021-04-21 2024-04-17 부산대학교병원 외이 임플란트 및 외이 임플란트 제조 방법

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100682889B1 (ko) * 2003-08-29 2007-02-15 삼성전자주식회사 영상에 기반한 사실감 있는 3차원 얼굴 모델링 방법 및 장치
US7826643B2 (en) * 2003-01-31 2010-11-02 Technest Holdings, Inc. Three-dimensional ear biometrics system and method
US20130027515A1 (en) * 2010-03-30 2013-01-31 Michael Vinther Scanning of cavities with restricted accessibility
WO2014039429A1 (fr) * 2012-09-04 2014-03-13 Anthrogenesis Corporation Procédés de génération de tissu
US20140257518A1 (en) * 2013-03-08 2014-09-11 The Trustees Of Princeton University Multi-functional hybrid devices/structures using 3d printing

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009542840A (ja) * 2006-07-05 2009-12-03 エージェンシー フォー サイエンス, テクノロジー アンド リサーチ 多孔質ポリマー物品
US8417487B2 (en) * 2007-10-05 2013-04-09 3D Systems, Inc. Replaceable fairing for prosthetic limb or brace
US9968708B2 (en) * 2013-11-19 2018-05-15 Cornell University Tissue scaffold materials for tissue regeneration and methods of making



Also Published As

Publication number Publication date
KR20160146568A (ko) 2016-12-21
KR101818007B1 (ko) 2018-02-21

Similar Documents

Publication Publication Date Title
WO2016200201A1 (fr) Dispositif et procédé de reconstruction de l'oreille et nécessité de reconstruction
CA2160792C (fr) Procede et agencement utilisant un articulateur ainsi qu'un equipment informatique
Cheah et al. Integration of laser surface digitizing with CAD/CAM techniques for developing facial prostheses. Part 2: Development of molding techniques for casting prosthetic parts.
ATE412329T1 (de) Herstellungsverfahren und system zur schnellen erzeugung von hörhilfegerätschale
KR20100066538A (ko) 3d를 프리뷰하는 시스템 및 방법
WO2014073818A1 (fr) Procédé de création d'image d'implant et système de création d'image d'implant
JP2001166809A (ja) 実立体モデル作成装置、立体データ作成装置、疑似立体データ作成装置並びにその方法
US20140188260A1 (en) Method of digitally constructing a prosthesis
KR100730344B1 (ko) Ct를 이용한 인공치아 제조방법
US20170290685A1 (en) Advanced Fitment of Prosthetic Devices
JP3984585B2 (ja) お面の製造方法
CN107798656A (zh) 一种基于距离传感器和陀螺仪的口腔全景图像拼接方法
JP2017221329A (ja) 2次元画像を用いて生体の下顎開閉軸とバーチャル咬合器の開閉軸を一致させる方法
KR100469086B1 (ko) 3차원 골조직 또는 연조직 모형 제작 장치, 이의 응용방법및 이에 따른 모형
US20140257762A1 (en) Method and device for transferring statics
KR102006593B1 (ko) 포토그래메트리와 3d 프린터를 활용한 특수분장용 디지털 작업 시스템 및 이것을 이용한 디지털 작업 방법
KR101132747B1 (ko) 3차원 세라믹 다공성 인공지지체 및 그 제조방법
CN114533325B (zh) 基于动度监测的平坦黏膜扫描和光学整塑方法及系统
CN104603859A (zh) 牙科补缀及赝复体数字建档与制作的方法及其教学训练
WO2021215843A1 (fr) Procédé de détection de marqueur d'image buccale, et dispositif d'adaptation d'image buccale et procédé utilisant celui-ci
KR20080050283A (ko) 3차원 모션 데이터 생성을 위한 상관 관계 추출 방법과이를 이용한 실사 배경 영상에 인체형 캐릭터의 용이한합성을 위한 모션 캡쳐 시스템 및 방법
WO2019124845A1 (fr) Système de génération d'image et procédé pour le diagnostic d'implant
KR20210068044A (ko) 사람의 사진 안면 이미지들 및/또는 필름들을 상기 사람을 위한 치의학적 및/또는 미용적 치과 치료의 계획 및/또는 수복물의 준비에 통합하기 위한 방법
WO2021221443A1 (fr) Procédé et appareil pour calculer une position de génération de support
Santosi et al. An innovative photogrammetric system for 3D digitization of dental models

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16807841

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16807841

Country of ref document: EP

Kind code of ref document: A1