CN112008982B - Model printing device - Google Patents
- Publication number
- CN112008982B (application CN202010821015.XA)
- Authority
- CN
- China
- Prior art keywords
- tissue
- model
- dimensional
- target object
- printing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B29—WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
- B29C—SHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
- B29C64/00—Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
- B29C64/30—Auxiliary operations or equipment
- B29C64/386—Data acquisition or data processing for additive manufacturing
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B33—ADDITIVE MANUFACTURING TECHNOLOGY
- B33Y—ADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
- B33Y50/00—Data acquisition or data processing for additive manufacturing
Landscapes
- Chemical & Material Sciences (AREA)
- Engineering & Computer Science (AREA)
- Materials Engineering (AREA)
- Manufacturing & Machinery (AREA)
- Physics & Mathematics (AREA)
- Mechanical Engineering (AREA)
- Optics & Photonics (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
An embodiment of the present application provides a model printing apparatus. The apparatus includes: a data acquisition module for acquiring scanning imaging data of a target object; a model reconstruction module, connected with the data acquisition module, for establishing a three-dimensional digital model of the target object according to the scanning imaging data; a slicing module, connected with the model reconstruction module, for slicing the three-dimensional digital model to acquire printing data of each slice; and a printing module, connected with the slicing module, for performing three-dimensional printing according to the printing data of each slice to obtain a three-dimensional printing model of the target object. According to this technical scheme, integrated, automatic printing of medical tissue models is realized, the printing cycle is shortened, and the printing efficiency is improved.
Description
Technical Field
The embodiment of the application relates to the technical field of three-dimensional printing, in particular to a model printing device.
Background
With the continuous development of three-dimensional (3D) printing technology and the continuously growing demand for precise and personalized medical treatment, 3D printing has seen remarkable development in the medical industry.
However, existing three-dimensional printing of medical models requires a professional with a medical background to build the corresponding three-dimensional digital model from the medical image data, which consumes a large amount of time in the modeling process. Moreover, the whole workflow from data acquisition and modeling to printing requires offline data transfer among multiple different devices, and nearly every step requires human participation, resulting in low printing efficiency.
Disclosure of Invention
The embodiments of the present application provide a model printing apparatus that realizes integrated, automatic printing of medical tissue models, shortens the printing cycle, and improves printing efficiency.
In a first aspect, an embodiment of the present application provides a model printing apparatus, including:
the data acquisition module is used for acquiring scanning imaging data of a target object;
the model reconstruction module is used for establishing a three-dimensional digital model of the target object according to the scanning imaging data;
the slicing module is used for carrying out slicing processing on the three-dimensional digital model so as to obtain printing data of each slice;
and the printing module is used for carrying out three-dimensional printing according to the printing data of each slice so as to obtain a three-dimensional printing model of the target object.
Optionally, the model reconstruction module is specifically configured to:
and establishing a three-dimensional digital model of the target object according to the scanning imaging data based on a model reconstruction algorithm.
Optionally, the model reconstruction module includes:
the tissue segmentation unit is used for carrying out tissue extraction according to the scanning imaging data so as to obtain the scanning imaging data of each tissue;
and the three-dimensional model calculation unit is used for determining a tissue three-dimensional model of each tissue according to the scanning imaging data of each tissue and determining a three-dimensional digital model of the target object according to the tissue three-dimensional model of each tissue.
Optionally, the tissue segmentation unit includes:
a segmentation algorithm storage subunit, configured to store a tissue segmentation algorithm of the target object;
and the tissue segmentation subunit is used for performing tissue extraction on the target object according to the scanning imaging data based on the tissue segmentation algorithm.
Optionally, the tissue segmentation algorithm includes a preset number of tissue segmentation algorithms arranged in a preset sequence, and the tissue segmentation subunit is specifically configured to:
select a current tissue segmentation algorithm according to the preset sequence, and perform tissue extraction on the target object according to the scanning imaging data based on the current tissue segmentation algorithm.
Optionally, the apparatus further comprises:
and the tissue identification unit is used for identifying each tissue and type of the target object according to the scanning imaging data or determining each tissue and type of the target object according to a matching result of the scanning imaging data and standard imaging data.
Optionally, the tissue segmentation algorithm includes at least two tissue segmentation algorithms, and the tissue segmentation subunit is specifically configured to:
determining a tissue segmentation algorithm of each tissue according to each tissue and the type thereof;
and for each type of tissue, performing tissue extraction on the tissue of the target object according to a tissue segmentation algorithm corresponding to the tissue and scanning imaging data.
Optionally, the model reconstruction module further includes:
and the organization naming unit is used for naming the organization according to the type of the organization aiming at each organization so as to determine the organization name of the organization.
Optionally, the model reconstruction module further includes:
a standard data acquisition unit for acquiring standard scan data of each tissue of the target object;
the tissue data comparison unit is used for judging whether the tissue is successfully extracted or not according to the comparison result of the standard scanning data of the tissue and the scanning imaging data of the tissue aiming at each tissue;
correspondingly, the three-dimensional model calculation unit is specifically configured to:
for each tissue, when the tissue extraction is successful, determining a tissue three-dimensional model of the tissue according to the scanning imaging data of the tissue; and determining a three-dimensional digital model of the target object according to the three-dimensional model of the tissue of each tissue.
Optionally, the three-dimensional model calculation unit includes:
the tissue model determining subunit is used for determining a tissue three-dimensional model of each tissue according to the scanning imaging data of each tissue;
and the model fusion subunit is used for fusing the tissue three-dimensional models of the tissues to determine the three-dimensional digital model of the target object.
Optionally, the model printing apparatus further includes:
and the model repairing module is used for performing model repairing on the three-dimensional digital model of the target object to obtain a three-dimensional repairing model of the target object.
Optionally, the model printing apparatus further includes:
the model comparison module is used for storing and comparing the three-dimensional repair model and the three-dimensional digital model to obtain a model comparison result;
and the algorithm improvement module is used for improving the model reconstruction algorithm according to the model comparison result.
Optionally, the model reconstruction algorithm comprises an artificial intelligence AI modeling algorithm.
Optionally, the slicing module comprises:
the attribute distribution unit is used for performing attribute distribution on the tissue three-dimensional model of each tissue according to at least one item of tissue type, tissue name and position information of the tissue three-dimensional model of the tissue, wherein the attribute comprises at least one item of color, hardness, elasticity and transparency;
and the slicing unit is used for slicing the three-dimensional digital model according to the attribute of each tissue three-dimensional model so as to obtain the printing data of each slice.
Optionally, the model printing apparatus further includes:
and the mark adding module is used for adding a guide mark to the three-dimensional digital model according to the disease species of the target object.
Optionally, the model printing apparatus further includes:
the operation guide plate generation module is used for generating a three-dimensional model of the operation guide plate of the target object according to the three-dimensional digital model;
correspondingly, the slicing module is specifically configured to:
and slicing the three-dimensional digital model and the three-dimensional model of the surgical guide plate to obtain printing data of each slice.
Optionally, the model printing apparatus further includes:
and the post-processing module is used for performing post-processing on the three-dimensional printing model.
Optionally, the model printing apparatus further includes:
and the feedback module is used for acquiring a feedback result of the three-dimensional printing model based on a preset user interface, and storing and displaying the feedback result.
In a second aspect, an embodiment of the present application provides a model printing method, including:
acquiring, via a data acquisition module, scan imaging data of a target object;
establishing, via a model reconstruction module, a three-dimensional digital model of the target object from the scan imaging data;
slicing the three-dimensional digital model through a slicing module to obtain printing data of each slice;
performing three-dimensional printing according to the printing data of each slice through a printing module to obtain a three-dimensional printing model of the target object;
the data acquisition module is connected with the model reconstruction module, the model reconstruction module is connected with the slicing module, and the slicing module is connected with the printing module.
Optionally, the building a three-dimensional digital model of the target object according to the scan imaging data includes:
and establishing a three-dimensional digital model of the target object according to the scanning imaging data based on a model reconstruction algorithm.
Optionally, the building a three-dimensional digital model of the target object from the scan imaging data includes:
performing tissue extraction according to the scanning imaging data to obtain the scanning imaging data of each tissue;
determining a tissue three-dimensional model of each tissue according to the scanning imaging data of each tissue;
and determining a three-dimensional digital model of the target object according to the tissue three-dimensional model of each tissue.
Optionally, the tissue extraction according to the scan imaging data includes:
acquiring a tissue segmentation algorithm of the target object;
and based on the tissue segmentation algorithm, performing tissue extraction on the target object according to the scanning imaging data.
Optionally, the tissue segmentation algorithm includes a preset number of tissue segmentation algorithms arranged according to a preset sequence, and the tissue extraction of the target object according to the scanning imaging data based on the tissue segmentation algorithm includes:
and selecting a current tissue segmentation algorithm according to a preset sequence, and performing tissue extraction on the target object according to the scanning imaging data based on the current tissue segmentation algorithm.
Optionally, before performing tissue extraction on the target object according to the scanning imaging data based on the tissue segmentation algorithm, the method further includes:
identifying each tissue of the target object and its type according to the scanning imaging data; or
and determining each tissue and the type of the target object according to the matching result of the scanning imaging data and the standard imaging data.
Optionally, the tissue segmentation algorithm includes at least two tissue segmentation algorithms, and the tissue extraction of the target object according to the scan imaging data based on the tissue segmentation algorithm includes:
determining a tissue segmentation algorithm corresponding to each tissue according to each tissue and the type of each tissue;
and for each type of tissue, performing tissue extraction on the tissue of the target object according to a tissue segmentation algorithm corresponding to the tissue and scanning imaging data.
Optionally, after identifying the respective tissues and types thereof of the target object according to the scanning imaging data, the method further includes:
for each tissue, naming the tissue according to the type of the tissue to determine the tissue name of the tissue.
Optionally, after obtaining the scanning imaging data of each tissue, the method further includes:
acquiring standard scanning data of each tissue of the target object;
for each tissue, judging whether the tissue is successfully extracted according to a comparison result of standard scanning data of the tissue and scanning imaging data of the tissue;
correspondingly, the determining a three-dimensional model of the tissue of each tissue according to the scanning imaging data of each tissue comprises:
for each tissue, when the tissue extraction is successful, a tissue three-dimensional model of the tissue is determined from the scanned imaging data of the tissue.
Optionally, said determining a three-dimensional digital model of said target object from said three-dimensional model of tissue of each said tissue comprises:
model fusion is performed on the tissue three-dimensional models of the tissues to determine a three-dimensional digital model of the target object.
Optionally, after the three-dimensional digital model of the target object is built, the method further includes:
and performing model restoration on the three-dimensional digital model of the target object to obtain a three-dimensional restoration model of the target object.
Optionally, the model printing method further includes:
storing and comparing the three-dimensional repairing model and the three-dimensional digital model to obtain a model comparison result;
and improving the model reconstruction algorithm according to the model comparison result.
Optionally, the three-dimensional digital model of the target object is composed of the tissue three-dimensional models of the respective tissues, and the slicing processing on the three-dimensional digital model includes:
for each tissue, performing attribute distribution on the tissue three-dimensional model of the tissue according to at least one of the tissue type, the tissue name and the position information of the tissue three-dimensional model of the tissue, wherein the attributes comprise at least one of color, hardness, elasticity and transparency;
and slicing the three-dimensional digital model according to the attribute of each tissue three-dimensional model.
Optionally, after the three-dimensional digital model of the target object is built, the method further includes:
and adding a guide mark to the three-dimensional digital model according to the disease species of the target object.
Optionally, after the three-dimensional digital model of the target object is built, the method further includes:
generating a three-dimensional model of a surgical guide of the target object from the three-dimensional digital model;
correspondingly, the slicing process for the three-dimensional digital model includes:
and slicing the three-dimensional digital model and the three-dimensional model of the surgical guide plate.
Optionally, the model printing method further includes:
and carrying out post-processing on the three-dimensional printing model.
Optionally, the model printing method further includes:
and acquiring a feedback result of the three-dimensional printing model based on a preset user interface, and storing and displaying the feedback result.
In a third aspect, an embodiment of the present application further provides a model printing system, which includes a three-dimensional printer, a memory, and at least one processor; wherein the memory stores computer-executable instructions; the at least one processor executes computer-executable instructions stored by the memory to cause the at least one processor to perform a model printing method as provided in any embodiment of the present application to control the three-dimensional printer to print a three-dimensional printing model of the target object.
In a fourth aspect, the present application further provides a computer-readable storage medium, in which computer-executable instructions are stored, and when the computer-executable instructions are executed by a processor, the computer-executable instructions are used to implement the model printing method as provided in any embodiment of the present application.
The embodiments of the present application provide a model printing apparatus that automatically establishes a three-dimensional digital model of a target object from the target object's scanning imaging data, then slices the model and performs three-dimensional printing to obtain a three-dimensional printed model of the target object. This realizes an integrated process from scan data to three-dimensional model, simplifies the operation flow, realizes automatic three-dimensional printing of tissue models, shortens the printing cycle, and improves printing efficiency; at the same time, it improves the automation and intelligence of tissue three-dimensional printing, reduces the operation difficulty, and improves the user experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is an application scenario diagram of a model printing method according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of model printing provided in one embodiment of the present application;
FIG. 3 is a flow chart of a method of model printing provided in another embodiment of the present application;
FIG. 4 is a flowchart of step S302 in the embodiment of FIG. 3 of the present application;
FIG. 5 is a flow chart of a method of printing a model according to another embodiment of the present application;
FIG. 6 is a flow chart of a method of model printing provided in another embodiment of the present application;
FIG. 7 is a schematic diagram of a model printing apparatus according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a model reconstruction module according to the embodiment shown in FIG. 7;
FIG. 9 is a schematic structural diagram of a model reconstruction module according to the embodiment shown in FIG. 7 of the present application;
FIG. 10 is a schematic structural diagram of a three-dimensional model calculation unit in the embodiment of FIG. 8 of the present application;
FIG. 11 is a schematic structural diagram of a slicing module in the embodiment of FIG. 7 of the present application;
fig. 12 is a schematic structural diagram of a model printing system according to an embodiment of the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The following describes the technical solutions of the present disclosure and how to solve the above technical problems in specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
The following explains an application scenario of the embodiment of the present application:
Fig. 1 is an application scenario diagram of a model printing method according to an embodiment of the present application. As shown in Fig. 1, in order to facilitate formulation of a surgical plan, analysis of a patient's condition, and communication with the patient, scanning imaging data of the imaged region of the patient has to be copied from the medical imaging device 110 into the modeling device 120 loaded with modeling software 121, where a three-dimensional model of the imaged region is built by a professional. The three-dimensional model is then copied from the modeling software 121 into the slicing device 130 loaded with slicing software 131 for slicing to obtain slice data, and the slice data is copied into the three-dimensional printer 140 for printing. However, the medical imaging device 110, the modeling device 120, the slicing device 130, and the three-dimensional printer 140 are often managed and operated on different platforms, and each step, such as building the three-dimensional model and slicing the model, requires human participation, which results in a low degree of automation of model three-dimensional printing, long time consumption, and poor user experience.
In order to solve the above problems, the main concept of the technical solution of the embodiment of the present application is: according to the scanning imaging data of the target object, a three-dimensional digital model of the target object is automatically established, and the model is subjected to slicing processing and three-dimensional printing, so that a three-dimensional printing model of the target object is obtained, the integration process from the scanning data to the three-dimensional model is realized, the automatic three-dimensional printing of an organization model is realized, the printing period is shortened, and the printing efficiency is improved.
Fig. 2 is a flowchart of a model printing method according to an embodiment of the present application. The model printing method may be executed by a processor or an electronic device, such as a computer. As shown in fig. 2, the model printing method provided in this embodiment includes the following steps:
in step S201, scan imaging data of a target object is acquired.
Specifically, the scan imaging data of the target object may be acquired via the data acquisition module.
Step S202, establishing a three-dimensional digital model of the target object according to the scanning imaging data.
Specifically, a three-dimensional digital model of the target object may be established according to the scanning imaging data via a model reconstruction module, wherein the model reconstruction module is connected to the data acquisition module, so that the scanning imaging data may be automatically transmitted.
Specifically, a three-dimensional digital model of the target object may be established according to parameters such as a gray value, a CT value (also referred to as Hounsfield value), a position, and the like of each pixel of the scan imaging data.
Optionally, the building a three-dimensional digital model of the target object from the scan imaging data includes:
and establishing a three-dimensional digital model of the target object according to the scanning imaging data based on a model reconstruction algorithm.
Step S203, slice processing is performed on the three-dimensional digital model to acquire print data of each slice.
Specifically, the three-dimensional digital model may be sliced by a slicing module to obtain print data of each slice, wherein the slicing module is connected to the model reconstruction module, so that data corresponding to the three-dimensional digital model may be automatically transmitted.
And step S204, performing three-dimensional printing according to the printing data of each slice to obtain a three-dimensional printing model of the target object.
Specifically, three-dimensional printing can be performed according to the printing data of each slice via a printing module to obtain a three-dimensional printing model of the target object, wherein the slicing module is connected with the printing module, so that the printing data can be automatically transmitted.
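As a minimal, illustrative sketch only, the four connected modules of steps S201 to S204 could be chained in software as follows; the synthetic volume, the bone-threshold "reconstruction," and the per-layer print data below are placeholder assumptions and not the concrete algorithms of this application:

```python
import numpy as np

class DataAcquisitionModule:
    """Step S201: obtain scanning imaging data (here a synthetic CT-like volume in HU)."""
    def acquire(self) -> np.ndarray:
        return np.random.randint(-1000, 1500, size=(32, 64, 64)).astype(float)

class ModelReconstructionModule:
    """Step S202: build a (voxelized) three-dimensional digital model from the volume."""
    def reconstruct(self, volume: np.ndarray) -> np.ndarray:
        return volume > 400  # illustrative: keep only bone-like voxels

class SlicingModule:
    """Step S203: slice the model into layers and produce per-slice print data."""
    def slice(self, model: np.ndarray) -> list:
        return [layer for layer in model]  # one boolean layer bitmap per slice

class PrintingModule:
    """Step S204: drive the printer with each slice's print data."""
    def print_model(self, slices: list) -> None:
        for i, layer in enumerate(slices):
            print(f"layer {i}: {int(layer.sum())} voxels to deposit")

# The modules are executed in sequence, mirroring their connections in the apparatus.
volume = DataAcquisitionModule().acquire()
model = ModelReconstructionModule().reconstruct(volume)
slices = SlicingModule().slice(model)
PrintingModule().print_model(slices)
```

Each stage consumes the previous stage's output directly, which is what makes the automatic data transmission between the connected modules possible.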
According to the model printing method provided by this embodiment of the application, a three-dimensional digital model of the target object is automatically established according to the scanning imaging data of the target object, and the model is sliced and three-dimensionally printed to obtain a three-dimensional printed model of the target object. This realizes an integrated process from scan data to three-dimensional model, realizes automatic three-dimensional printing of tissue models, simplifies the operation flow, shortens the printing cycle, and improves printing efficiency; at the same time, it improves the automation and intelligence of tissue three-dimensional printing, reduces the operation difficulty, and improves the user experience.
Fig. 3 is a flowchart of a model printing method according to another embodiment of the present application, and as shown in fig. 3, the model printing method according to this embodiment is based on the model printing method according to the embodiment shown in fig. 2, in which step S202 is refined, a step of adding a guidance mark is added after step S202, and a step of performing post-processing on a three-dimensional printing model and obtaining a feedback result is added after step S204, and the model printing method according to this embodiment may include the following steps:
in step S301, scan imaging data of a target object is acquired.
Step S302, tissue extraction is carried out according to the scanning imaging data to obtain the scanning imaging data of each tissue.
The tissue may be various types of tissue included in the target object, such as bone, blood, a blood vessel wall, and the like.
Specifically, a CT value-tissue comparison table may be pre-established, so that the type of tissue corresponding to a pixel can be identified from the pixel's CT value. For example, when the CT value of a pixel is greater than 400 HU (Hounsfield Units), the tissue corresponding to the pixel is identified as bone; when the CT value of a pixel is in the range of 13 HU to 32 HU, the tissue corresponding to the pixel is identified as blood; and so on. Different tissues are then extracted based on the CT values to obtain the scanning imaging data corresponding to each tissue.
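As an illustrative sketch, such a CT value-tissue comparison table and the corresponding per-voxel lookup could take the following form; the HU ranges follow the examples above, and the exact values are not prescribed by this application:

```python
import numpy as np

# Illustrative CT value (HU) ranges per tissue type, following the example thresholds above.
CT_TISSUE_TABLE = {
    "bone":  (400.0, 3000.0),
    "blood": (13.0, 32.0),
    # further tissue types and ranges would be added here
}

def extract_tissue_masks(ct_volume: np.ndarray) -> dict:
    """Return a boolean mask per tissue, i.e. the scanning imaging data of each tissue."""
    masks = {}
    for tissue, (low, high) in CT_TISSUE_TABLE.items():
        masks[tissue] = (ct_volume >= low) & (ct_volume <= high)
    return masks

ct_volume = np.random.randint(-1000, 1500, size=(16, 64, 64)).astype(float)
masks = extract_tissue_masks(ct_volume)
for tissue, mask in masks.items():
    print(tissue, int(mask.sum()), "voxels")
```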
Optionally, fig. 4 is a flowchart of step S302 in the embodiment shown in fig. 3 of the present application, and as shown in fig. 4, step S302 includes:
and step S3021, obtaining a tissue segmentation algorithm of the target object.
Step S3022, based on the tissue segmentation algorithm, performing tissue extraction on the target object according to the scanning imaging data to obtain scanning imaging data of each tissue.
Further, different tissues may correspond to different tissue segmentation algorithms. Specifically, the tissue type of each tissue of the target object can be determined through tissue identification, and then the tissue segmentation algorithm of the current tissue is determined according to the tissue type of the current tissue, so that the selection of the appropriate tissue segmentation algorithm for different types of tissues is realized, and the precision of tissue segmentation is improved.
Step S303, determining a tissue three-dimensional model of each tissue according to the scanning imaging data of each tissue.
Specifically, after the scanning imaging data of each tissue is obtained, the three-dimensional model of each tissue is calculated from that tissue's scan data; the specific process is the same as the process for establishing the three-dimensional digital model of the target object and is not repeated here. In this way, the tissue three-dimensional model of each tissue is obtained.
Further, the tissue three-dimensional model of each tissue can be determined according to the scanning imaging data of the tissue based on a preset three-dimensional model calculation algorithm. The three-dimensional model calculation algorithm of each tissue can be obtained according to a pre-established tissue-three-dimensional model calculation algorithm correspondence table.
Specifically, different types of tissues can be subjected to tissue extraction according to different or the same scanning imaging data, that is, the scanning imaging data of each tissue can be obtained by adopting the same scanning imaging data based on a tissue segmentation algorithm, and then a tissue three-dimensional model of each tissue is calculated; different scanning data can be adopted, and the scanning imaging data of each tissue is obtained based on a tissue segmentation algorithm, so that the tissue three-dimensional model of each tissue is calculated.
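One possible way to compute a tissue three-dimensional model from a tissue's extracted mask is surface extraction with marching cubes; this is an illustrative choice (assuming scikit-image is available), since the application leaves the concrete three-dimensional model calculation algorithm open:

```python
import numpy as np
from skimage import measure  # assumption: scikit-image is available

def tissue_mesh_from_mask(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """Build a surface mesh (the tissue three-dimensional model) from one tissue's mask.

    Marching cubes is used here purely as an illustrative surface-extraction step.
    """
    verts, faces, _normals, _values = measure.marching_cubes(
        mask.astype(np.uint8), level=0.5, spacing=spacing
    )
    return verts, faces

# Example: a synthetic spherical "tissue" mask
zz, yy, xx = np.mgrid[0:32, 0:32, 0:32]
mask = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 < 10 ** 2
verts, faces = tissue_mesh_from_mask(mask)
print(verts.shape, faces.shape)
```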
Step S304, determining a three-dimensional digital model of the target object according to the tissue three-dimensional model of each tissue.
Specifically, the three-dimensional data model of the target object may be determined according to the positional relationship of each tissue and the tissue three-dimensional model.
Specifically, since the medical model of the target object is generally formed by different tissues nesting and penetrating each other, the tissue three-dimensional models created by extracting the scanning imaging data of each tissue are independent of one another. In order to represent the positional relationship of the tissues more faithfully, model fusion needs to be performed on the tissue three-dimensional models, so that a three-dimensional digital model with higher accuracy is obtained.
Optionally, said determining a three-dimensional digital model of said target object from said three-dimensional model of tissue of each said tissue comprises:
model fusion is performed on the tissue three-dimensional models of the tissues to determine a three-dimensional digital model of the target object.
Specifically, model fusion can be performed on the tissue three-dimensional models according to the positions of the tissues, so that an overall three-dimensional digital model of the target object is obtained.
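A minimal sketch of such model fusion, assuming each tissue mesh is already expressed in a common patient coordinate system, could merge the per-tissue meshes into one model while retaining a tissue label per face:

```python
import numpy as np

def fuse_tissue_models(tissue_models: dict) -> dict:
    """Fuse per-tissue meshes (already positioned in a common coordinate system)
    into one three-dimensional digital model, keeping a tissue label per face."""
    all_verts, all_faces, face_labels = [], [], []
    offset = 0
    for name, (verts, faces) in tissue_models.items():
        all_verts.append(verts)
        all_faces.append(faces + offset)          # re-index faces into the merged vertex array
        face_labels.extend([name] * len(faces))   # remember which tissue each face belongs to
        offset += len(verts)
    return {
        "vertices": np.vstack(all_verts),
        "faces": np.vstack(all_faces),
        "face_tissue": face_labels,
    }

# usage with two toy triangles standing in for tissue meshes
bone = (np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]]), np.array([[0, 1, 2]]))
vessel = (np.array([[0.0, 0, 1], [1, 0, 1], [0, 1, 1]]), np.array([[0, 1, 2]]))
fused = fuse_tissue_models({"bone": bone, "vessel": vessel})
print(fused["vertices"].shape, fused["faces"].shape, set(fused["face_tissue"]))
```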
Step S305, adding a guide mark to the three-dimensional digital model according to the disease type of the target object.
Specifically, the disease type of the target object can be input manually, added into the scanning imaging data in advance, or identified automatically from the three-dimensional digital model of the target object; the guide mark to be added to the three-dimensional digital model is then determined according to the surgical plan corresponding to that disease type. The surgical plan corresponding to the disease type may be determined based on a pre-stored correspondence between disease types and surgical plans. Of course, the surgical plan may be modified by manual editing, as may the position of the guide mark or other parameters.
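An illustrative sketch of such a pre-stored disease-type-to-surgical-plan correspondence and of attaching the plan's guide marks to the digital model might look as follows; the disease name, plan, and mark coordinates shown here are hypothetical:

```python
# Hypothetical pre-stored correspondence between disease type and surgical plan / guide marks.
SURGICAL_PLAN_TABLE = {
    "femoral_fracture": {
        "plan": "open reduction and internal fixation",
        "guide_marks": [
            {"label": "entry point", "position_mm": (12.0, 40.5, 88.0)},
            {"label": "screw axis",  "position_mm": (15.0, 42.0, 95.0)},
        ],
    },
}

def add_guide_marks(model: dict, disease_type: str) -> dict:
    """Attach the guide marks of the matched surgical plan to the digital model."""
    entry = SURGICAL_PLAN_TABLE.get(disease_type)
    if entry is None:
        return model  # unknown disease type: leave the model unchanged
    model = dict(model)
    model["guide_marks"] = list(entry["guide_marks"])
    model["surgical_plan"] = entry["plan"]
    return model

marked = add_guide_marks({"vertices": [], "faces": []}, "femoral_fracture")
print(marked["surgical_plan"], len(marked["guide_marks"]), "guide marks")
```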
Optionally, after the three-dimensional digital model of the target object is built, the method further includes:
and generating a three-dimensional model of a surgical guide of the target object according to the three-dimensional digital model.
Further, a three-dimensional model of the surgical guide of the target object can be generated according to the three-dimensional digital model, the disease type of the target object, and the surgical plan corresponding to that disease type.
In step S306, the three-dimensional digital model to which the guidance mark is added is subjected to slicing processing to acquire print data of each slice.
And step S307, performing three-dimensional printing according to the printing data of each slice to acquire a three-dimensional printing model of the target object.
And step S308, post-processing the three-dimensional printing model.
The post-processing may include support removal, sanding, polishing, sterilization, and other finishing operations.
And S309, acquiring a feedback result of the three-dimensional printing model based on a preset user interface, and storing and displaying the feedback result.
In this embodiment, tissue extraction is performed on the scanning imaging data to obtain the scanning imaging data corresponding to each tissue, a tissue three-dimensional model is established for each tissue, and the tissue three-dimensional models are fused to obtain the three-dimensional digital model of the target object, which improves the accuracy of three-dimensional modeling. The three-dimensional digital model is sliced and printed to obtain the three-dimensional printed model, which realizes automatic printing of a three-dimensional physical model of the target object, improves printing efficiency, and reduces the operation difficulty of three-dimensional printing. At the same time, surgery-related guide marks are added to the three-dimensional digital model and a surgical guide is generated based on the disease type of the target object, so as to assist the related surgery and improve the guiding and assisting value of the three-dimensional printed model. Post-processing the three-dimensional printed model makes it better meet user requirements. User feedback on the three-dimensional printed model is obtained through a preset user interface, and the algorithms involved in three-dimensional printing are improved based on this feedback, thereby improving the accuracy of three-dimensional printing.
Fig. 5 is a flowchart of a model printing method according to another embodiment of the present application, and as shown in fig. 5, the model printing method according to this embodiment refines step S202 and step S203 based on the model printing method according to the embodiment shown in fig. 2, and the model printing method according to this embodiment may include the following steps:
step S501, scanning imaging data of the target object is acquired.
And step S502, identifying each tissue and type of the target object according to the scanning imaging data.
In particular, the respective tissues of the target object and their types may be determined based on CT values or gray scale values of the scanned imaging data.
Optionally, the tissues and types of the target object may be determined according to the matching result of the scanning imaging data and the standard imaging data.
Optionally, after identifying the respective tissues and types thereof of the target object according to the scanning imaging data, the method further includes:
for each tissue, tissue naming is performed on the tissue according to the type of the tissue so as to determine a tissue name of the tissue.
And S503, determining a tissue segmentation algorithm corresponding to each tissue according to each tissue and the type of each tissue.
Wherein the tissue segmentation algorithm comprises at least two tissue segmentation algorithms.
Specifically, according to the tissue type of each tissue, a tissue segmentation algorithm matched with the type of each tissue is matched for each tissue, so that the precision of the tissue segmentation algorithm is improved.
Specifically, after the tissue included in the target object and the type of each tissue are determined, the tissue segmentation algorithm of each tissue may be determined based on a correspondence relationship between the tissue type and the tissue segmentation algorithm, which is established in advance. Different tissue segmentation algorithms are matched for different types of tissues so as to improve the accuracy and efficiency of tissue segmentation.
Specifically, the execution sequence of each tissue segmentation algorithm for tissue extraction or segmentation may be automatically determined according to the type of the tissue included in the target object, so that the tissue segmentation algorithms may be sequentially executed according to the execution sequence for the current tissue to obtain the scan imaging data of each tissue.
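A minimal sketch of matching a segmentation algorithm to each identified tissue type and executing the algorithms in a preset sequence could look as follows; the threshold-based routines stand in for whatever segmentation algorithms are actually deployed per tissue:

```python
import numpy as np

# Illustrative correspondence between tissue type and segmentation routine; in the
# apparatus this table (and the execution order) would be pre-stored.
def segment_bone(volume):
    return volume > 400                       # threshold on HU, illustrative only

def segment_blood(volume):
    return (volume >= 13) & (volume <= 32)    # threshold on HU, illustrative only

SEGMENTATION_ALGORITHMS = {"bone": segment_bone, "blood": segment_blood}
EXECUTION_ORDER = ["bone", "blood"]           # preset sequence of tissue extraction

def extract_all_tissues(volume: np.ndarray, tissue_types: list) -> dict:
    """Run the matching segmentation algorithm for each identified tissue, in order."""
    results = {}
    for tissue in EXECUTION_ORDER:
        if tissue not in tissue_types:
            continue
        algorithm = SEGMENTATION_ALGORITHMS[tissue]   # algorithm matched to the tissue type
        results[tissue] = algorithm(volume)
    return results

volume = np.random.randint(-1000, 1500, size=(8, 32, 32)).astype(float)
masks = extract_all_tissues(volume, ["bone", "blood"])
print({t: int(m.sum()) for t, m in masks.items()})
```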
Step S504, aiming at each type of tissue, tissue extraction is carried out on the tissue of the target object according to a tissue segmentation algorithm and scanning imaging data corresponding to the tissue, so as to obtain the scanning imaging data of each tissue.
Further, in order to reduce tissue extraction errors, after tissue extraction is performed on the target object, the scanning imaging data of each tissue may be compared with pre-stored data of the corresponding standard tissue, and whether the tissue extraction is accurate is determined according to the comparison result.
And step S505, acquiring standard scanning data of each organization of the target object.
The standard scan data may be scan imaging data of a standard tissue corresponding to each tissue stored in advance. The standard scan data may include data such as CT value distribution intervals and shape data of the standard tissue corresponding to each tissue.
During scan imaging, a contrast medium is sometimes injected into the blood to improve imaging quality, which can bring the CT value of the blood region into the CT value range of bone; during bone extraction the blood region may then be extracted by mistake, causing a large extraction error. To improve the accuracy of tissue extraction, in addition to the CT value, the pre-stored morphological data of the standard tissue corresponding to each tissue also needs to be considered to determine whether the extracted tissue matches the standard tissue; if it matches, the tissue extraction is successful.
Step S506, aiming at each tissue, judging whether the tissue is successfully extracted according to the comparison result of the standard scanning data of the tissue and the scanning imaging data of the tissue.
Specifically, for each type of tissue, when the matching degree of the scanned imaging data of the tissue and the standard scanning data is higher than the preset matching degree, the tissue extraction is determined to be successful. Further, after the matching is successful, the organization type of the standard organization can also be determined as the organization type of the organization.
Furthermore, a three-dimensional digital pre-formed model of the tissue can be established from the scanning imaging data of the tissue; this pre-forming can be done at lower precision to improve the efficiency of pre-modeling, and whether the tissue was extracted successfully is then judged by comparing the pre-formed model with the standard three-dimensional model of the tissue.
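As one illustrative way to judge extraction success against the standard scan data, a shape-overlap measure such as the Dice coefficient can be compared against a preset matching degree; the application does not fix a particular matching metric, so this is an assumption:

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """One possible matching-degree measure between the extracted tissue and the
    standard tissue; the disclosure does not prescribe a specific metric."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / total if total else 1.0

def extraction_successful(extracted_mask, standard_mask, threshold: float = 0.8) -> bool:
    """Judge whether tissue extraction succeeded by comparing the extracted data
    with the pre-stored standard scan data (shape overlap above a preset matching degree)."""
    return dice_coefficient(extracted_mask, standard_mask) >= threshold

standard = np.zeros((16, 16), dtype=bool); standard[4:12, 4:12] = True
extracted = np.zeros((16, 16), dtype=bool); extracted[5:12, 4:12] = True
print(extraction_successful(extracted, standard))   # True for this example
```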
And step S507, aiming at each tissue, when the tissue is successfully extracted, determining a tissue three-dimensional model of the tissue according to the scanning imaging data of the tissue.
Specifically, for each tissue, when the tissue extraction is successful, a three-dimensional model of the tissue is determined according to the scanning imaging data of the tissue based on a three-dimensional model calculation algorithm.
Further, different tissues may correspond to different three-dimensional model calculation algorithms.
Specifically, the correspondence between the tissue and the three-dimensional model calculation algorithm may be established in advance, so that the three-dimensional model calculation algorithm of each tissue is determined according to the correspondence.
And step S508, carrying out model fusion on the tissue three-dimensional models of the tissues to determine the three-dimensional digital model of the target object.
Further, after the three-dimensional digital model of the target object is determined, the three-dimensional digital model may be adjusted according to an instruction input by a user, or the three-dimensional digital model may be repaired based on a preset repair algorithm, thereby improving the accuracy of modeling.
Optionally, after determining the three-dimensional digital model of the target object, the method further includes:
and performing model restoration on the three-dimensional digital model of the target object to obtain a three-dimensional restoration model of the target object.
Specifically, the model repairing can be performed on the three-dimensional digital model of the target object according to a user operation instruction or a preset repairing algorithm.
Furthermore, the model reconstruction algorithm can be improved according to the comparison result of the three-dimensional repair model and the three-dimensional digital model, the model reconstruction algorithm can be one or more of a tissue segmentation algorithm, a three-dimensional model calculation algorithm and an AI modeling algorithm, and different tissues can correspond to different model reconstruction algorithms, so that the accuracy of the three-dimensional model can be improved.
Step S509, for each tissue, performing attribute assignment on the tissue three-dimensional model of the tissue according to the type of the tissue.
Wherein the attribute comprises at least one of color, softness, elasticity and transparency.
Specifically, the three-dimensional digital model is composed of, or fused from, the tissue three-dimensional models of the respective tissues, and the tissue three-dimensional models within it can be labeled, so that attribute assignment is performed on the tissue three-dimensional models according to the labels.
Specifically, a correspondence between the tissue type of the target object and each attribute may be established in advance, and the value of each attribute corresponding to each type of tissue is specified in the correspondence, so that the attribute value of the three-dimensional tissue model corresponding to each tissue of the target object may be determined according to the correspondence.
Optionally, the performing attribute assignment on the tissue three-dimensional model of the tissue according to the type of the tissue includes:
and performing attribute distribution on the tissue three-dimensional model of the tissue according to at least one item of position information, tissue type and tissue name of the tissue three-dimensional model of the tissue.
The position information of the three-dimensional model of the tissue may be coordinate information or information capable of describing a relative position relationship between the tissues.
Specifically, the medical model of the target object usually includes a plurality of different tissues, such as soft tissue and hard tissue, and in order to distinguish the different tissues in the model, different attributes need to be assigned to the different tissues, so that based on the three-dimensional digital model, tissue models with different attributes can be printed.
Further, the type of the tissue can be determined according to the tissue name, and then a corresponding attribute value is determined for the three-dimensional model of the tissue.
Specifically, the tissue type of the tissue can be determined according to the recognition result by recognizing the tissue name, and then the value of each attribute of the three-dimensional tissue model corresponding to the tissue can be determined. For example, when the tissue name is XX artery, the color attribute of the tissue is determined to be red, when the tissue name is XX vein, the color attribute of the tissue is determined to be blue, and when the tissue name is XX bone, the color attribute of the tissue is determined to be white.
Further, attribute assignment may be performed on the tissue three-dimensional model according to the position information and tissue type of the tissue three-dimensional model, for example, determining the transparency attribute according to the position information. For example, when a plurality of tissues are nested within one another, the transparency of the outer tissue is set higher than that of the inner tissue, so that the inner tissue remains visible.
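A minimal sketch of attribute assignment by tissue type, name, and position, following the color examples above; the hardness values and the use of a nesting depth to drive transparency are illustrative assumptions:

```python
# Illustrative attribute table keyed by tissue type, following the color examples above;
# hardness values are placeholders.
ATTRIBUTE_TABLE = {
    "artery": {"color": "red",   "hardness": "soft", "transparency": 0.0},
    "vein":   {"color": "blue",  "hardness": "soft", "transparency": 0.0},
    "bone":   {"color": "white", "hardness": "hard", "transparency": 0.0},
}

def assign_attributes(tissue_models: dict, nesting_depth: dict) -> dict:
    """Assign printing attributes to each tissue three-dimensional model.

    Tissue type/name selects color and hardness; the positional information (here
    simplified to a nesting depth) raises the transparency of the outermost tissue
    so that inner structures remain visible in the printed model.
    """
    assigned = {}
    for name, model in tissue_models.items():
        attrs = dict(ATTRIBUTE_TABLE.get(
            name, {"color": "gray", "hardness": "medium", "transparency": 0.0}))
        if nesting_depth.get(name, 0) == 0 and len(tissue_models) > 1:
            attrs["transparency"] = 0.6   # outermost tissue printed more transparent
        assigned[name] = {"model": model, "attributes": attrs}
    return assigned

result = assign_attributes({"bone": "...", "artery": "..."}, {"bone": 0, "artery": 1})
print({k: v["attributes"] for k, v in result.items()})
```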
And step S510, slicing the three-dimensional digital model according to the attribute of each tissue three-dimensional model to acquire printing data of each slice.
Specifically, the three-dimensional digital model needs to be sliced according to the attributes of the three-dimensional models of the tissues, that is, corresponding slicing processing modes are determined for the tissues with different attributes, so that the reproduction degree of the attributes allocated to the three-dimensional models is improved.
For example, when the attribute of a tissue three-dimensional model is assigned as colored translucent, the slicing process may partition the three-dimensional model into an outer colored region and an inner transparent region, with the outer colored region wrapping around the inner transparent region; the outer colored region is configured to be formed by printing a colored transparent material together with a colorless transparent material, and the inner transparent region is configured to be formed by printing a colorless transparent material, thereby achieving a colored translucent effect. When the attribute of the tissue three-dimensional model is assigned as colored opaque, the slicing process may leave the three-dimensional model unpartitioned and configure it as a whole to be formed of a colored transparent material and a white material.
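As an illustrative sketch, the slicing step could map each assigned attribute set to a material configuration along the lines described above; the region partitioning and material names are simplified placeholders:

```python
def material_plan(attributes: dict) -> dict:
    """Map an assigned tissue attribute set to an illustrative material configuration
    used during slicing (region partitioning and material choices are simplified)."""
    color = attributes.get("color", "white")
    transparency = attributes.get("transparency", 0.0)
    if transparency > 0.0:
        # colored translucent: thin colored shell over a clear core
        return {
            "regions": {
                "outer_shell": {"materials": [f"{color}-transparent", "clear"]},
                "inner_core":  {"materials": ["clear"]},
            }
        }
    # colored opaque: single region, colored transparent material backed by white
    return {"regions": {"whole": {"materials": [f"{color}-transparent", "white"]}}}

print(material_plan({"color": "red", "transparency": 0.6}))
print(material_plan({"color": "white", "transparency": 0.0}))
```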
And step S511, performing three-dimensional printing according to the printing data of each slice to obtain a three-dimensional printing model of the target object.
In this embodiment, the tissue types are automatically identified from the scanning imaging data, and the tissue segmentation algorithm used for tissue extraction is then determined according to the tissue type, which improves the accuracy of tissue extraction. At the same time, each tissue is named based on its tissue type, so that information such as the tissue type can be determined directly from the tissue name in subsequent steps, improving the efficiency of three-dimensional printing. The scanning imaging data of each tissue is determined through tissue extraction, the tissue three-dimensional model of each tissue is determined from that data, and a complete three-dimensional digital model of the target object is obtained through model fusion. Different attribute values are assigned to the different tissues of the three-dimensional digital model according to tissue type, and slicing and three-dimensional printing are performed based on these attribute values, which makes it easier to distinguish different tissues and improves the richness of the three-dimensional model.
Fig. 6 is a flowchart of a model printing method according to another embodiment of the present application, and as shown in fig. 6, the model printing method according to this embodiment refines step S202 based on the model printing method according to the embodiment shown in fig. 2, and adds steps of model repair and AI modeling algorithm improvement after step S202, and the model printing method according to this embodiment may include the following steps:
step S601, acquiring scan imaging data of the target object.
Step S602, establishing a three-dimensional digital model of the target object according to the scanning imaging data based on a model reconstruction algorithm.
Specifically, the model reconstruction algorithm can automatically process the scanning imaging data to generate its three-dimensional digital model; the model reconstruction algorithm can be a neural network model obtained through training on a large amount of historical data. Specifically, a corresponding model reconstruction algorithm can be trained for each tissue, and the parameters of the model reconstruction algorithm can be continuously updated or adjusted during use, thereby improving the accuracy of three-dimensional model reconstruction.
And S603, performing model restoration on the three-dimensional digital model of the target object to obtain a three-dimensional restoration model of the target object.
Specifically, the model repairing of the three-dimensional digital model of the target object may include performing model optimization processing operations such as medical simulation rendering and three-dimensional structure surface smoothing on the three-dimensional digital model, so that the model is more vivid.
Specifically, the three-dimensional digital model can be repaired according to a user repairing instruction.
And S604, storing and comparing the three-dimensional repair model and the three-dimensional digital model to obtain a model comparison result.
And S605, improving the model reconstruction algorithm according to the model comparison result.
Furthermore, the model reconstruction algorithm of the target object can be improved or adjusted according to the model comparison result, so that the precision of the three-dimensional digital model is improved.
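A minimal sketch of improving one reconstruction parameter from the comparison of the three-dimensional repair model with the original three-dimensional digital model is shown below; adjusting a single HU threshold is an illustrative assumption, whereas improving an actual AI modeling algorithm would instead retrain or fine-tune it on such comparison results:

```python
import numpy as np

def improvement_step(reconstructed: np.ndarray, repaired: np.ndarray,
                     hu_threshold: float, learning_rate: float = 5.0) -> float:
    """Adjust an illustrative reconstruction parameter (a HU segmentation threshold)
    from the comparison result of the repaired and originally reconstructed models.

    If the repair removed material, the threshold is raised so the next
    reconstruction segments more conservatively, and vice versa.
    """
    over_segmented = np.logical_and(reconstructed, ~repaired).sum()
    under_segmented = np.logical_and(~reconstructed, repaired).sum()
    total = reconstructed.sum() + repaired.sum()
    if total == 0:
        return hu_threshold
    correction = (over_segmented - under_segmented) / total
    return hu_threshold + learning_rate * correction

reconstructed = np.zeros((16, 16), dtype=bool); reconstructed[2:14, 2:14] = True
repaired = np.zeros((16, 16), dtype=bool); repaired[3:13, 3:13] = True
print(improvement_step(reconstructed, repaired, hu_threshold=400.0))
```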
And step S606, slicing the three-dimensional repair model to acquire printing data of each slice.
And step S607, performing three-dimensional printing according to the printing data of each slice to acquire a three-dimensional printing model of the target object.
In the embodiment, the automatic establishment of the three-dimensional digital model of the target object is realized based on the model reconstruction algorithm, and the automation degree and efficiency of three-dimensional printing are improved; meanwhile, the accuracy of modeling is improved by repairing the three-dimensional digital model, so that the fidelity of the three-dimensional printing model is improved; the model reconstruction algorithm is improved through the repair result, the intelligent degree of three-dimensional printing is improved, and the accuracy of subsequent model printing is improved.
Fig. 7 is a schematic structural diagram of a model printing apparatus according to an embodiment of the present application, and as shown in fig. 7, the model printing apparatus according to the embodiment includes: a data acquisition module 710, a model reconstruction module 720, a slicing module 730, and a printing module 740.
The data acquisition module 710 is configured to acquire scanning imaging data of a target object; a model reconstruction module 720, configured to create a three-dimensional digital model of the target object according to the scan imaging data; a slicing module 730, configured to slice the three-dimensional digital model to obtain print data of each slice; and the printing module 740 is configured to perform three-dimensional printing according to the print data of each slice to obtain a three-dimensional printing model of the target object.
Specifically, the output end of the data acquisition module 710 is connected with the input end of the model reconstruction module 720, the output end of the model reconstruction module 720 is connected with the input end of the slicing module 730, and the output end of the slicing module 730 is connected with the input end of the printing module 740, so that automatic transmission of data at each stage is realized, automatic printing of a three-dimensional printing model is realized, integrated printing equipment of a medical model is provided, and the efficiency of three-dimensional printing is improved.
The embodiments of the present application provide a model printing apparatus that automatically establishes a three-dimensional digital model of a target object from the target object's scanning imaging data, then slices the model and performs three-dimensional printing to obtain a three-dimensional printed model of the target object. This realizes an integrated process from scan data to three-dimensional model, simplifies the operation flow, realizes automatic three-dimensional printing of tissue models, shortens the printing cycle, and improves printing efficiency; at the same time, it improves the automation and intelligence of tissue three-dimensional printing, reduces the operation difficulty, and improves the user experience.
The target object may be any part or region of a target user, such as the chest, legs, neck, or head. The scan imaging data may be data scanned by a corresponding medical imaging device, either in real time or at a historical time. The medical imaging device includes, but is not limited to, an X-ray device, a CT (Computed Tomography) device, an MRI (Magnetic Resonance Imaging) device, or a functional MRI device, and the apparatus may also be connected directly to a hospital Picture Archiving and Communication System (PACS) or other databases. Of course, to broaden the data sources, the scan imaging data may also be obtained from a removable storage device through a corresponding interface, such as a USB (Universal Serial Bus) interface or a wireless connection interface. Specifically, the scan imaging data may be data in DICOM (Digital Imaging and Communications in Medicine) format. A three-dimensional digital model is a digital model that describes aspects of the target object, such as shape, color, and texture, in numerical form. The print data may be data in a format directly recognizable by the three-dimensional printer.
Specifically, the scan imaging data may be data acquired based on the same medical imaging device, or may be data acquired based on different medical imaging devices. Illustratively, the scan imaging data may include both CT data and ultrasound data.
Specifically, the data acquisition module 710 may acquire the scanning imaging data of the target object in a wired or wireless manner.
Further, the model printing apparatus may further include a preprocessing module for preprocessing the scan imaging data after the scan imaging data is acquired. Wherein the preprocessing may be at least one of noise reduction processing, format conversion, image enhancement processing, and the like.
Specifically, the model reconstruction module 720 may automatically construct a three-dimensional digital model of the target object from the scan imaging data in the virtual three-dimensional space of three-dimensional modeling software, where the software may be UG, Pro/E, 3DS MAX, SolidWorks, or the like, and the modeling may be based on a Non-Uniform Rational B-Splines (NURBS) modeling manner.
Specifically, the model reconstruction module 720 may establish a three-dimensional digital model of the target object according to the gray-scale value, the CT value (also called Hounsfield value), the position, and other parameters of each pixel of the scanned imaging data. Optionally, the model reconstruction module 720 is specifically configured to:
and establishing a three-dimensional digital model of the target object according to the scanning imaging data based on a model reconstruction algorithm.
Wherein the model reconstruction algorithm comprises an Artificial Intelligence (AI) modeling algorithm. The AI modeling algorithm may be a neural network algorithm, a deep learning algorithm, or another machine learning based modeling algorithm.
Further, the AI modeling algorithm may specifically be any one, or a fusion of several, of a U-net neural network algorithm, a Graph-Cut image segmentation algorithm, an atlas-based non-rigid registration segmentation algorithm, and a multi-atlas registration algorithm.
Further, the model reconstruction module 720 may determine a model reconstruction algorithm corresponding to the current target object based on a pre-established correspondence between the model reconstruction algorithm and the target object according to the type of the target object, and further establish a three-dimensional digital model of the target object according to the scan imaging data based on the model reconstruction algorithm.
The three-dimensional digital model of the target object is automatically established through a model reconstruction algorithm, so that the modeling efficiency is improved, and the operation difficulty of three-dimensional printing of the medical organization model is greatly reduced.
Specifically, the slicing module 730 may slice the three-dimensional digital model based on slicing software, which converts the three-dimensional digital model into print control data recognizable by the three-dimensional printer, so that the printer prints based on that data to obtain the three-dimensional print model of the target object. The three-dimensional digital model, obtained through modeling software in a format such as STL or OBJ, is cut horizontally by the slicing software into individual planar cross-sections; these cross-sections are further processed into print data, which is then sent to the three-dimensional printer for printing.
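As one possible illustration of the horizontal cutting step, the sketch below slices a mesh into planar cross-sections with the trimesh package. The package choice, file path, and layer height are assumptions; the application does not prescribe a specific slicing implementation.

```python
# Minimal sketch of cutting an STL model into horizontal cross-sections
# (assumed dependencies: trimesh, numpy). Values are illustrative.
import numpy as np
import trimesh

def slice_mesh(stl_path: str, layer_height: float = 0.1):
    """Return one planar cross-section per layer of the mesh."""
    mesh = trimesh.load(stl_path)
    z_min, z_max = mesh.bounds[0][2], mesh.bounds[1][2]
    sections = []
    for z in np.arange(z_min + layer_height, z_max, layer_height):
        section = mesh.section(plane_origin=[0, 0, z], plane_normal=[0, 0, 1])
        if section is not None:  # some heights may not intersect the model
            sections.append(section)
    return sections
```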
Specifically, when the slicing module 730 slices the three-dimensional digital model, parameters such as the slice layer height, shell thickness, filament stringing (retraction), filling density, printing speed, support, first layer adhesion, and initial layer thickness need to be set. The layer height describes the resolution of 3D printing and specifies the height of each deposited layer of material; the larger the layer height, the blurrier the model details. The shell refers to the outer walls the three-dimensional printer prints before printing the hollow interior; the shell thickness is the thickness of this outer wall, and a thicker shell makes the outer wall of the model thicker and firmer. The support refers to a support structure printed under a cantilever structure when the target object includes one; the support type may be a block support, a tree support, a grid support, or the like.
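Purely as an illustration of how these slicing parameters could be grouped into one configuration object, the following sketch uses a Python dataclass; the field names and default values are assumptions, not values prescribed by the application.

```python
# Minimal sketch of a slicing-parameter configuration; defaults are illustrative.
from dataclasses import dataclass

@dataclass
class SliceConfig:
    layer_height_mm: float = 0.1        # smaller values preserve finer model detail
    shell_thickness_mm: float = 1.2     # thickness of the printed outer wall
    infill_density_pct: float = 20.0    # fill density of interior regions
    print_speed_mm_s: float = 50.0
    support_type: str = "tree"          # e.g. "block", "tree", or "grid" supports
    first_layer_adhesion: str = "brim"
    initial_layer_height_mm: float = 0.2

# Example usage with overridden values:
config = SliceConfig(layer_height_mm=0.05, support_type="grid")
```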
Specifically, the printing module 740 may be a three-dimensional printer (3D printer, 3DP) or other three-dimensional printing device, and is configured to sequentially print the printing data corresponding to each layer of slices, so as to obtain a three-dimensional solid model of the target object.
The three-dimensional printer forms the object by depositing materials such as liquid photosensitive resin, molten plastic filament, or gypsum powder layer by layer through jetting, extrusion, or binder-jet bonding; the specific operation process is otherwise similar to that of a traditional printer.
Specifically, the three-dimensional printing device may be a fused deposition modeling device, a three-dimensional photocuring modeling device, a selective laser sintering device, a selective laser melting device, a three-dimensional inkjet modeling device, a jet fusion modeling device, or the like.
Further, the printing module 740 may determine a required three-dimensional printing device according to a user requirement, and then perform three-dimensional printing according to the printing data of each slice based on the three-dimensional printing device, so as to obtain a three-dimensional printing model of the target object. Specifically, the user demand may be a print data demand, a print cost demand, a material demand, and the like.
Optionally, fig. 8 is a schematic structural diagram of the model reconstruction module in the embodiment shown in fig. 7 of the present application, and as shown in fig. 8, the model reconstruction module 720 includes:
a tissue segmentation unit 721 configured to perform tissue extraction according to the scanned imaging data to obtain scanned imaging data of each tissue; a three-dimensional model calculation unit 722, configured to determine a three-dimensional model of the tissue according to the scanning imaging data of the tissue, and determine a three-dimensional digital model of the target object according to the three-dimensional model of the tissue.
The tissue may be various types of tissue included in the target object, such as bone, blood, a blood vessel wall, and the like.
Specifically, the tissue segmentation unit 721 specifically includes: the comparison table establishing subunit is used for establishing a CT value-tissue comparison table in advance; a tissue identification subunit configured to identify a type of tissue corresponding to a pixel based on a CT value of the pixel, for example, when the CT value of the pixel is greater than 400HU (Hounsfield Unit), the tissue corresponding to the pixel is identified as a bone, and when the CT value of the pixel is in a 13HU to 32HU interval, the tissue corresponding to the pixel is identified as blood, or the like; and the tissue data extraction subunit is used for extracting different types of tissues so as to obtain the scanning imaging data corresponding to each tissue.
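As an illustration only, the sketch below applies a CT value-tissue comparison table to a Hounsfield-unit volume; the thresholds follow the example values in the text (bone above 400 HU, blood in the 13-32 HU interval), while the data structure and function names are assumptions.

```python
# Minimal sketch of the CT value-tissue lookup described above (numpy assumed).
import numpy as np

CT_TISSUE_TABLE = {
    "bone":  (400, np.inf),  # pixels above 400 HU treated as bone
    "blood": (13, 32),       # pixels in the 13-32 HU interval treated as blood
}

def extract_tissue_masks(volume_hu: np.ndarray) -> dict:
    """Return a boolean mask per tissue type according to the HU lookup table."""
    masks = {}
    for tissue, (low, high) in CT_TISSUE_TABLE.items():
        masks[tissue] = (volume_hu >= low) & (volume_hu <= high)
    return masks
```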
Optionally, the tissue segmentation unit 721 comprises:
a segmentation algorithm storage subunit, configured to store a tissue segmentation algorithm of the target object; and the tissue segmentation subunit is used for performing tissue extraction on the target object according to the scanning imaging data based on the tissue segmentation algorithm.
The tissue segmentation algorithm may be one or more, and the tissue segmentation algorithm may include one or more of a threshold method, a dynamic region growing algorithm, a clustering algorithm, and an edge detection algorithm.
Further, different tissues may correspond to different tissue segmentation algorithms. Specifically, the tissue segmentation subunit is specifically configured to: determine the tissue type of each tissue of the target object through tissue identification, determine the tissue segmentation algorithm of the current tissue according to its tissue type, and extract the current tissue from the scanning imaging data based on that algorithm, so that an appropriate tissue segmentation algorithm is selected for each type of tissue and the precision of tissue segmentation is improved.
For example, if the current tissue is an artery, the tissue segmentation algorithm is determined to be a dynamic region growing algorithm, and if the current tissue is a bone, the tissue segmentation algorithm is determined to be a threshold method, and the like.
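For illustration only, the tissue-type to segmentation-algorithm correspondence could be represented as a simple dispatch table, as in the sketch below; the algorithm implementations are placeholders standing in for a threshold method, a dynamic region growing algorithm, and so on.

```python
# Minimal sketch of a tissue-type to segmentation-algorithm correspondence.
def threshold_segmentation(volume, **kwargs):
    ...  # placeholder for a threshold method

def region_growing_segmentation(volume, **kwargs):
    ...  # placeholder for a dynamic region growing algorithm

TISSUE_SEGMENTATION_ALGORITHMS = {
    "bone":   threshold_segmentation,
    "artery": region_growing_segmentation,
}

def segment_tissue(tissue_type: str, volume):
    """Dispatch to the segmentation algorithm matched to the tissue type."""
    algorithm = TISSUE_SEGMENTATION_ALGORITHMS[tissue_type]
    return algorithm(volume)
```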
Optionally, the tissue segmentation algorithm includes a preset number of tissue segmentation algorithms arranged according to a preset sequence, and the tissue segmentation subunit is specifically configured to:
and selecting a current tissue segmentation algorithm according to a preset sequence, and performing tissue extraction on the target object according to the scanning imaging data based on the current tissue segmentation algorithm.
Specifically, the tissue segmentation subunit may specifically determine an order of each tissue segmentation algorithm according to the tissue type, that is, different tissue types may correspond to different tissue segmentation algorithm orders, or a default order may be adopted as the preset order.
Specifically, the tissue segmentation subunit may specifically adopt a tissue segmentation algorithm located at a first position according to a preset sequence, and perform tissue extraction on the target object according to the scan imaging data, if the tissue extraction is successful, a subsequent tissue segmentation algorithm is not required to be adopted, if the tissue extraction is failed, the tissue segmentation algorithm located at a second position is adopted to perform the tissue extraction, and so on until each tissue of the target object is successfully extracted.
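A minimal sketch of this ordered fallback is given below; the success check is passed in as a callable standing in for the comparison against standard tissue data described in the surrounding text, and all names are illustrative assumptions.

```python
# Minimal sketch of trying tissue segmentation algorithms in a preset order.
def extract_with_fallback(volume, algorithms, extraction_succeeded):
    """Run each algorithm in order and return the first result judged successful."""
    for algorithm in algorithms:
        result = algorithm(volume)
        if extraction_succeeded(result):  # e.g. comparison with standard tissue data
            return result
    raise RuntimeError("no tissue segmentation algorithm succeeded")
```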
Further, the tissue segmentation subunit may determine whether the extraction is successful according to the tissue standard data of each tissue.
Specifically, after the scanning imaging data of each tissue is obtained, the three-dimensional model calculating unit 722 calculates a tissue three-dimensional model from the scan data of each tissue; the specific calculation process is the same as the process of establishing the three-dimensional digital model of the target object described above and is not repeated here. The three-dimensional digital model of the target object is then determined from the tissue three-dimensional models of the respective tissues.
Further, the three-dimensional model calculation unit 722 may also determine a tissue three-dimensional model of each tissue from the scanned imaging data of the tissue based on a preset three-dimensional model calculation algorithm. The three-dimensional model calculation algorithm of each tissue can be obtained according to a pre-established tissue-three-dimensional model calculation algorithm correspondence table.
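As one possible illustration of such a per-tissue calculation, the sketch below converts a binary tissue mask into a surface mesh with the marching cubes routine from scikit-image. This is a common choice, not necessarily the algorithm the application intends, and the voxel spacing values are assumptions.

```python
# Minimal sketch: tissue mask -> triangle mesh via marching cubes (scikit-image assumed).
import numpy as np
from skimage.measure import marching_cubes

def tissue_mask_to_mesh(mask: np.ndarray, spacing=(1.0, 0.7, 0.7)):
    """Extract a triangle mesh (vertices, faces) from a boolean tissue mask."""
    verts, faces, _normals, _values = marching_cubes(
        mask.astype(np.uint8), level=0.5, spacing=spacing)
    return verts, faces
```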
Specifically, different types of tissues may be extracted from the same or from different scanning imaging data; that is, the scanning imaging data of each tissue may be obtained from the same scan based on a tissue segmentation algorithm and the tissue three-dimensional model of each tissue calculated from it, or the scanning imaging data of each tissue may be obtained from different scans and the tissue three-dimensional models calculated accordingly.
Illustratively, a three-dimensional model of tissue of a bone may be created based on the CT scan imaging data, while a three-dimensional model of tissue of a blood vessel may be created based on the ultrasound data.
Specifically, the three-dimensional model calculation unit 722 may determine the three-dimensional data model of the target object based on the positional relationship of the respective tissues and the tissue three-dimensional model.
Optionally, the model printing apparatus further includes:
the tissue identification unit is used for identifying each tissue of the target object and its type according to the scanning imaging data, or for determining each tissue of the target object and its type according to a matching result between the scanning imaging data and standard imaging data.
Specifically, the tissue identification unit may determine the respective tissue of the target object and its type based on the CT value or the gray value of the scanned imaging data.
Optionally, the segmentation algorithm storage subunit stores at least two tissue segmentation algorithms, and the tissue segmentation subunit is specifically configured to:
determining a tissue segmentation algorithm corresponding to each tissue according to each tissue and the type of each tissue; and for each type of tissue, performing tissue extraction on the tissue of the target object according to a tissue segmentation algorithm corresponding to the tissue and scanning imaging data.
Specifically, according to the tissue type of each tissue, a tissue segmentation algorithm matched with the type of each tissue is matched for each tissue, so that the precision of the tissue segmentation algorithm is improved.
Specifically, after the tissue included in the target object and the type of each tissue are determined, the tissue segmentation algorithm of each tissue may be determined based on a correspondence relationship between the tissue type and the tissue segmentation algorithm, which is established in advance. Different tissue segmentation algorithms are matched for different types of tissues so as to improve the accuracy and efficiency of tissue segmentation.
Specifically, the execution order of each tissue segmentation algorithm for tissue extraction or segmentation may also be automatically determined according to the type of the tissue included in the target object, so that the tissue segmentation algorithms may be sequentially executed according to the execution order with respect to the current tissue to obtain the scan imaging data of each tissue.
Further, in order to reduce the error of tissue extraction, a tissue extraction verification unit may also be included; after tissue extraction is performed on the target object, it compares the scan imaging data of each tissue with pre-stored data of the corresponding standard tissue and determines from the comparison result whether the tissue extraction is accurate.
Optionally, the model reconstruction module 720 further includes:
and the tissue naming unit is used for naming each tissue according to the type of the tissue, so as to determine the tissue name of the tissue.
The tissue name may include the type of the tissue and may further include corresponding identification information. The identification information may be an identifier or number corresponding to the target object, used to distinguish different patients or different target objects.
For example, assuming that the target object is the chest of patient A, the tissue naming unit may determine that the tissue name of the coronary artery tissue is patient A-coronary artery.
When scan imaging is performed, a contrast medium may be injected into the blood to improve imaging quality; the CT value of the blood then falls within the CT value range of bone, so that the blood is extracted by mistake during bone extraction and the extraction error becomes large. To improve the accuracy of tissue extraction, it is therefore necessary not only to consider the CT value but also to comprehensively consider the pre-stored morphological data of the standard tissue of each tissue, and to judge whether the extracted tissue matches the standard tissue; if so, the tissue extraction is successful.
Optionally, fig. 9 is a schematic structural diagram of the model reconstruction module in the embodiment shown in fig. 7 of the present application, and as can be known from fig. 7 to 9, the model reconstruction module 720 in this embodiment further includes: a standard data acquisition unit 723 and an organization data comparison unit 724,
the standard data acquiring unit 723 is used for acquiring standard scanning data of each tissue of the target object; the tissue data comparison unit 724 is used for judging, for each tissue, whether the tissue is successfully extracted according to the comparison result of the standard scanning data of the tissue and the scanning imaging data of the tissue. Accordingly, the three-dimensional model calculation unit 722 is specifically configured to: for each tissue, when the tissue extraction is successful, determine a tissue three-dimensional model of the tissue according to the scanning imaging data of the tissue; and determine a three-dimensional digital model of the target object according to the tissue three-dimensional models of the tissues.
The standard scan data may be scan imaging data of a standard tissue corresponding to each tissue stored in advance. The standard scan data may include data such as CT value distribution intervals and shape data of the standard tissue corresponding to each tissue.
Specifically, the tissue data comparison unit 724 is specifically configured to: for each type of tissue, when the matching degree between the scanning imaging data of the tissue and the standard scanning data is higher than a preset matching degree, determine that the tissue is successfully extracted. Further, after the matching succeeds, the tissue type of the standard tissue may also be taken as the tissue type of the tissue.
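For illustration only, the sketch below implements one plausible "matching degree" check between an extracted tissue mask and standard tissue data, using the Dice coefficient; the application does not prescribe this similarity measure, and the threshold value is an assumption.

```python
# Minimal sketch of a matching-degree check for tissue extraction (numpy assumed).
import numpy as np

def extraction_successful(tissue_mask: np.ndarray,
                          standard_mask: np.ndarray,
                          threshold: float = 0.8) -> bool:
    """Return True when the overlap with the standard tissue exceeds the threshold."""
    intersection = np.logical_and(tissue_mask, standard_mask).sum()
    dice = 2.0 * intersection / (tissue_mask.sum() + standard_mask.sum())
    return dice > threshold
```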
Further, if the comparison result indicates that the tissue extraction is unsuccessful, prompt information may be generated through a prompt information unit to ask related personnel to intervene manually; alternatively, the tissue segmentation algorithm of the tissue may be replaced or adjusted to obtain a new tissue extraction result and new scanning imaging data of the tissue, which are again compared with the standard scanning data of the tissue, until the tissue extraction succeeds.
Further, a preliminary three-dimensional digital model of the tissue may be established from the scanning imaging data of the tissue by a pre-modeling module; this preliminary modeling may be performed at lower precision to improve efficiency, and the tissue data comparison unit 724 may then determine whether the tissue was successfully extracted by comparing the preliminary three-dimensional digital model with the standard three-dimensional model of the tissue.
Specifically, the three-dimensional model calculation unit 722 is specifically configured to: and for each tissue, when the tissue extraction is successful, determining a tissue three-dimensional model of the tissue according to the scanning imaging data of the tissue based on a three-dimensional model calculation algorithm.
Further, different tissues may correspond to different three-dimensional model calculation algorithms.
Specifically, the three-dimensional model calculation unit 722 is further configured to pre-establish a correspondence relationship between the tissue and the three-dimensional model calculation algorithm, so as to determine the three-dimensional model calculation algorithm of each tissue according to the correspondence relationship.
Specifically, since the medical model of the target object is generally formed of different tissues that nest within and penetrate one another, the tissue three-dimensional models created by extracting the scanning imaging data of each tissue are independent of each other. To represent the positional relationship of the tissues more truly, the tissue three-dimensional models need to be fused, so that a more accurate three-dimensional digital model is obtained.
Optionally, fig. 10 is a schematic structural diagram of a three-dimensional model calculating unit in the embodiment shown in fig. 8 of the present application, and as shown in fig. 10, the three-dimensional model calculating unit 722 includes:
a tissue model determination subunit 7221, configured to determine a tissue three-dimensional model of each tissue according to the scanned imaging data of each tissue; a model fusion subunit 7222 configured to fuse the tissue three-dimensional models of the tissues to determine a three-dimensional digital model of the target object.
Specifically, the model fusion subunit 7222 may perform model fusion on the three-dimensional models of the tissues according to the positions of the tissues, so as to obtain an overall three-dimensional digital model of the target object.
Specifically, the model fusion subunit 7222 may cut and combine the overlapped portions of the three-dimensional tissue models corresponding to different tissues to implement model fusion, thereby obtaining a three-dimensional digital model closer to the target object itself.
Optionally, the model printing apparatus further includes:
and the model repairing module is used for performing model repairing on the three-dimensional digital model of the target object to obtain a three-dimensional repairing model of the target object.
Specifically, the model repairing module may perform model repairing on the three-dimensional digital model of the target object according to a user operation instruction or a preset repairing algorithm.
Furthermore, the model repairing module can also improve the model reconstruction algorithm according to the comparison result between the three-dimensional repair model and the three-dimensional digital model. The model reconstruction algorithm comprises one or more of the tissue segmentation algorithm, the three-dimensional model calculation algorithm, and the AI modeling algorithm, and different tissues may correspond to different model reconstruction algorithms, so that the accuracy of the three-dimensional model is improved.
Optionally, the model printing apparatus further includes:
the model comparison module is used for storing and comparing the three-dimensional repair model and the three-dimensional digital model to obtain a model comparison result; and the algorithm improvement module is used for improving the model reconstruction algorithm according to the model comparison result.
Optionally, fig. 11 is a schematic structural diagram of a slicing module in the embodiment shown in fig. 7 of the present application, and as shown in fig. 11, the slicing module 730 includes:
an attribute assigning unit 731, configured to assign, for each tissue, an attribute to the tissue three-dimensional model of the tissue according to at least one of a tissue type, a tissue name, and location information of the tissue three-dimensional model of the tissue, where the attribute includes at least one of color, hardness, elasticity, and transparency; a slicing unit 732, configured to slice the three-dimensional digital model according to the attribute of each tissue three-dimensional model to obtain print data of each slice.
The position information of the three-dimensional model of the tissue may be coordinate information or information capable of describing a relative position relationship between the tissues.
Specifically, the three-dimensional digital model is composed of, or fused from, the tissue three-dimensional models of the various tissues, and these tissue three-dimensional models can be labeled within the three-dimensional digital model, so that attribute assignment is performed on each tissue three-dimensional model according to its label.
Specifically, the attribute assigning unit 731 is specifically configured to: establish in advance the correspondence between the tissue types of the target object and the attributes, in which the value of each attribute for each type of tissue is specified, so that the attribute values of the tissue three-dimensional model corresponding to each tissue of the target object can be determined from this correspondence.
Specifically, the medical model of the target object usually includes a plurality of different tissues, such as soft tissue and hard tissue, and in order to distinguish the different tissues in the model, different attributes need to be assigned to the different tissues, so that based on the three-dimensional digital model, tissue models with different attributes can be printed.
Further, the type of the tissue can be determined according to the tissue name, and then the corresponding attribute value is determined for the tissue three-dimensional model.
Specifically, the tissue name may be identified, so that the tissue type of the tissue is determined according to the identification result, and further, the value of each attribute of the three-dimensional model of the tissue corresponding to the tissue is determined. For example, when the tissue name is XX artery, the color attribute of the tissue is determined to be red, when the tissue name is XX vein, the color attribute of the tissue is determined to be blue, and when the tissue name is XX bone, the color attribute of the tissue is determined to be white.
Further, attribute assignment may be performed on the tissue three-dimensional model of the tissue according to the position information and the tissue type of the tissue three-dimensional model of the tissue, such as determining a transparency attribute according to the position information. For example, when a plurality of tissues are nested with each other, the transparency of the tissue on the outer side is determined to be lower than the transparency of the tissue on the inner side.
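Purely as an illustration, the sketch below assigns attributes by inferring the tissue type from the tissue name; the color values follow the examples in the text, while the hardness values, data structure, and function names are assumptions.

```python
# Minimal sketch of tissue-name based attribute assignment; values are illustrative.
TISSUE_ATTRIBUTES = {
    "artery": {"color": "red",   "hardness": "soft"},
    "vein":   {"color": "blue",  "hardness": "soft"},
    "bone":   {"color": "white", "hardness": "hard"},
}

def assign_attributes(tissue_name: str) -> dict:
    """Infer the tissue type from its name and return the attribute values."""
    for tissue_type, attributes in TISSUE_ATTRIBUTES.items():
        if tissue_type in tissue_name.lower():  # e.g. "patient A-coronary artery"
            return attributes
    return {"color": "gray", "hardness": "soft"}  # fallback for unrecognized tissues
```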
Specifically, the three-dimensional digital model needs to be sliced according to the attributes of the three-dimensional models of the tissues, that is, corresponding slicing processing modes are determined for the tissues with different attributes, so that the reproduction degree of the attributes allocated to the three-dimensional models is improved.
For example, when the attribute of a tissue three-dimensional model is assigned as colored and translucent, the slicing process may partition the model into an outer colored region and an inner transparent region, the outer colored region wrapping around the inner transparent region; the outer colored region is configured to be printed from a colored transparent material and a colorless transparent material, and the inner transparent region from a colorless transparent material, thereby achieving the colored translucent effect. When the attribute of the tissue three-dimensional model is assigned as colored and opaque, the slicing process may leave the model unpartitioned and configure it as a whole to be formed from a colored transparent material and a white material.
Optionally, the model printing apparatus further includes:
and the mark adding module is used for adding a guide mark to the three-dimensional digital model according to the disease type of the target object.
The disease type of the target object may be input manually, or may be added to the scanned imaging data in advance, for example as a disease type mark, so that the disease type of the target object can be determined from the mark. Adding the guide mark may mean adding indicative information for guiding the surgical incision, the surgical path, and the like, for example an arrow mark, a numerical mark, or a shape mark.
Specifically, the mark adding module may automatically identify the disease type of the target object according to the three-dimensional digital model of the target object, and then determine the guide mark to be added to the three-dimensional digital model according to the surgical plan corresponding to that disease type. The surgical plan corresponding to the disease type may be determined from a pre-stored relationship between disease types and surgical plans. Of course, the surgical plan may be modified by manual editing, and the position of the guide mark or other parameters may also be modified.
Further, the mark adding module may identify the department to which the disease of the target object belongs according to the three-dimensional digital model of the target object, obtain the sub-library of that department from the disease type library, and match the target object against each disease type in the sub-library; when the matching degree between the target object and a certain disease type is greater than a preset matching threshold, for example 90%, that disease type is determined to be the disease type of the target object, or the disease type with the highest matching degree is so determined. Alternatively, the disease type of the target object may be determined by extracting disease features from the three-dimensional digital model and matching them against the standard features of each disease type.
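For illustration only, the matching against a department's sub-library could look like the sketch below; the feature extraction and similarity functions are placeholders, and only the 0.9 threshold follows the example in the text.

```python
# Minimal sketch of matching a target object against a sub-disease library.
def similarity(a, b) -> float:
    ...  # placeholder for a feature-matching score in [0, 1]

def identify_disease(model_features, sub_disease_library, threshold=0.9):
    """Return the best-matching disease type, or None if nothing exceeds the threshold."""
    best_disease, best_score = None, 0.0
    for disease, standard_features in sub_disease_library.items():
        score = similarity(model_features, standard_features)
        if score > best_score:
            best_disease, best_score = disease, score
    return best_disease if best_score > threshold else None
```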
Further, the model printing apparatus further includes a surgical navigation device. The surgical navigation device can perform surgical navigation according to the guide mark based on virtual reality equipment, such as VR (Virtual Reality) devices or AR (Augmented Reality) devices; the virtual reality device may specifically be AR glasses. During the operation, the guide mark, the three-dimensional digital model, and the target object can be registered, so that related personnel can see the guide mark on the target object and the three-dimensional digital model through the virtual reality equipment; the guide mark may be a surgical path position and/or a surgical incision position, thereby improving the success rate of the operation.
Optionally, the model printing apparatus further includes:
the operation guide plate generation module is used for generating a three-dimensional model of the operation guide plate of the target object according to the three-dimensional digital model; correspondingly, the slicing module 730 is specifically configured to: and slicing the three-dimensional digital model and the three-dimensional model of the surgical guide plate to obtain printing data of each slice.
The surgical guide plate is a device for assisting a surgery, such as an orthopedic surgery guide plate used for positioning, guiding, protecting tissues and the like of the orthopedic surgery.
Further, the surgical guide generation module may generate a three-dimensional model of the surgical guide of the target object according to the three-dimensional digital model, the disease type of the target object, and the surgical plan corresponding to the disease type.
Optionally, the model printing apparatus further includes:
and the post-processing module is used for performing post-processing on the three-dimensional printing model.
Wherein the post-treatment may include support removal, sanding, polishing, sterilization, and other treatment operations.
Specifically, the post-processing module may determine the post-processing operation according to the type of the three-dimensional printing device, and then perform post-processing on the printed three-dimensional model based on that operation.
For example, when the three-dimensional printing device is a three-dimensional inkjet forming device, the post-processing module may include a support removing mechanism, a gloss oil applying mechanism, a gloss oil curing mechanism, and the like, wherein the support removing mechanism may be a sand blasting machine, a water spraying machine, an alkali soaking machine, and the like. Of course, the post-processing may also include sterilizing the three-dimensionally printed medical model so that the requirements of the hospital can be met.
Optionally, the model printing apparatus further includes:
and the feedback module is used for acquiring a feedback result of the three-dimensional printing model based on a preset user interface, and storing and displaying the feedback result.
The feedback result may be the evaluation, score, and the like of the three-dimensional printing model by the user.
Specifically, the feedback result may include a score of each part of the three-dimensional printing model by the user, so that an algorithm corresponding to each part of the three-dimensional printing model may be adjusted based on the feedback result, such as the above-mentioned AI modeling algorithm, three-dimensional model calculation algorithm, tissue segmentation algorithm, and the like.
Furthermore, after the relevant surgical operation is performed, the user can give feedback on the three-dimensional printing model through the feedback module according to the surgical outcome, so that the reconstruction process of the three-dimensional printing model is refined and the reconstruction quality is improved.
Fig. 12 is a schematic structural diagram of a model printing system according to an embodiment of the present application. As shown in fig. 12, the model printing system includes: a three-dimensional printer 810, a memory 820, and at least one processor 830.
The three-dimensional printer 810 may be the three-dimensional printing device of any of the embodiments described above and is connected to the processor 830; a computer program is stored in the memory 820 and configured to be executed by the processor 830 to implement the model printing method provided in any of the embodiments corresponding to fig. 2 to 6 of the present application, so as to control the three-dimensional printer 810 to print the three-dimensional printing model of the target object.
Wherein the memory 820 and the processor 830 are coupled via a bus 840.
The relevant description may be understood by referring to the relevant description and effect corresponding to the steps in fig. 2 to fig. 6, and redundant description is not repeated here.
One embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, where the computer program is executed by a processor to implement the model printing method provided in any one of the embodiments corresponding to fig. 2 to fig. 6 of the present disclosure.
The computer readable storage medium may be, among others, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules is merely a division of logical functions, and an actual implementation may have another division, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (17)
1. A model printing system, comprising a model printing apparatus and at least one processor, the model printing apparatus being connected to the at least one processor, the at least one processor being configured to control the model printing apparatus to print a three-dimensional printing model of a target object;
the model printing apparatus includes:
the data acquisition module is used for acquiring scanning imaging data of a target object;
the model reconstruction module is connected with the data acquisition module and used for establishing a three-dimensional digital model of the target object according to the scanning imaging data;
the slicing module is connected with the model reconstruction module and used for slicing the three-dimensional digital model to acquire printing data of each slice;
the printing module is connected with the slicing module and used for performing three-dimensional printing according to the printing data of each slice so as to obtain a three-dimensional printing model of the target object;
the slicing module includes:
the attribute allocation unit is used for performing attribute allocation on the tissue three-dimensional model of the tissue according to at least one item of tissue type, tissue name and position information of the tissue three-dimensional model of the tissue aiming at each tissue of the target object, wherein the attribute comprises at least one item of color, hardness, elasticity and transparency;
and the slicing unit is used for slicing the three-dimensional digital model according to the slicing processing mode corresponding to the attribute of each tissue three-dimensional model so as to obtain the printing data of each slice.
2. The system of claim 1, wherein the model reconstruction module is specifically configured to:
and establishing a three-dimensional digital model of the target object according to the scanning imaging data based on a model reconstruction algorithm.
3. The system of claim 1 or 2, wherein the model reconstruction module comprises:
the tissue segmentation unit is used for carrying out tissue extraction according to the scanning imaging data so as to obtain the scanning imaging data of each tissue;
and the three-dimensional model calculation unit is used for determining a tissue three-dimensional model of each tissue according to the scanning imaging data of each tissue and determining a three-dimensional digital model of the target object according to the tissue three-dimensional model of each tissue.
4. The system of claim 3, wherein the tissue segmentation unit comprises:
a segmentation algorithm storage subunit, configured to store a tissue segmentation algorithm of the target object;
and the tissue segmentation subunit is used for performing tissue extraction on the target object according to the scanning imaging data based on the tissue segmentation algorithm.
5. The system of claim 4, wherein the tissue segmentation algorithm comprises a preset number of tissue segmentation algorithms arranged in a preset order, and the tissue segmentation subunit is specifically configured to:
and selecting a current tissue segmentation algorithm according to a preset sequence, and performing tissue extraction on the target object according to the scanning imaging data based on the current tissue segmentation algorithm.
6. The system of claim 4, wherein the apparatus further comprises:
and the tissue identification unit is used for identifying each tissue and type of the target object according to the scanning imaging data or determining each tissue and type of the target object according to a matching result of the scanning imaging data and standard imaging data.
7. The system according to claim 6, wherein the tissue segmentation algorithms comprise at least two tissue segmentation algorithms, the tissue segmentation subunit being specifically configured to:
determining a tissue segmentation algorithm of each tissue according to each tissue and the type thereof;
and for each type of tissue, performing tissue extraction on the tissue of the target object according to a tissue segmentation algorithm corresponding to the tissue and scanning imaging data.
8. The system of claim 6, wherein the model reconstruction module further comprises:
and the tissue naming unit is used for naming each tissue according to the type of the tissue, so as to determine the tissue name of the tissue.
9. The system of claim 3, wherein the model reconstruction module further comprises:
a standard data acquisition unit for acquiring standard scan data of each tissue of the target object;
the tissue data comparison unit is used for judging whether the tissue is successfully extracted or not according to the comparison result of the standard scanning data of the tissue and the scanning imaging data of the tissue aiming at each tissue;
correspondingly, the three-dimensional model calculation unit is specifically configured to:
for each tissue, when the tissue extraction is successful, determining a tissue three-dimensional model of the tissue according to the scanning imaging data of the tissue; and determining a three-dimensional digital model of the target object according to the three-dimensional model of the tissue of each tissue.
10. The system of claim 3, wherein the three-dimensional model computation unit comprises:
the tissue model determining subunit is used for determining a tissue three-dimensional model of each tissue according to the scanning imaging data of each tissue;
and the model fusion subunit is used for fusing the tissue three-dimensional models of the tissues so as to determine the three-dimensional digital model of the target object.
11. The system of claim 2, wherein the model printing apparatus further comprises:
and the model repairing module is used for performing model repairing on the three-dimensional digital model of the target object to obtain a three-dimensional repairing model of the target object.
12. The system of claim 11, wherein the model printing apparatus further comprises:
the model comparison module is used for storing and comparing the three-dimensional repair model and the three-dimensional digital model to obtain a model comparison result;
and the algorithm improvement module is used for improving the model reconstruction algorithm according to the model comparison result.
13. The system of claim 2, wherein the model reconstruction algorithm comprises an Artificial Intelligence (AI) modeling algorithm.
14. The system of claim 1, wherein the model printing apparatus further comprises:
and the mark adding module is used for adding a guide mark to the three-dimensional digital model according to the disease type of the target object.
15. The system of claim 1, wherein the model printing apparatus further comprises:
the operation guide plate generation module is used for generating a three-dimensional model of the operation guide plate of the target object according to the three-dimensional digital model;
correspondingly, the slicing module is specifically configured to:
and slicing the three-dimensional digital model and the three-dimensional model of the surgical guide plate to obtain printing data of each slice.
16. The system of claim 1, wherein the model printing apparatus further comprises:
and the post-processing module is used for performing post-processing on the three-dimensional printing model.
17. The system of claim 1, wherein the model printing apparatus further comprises:
and the feedback module is used for acquiring a feedback result of the three-dimensional printing model based on a preset user interface, and storing and displaying the feedback result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010821015.XA CN112008982B (en) | 2020-08-14 | 2020-08-14 | Model printing device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112008982A CN112008982A (en) | 2020-12-01 |
CN112008982B true CN112008982B (en) | 2023-03-21 |
Family
ID=73504564
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010821015.XA Active CN112008982B (en) | 2020-08-14 | 2020-08-14 | Model printing device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112008982B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113693753B (en) * | 2021-08-23 | 2022-10-18 | 上海六普医疗科技有限公司 | Simulated full-color transparent full-porcelain tooth 3D printing method |
CN114211753B (en) * | 2021-10-11 | 2024-04-02 | 广州黑格智造信息科技有限公司 | Preprocessing method and device for three-dimensional printing data and digital operation platform |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019089340A (en) * | 2013-02-12 | 2019-06-13 | カーボン,インコーポレイテッド | Method and apparatus for three-dimensional fabrication by supplying through carrier |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103116754B (en) * | 2013-01-24 | 2016-05-18 | 浙江大学 | Batch images dividing method and system based on model of cognition |
DE102013218437A1 (en) * | 2013-09-13 | 2015-03-19 | Siemens Aktiengesellschaft | Method for automatic or semi-automatic segmentation and device |
CN104091347A (en) * | 2014-07-26 | 2014-10-08 | 刘宇清 | Intracranial tumor operation planning and simulating method based on 3D print technology |
US10556418B2 (en) * | 2017-02-14 | 2020-02-11 | Autodesk, Inc. | Systems and methods of open-cell internal structure and closed-cell internal structure generation for additive manufacturing |
CN107599412A (en) * | 2017-09-14 | 2018-01-19 | 深圳市艾科赛龙科技股份有限公司 | A kind of three-dimensional modeling method based on institutional framework, system and threedimensional model |
CN109737890A (en) * | 2019-02-18 | 2019-05-10 | 常熟火星视觉信息科技有限公司 | A kind of three-dimensional scanner, three-dimensional modeling method and three-dimensional scanning device |
CN110481028B (en) * | 2019-04-03 | 2021-05-18 | 甘肃普锐特科技有限公司 | Method for manufacturing 3D printing medical simulation human body model |
CN110605853B (en) * | 2019-10-22 | 2022-01-14 | 珠海赛纳三维科技有限公司 | Three-dimensional organ model, printing method and printing device for three-dimensional organ model and printing equipment |
Also Published As
Publication number | Publication date |
---|---|
CN112008982A (en) | 2020-12-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
USRE49541E1 (en) | Method for defining a finish line of a dental prosthesis | |
CN104462650B (en) | A kind of hypostazation heart 3D model production methods of achievable external and internal compositionses | |
US8817332B2 (en) | Single-action three-dimensional model printing methods | |
Giannopoulos et al. | 3D printed ventricular septal defect patch: a primer for the 2015 Radiological Society of North America (RSNA) hands-on course in 3D printing | |
CN112008982B (en) | Model printing device | |
CN104271067A (en) | Feature-driven rule-based framework for orthopedic surgical planning | |
CN107067398A (en) | Complementing method and device for lacking blood vessel in 3 D medical model | |
US12118724B2 (en) | Interactive coronary labeling using interventional x-ray images and deep learning | |
CN113619119B (en) | Printing system, printing method, storage medium and three-dimensional model of three-dimensional object | |
US7792360B2 (en) | Method, a computer program, and apparatus, an image analysis system and an imaging system for an object mapping in a multi-dimensional dataset | |
US20180322809A1 (en) | Bio-model comprising a fluid system and method of manufacturing a bio-model comprising a fluid system | |
EP2142968B1 (en) | A method for the manufacturing of a reproduction of an encapsulated three-dimensional physical object and objects obtained by the method | |
US20190220974A1 (en) | Method of manufacturing a bio-model comprising a synthetic skin layer and bio-model comprising a synthetic skin layer | |
CN112201349A (en) | Orthodontic operation scheme generation system based on artificial intelligence | |
Andersson et al. | Digital 3D Facial Reconstruction Based on Computed Tomography | |
Chen | Designing Customized 3D Printed Models for Surgical Planning in Repair of Congenital Heart Defects | |
Tsioukas et al. | The long and winding road from CT and MRI images to 3D models | |
KR102287020B1 (en) | 3d virtual simulation system based on deep learning and operating method thereof | |
CN116385474B (en) | Tooth scanning model segmentation method and device based on deep learning and electronic equipment | |
WO2018178359A1 (en) | Device and method for sketch template generation or adaption | |
TW201933299A (en) | Medical preoperative simulation model and method for molding the same capable of accurately reflecting the pathological characteristics of a patient's organs | |
US20240312368A1 (en) | Method for manufacturing anatomical models adapted to simulate organs or parts of organs of a patient | |
US20230298272A1 (en) | System and Method for an Automated Surgical Guide Design (SGD) | |
Dotremont | From medical images to 3D model: processing and segmentation | |
Sears et al. | Anatomical Modeling at the Point of Care |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |