CN113344984A - Three-dimensional model registration method, equipment and storage medium - Google Patents
- Publication number
- CN113344984A (application CN202110655430.7A)
- Authority
- CN
- China
- Prior art keywords
- model
- registration
- dimensional
- data
- dimensional model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a three-dimensional model registration method, equipment, and a storage medium. The method comprises the following steps: separating the three-dimensional model data into a source model A0 and a target model B0; preprocessing A0, inputting the processed A0 into a trained MeshSegNet network for segmentation, and post-processing the segmented partial model to obtain A1; selecting a registration part B1 of the target model B0; performing coarse registration on A1 and B1 to obtain a linear transformation matrix and transforming A1 into A2; performing fine registration on A2 and B1 to obtain a linear transformation matrix and transforming A2 to obtain the registered three-dimensional model A3. The invention can register multi-modal three-dimensional model data whose precision and appearance differ greatly, effectively reduces the interference of non-overlapping parts of the data during registration, and improves the accuracy of the registration result; it places few requirements on the initial positions of the models, which improves robustness and widens the application range.
Description
Technical Field
The invention relates to the technical field of data processing, and in particular to a three-dimensional model registration method, registration equipment, and a storage medium.
Background
Registration of three-dimensional model data means that, given two sets of three-dimensional data (such as point clouds or surface meshes) whose shapes partially overlap, the two sets are aligned by a geometric transformation so that subsequent operations can be performed; it is widely applied in fields such as medical diagnosis, cultural-relic restoration, and remote sensing. Most three-dimensional registration techniques in general use at present are automatic: the spatial transformation between two three-dimensional data sets is determined rapidly by computation that exploits the geometric characteristics of the data. Classified by whether the object deforms, registration can be rigid or non-rigid: rigid registration does not consider deformation during the registration process, whereas non-rigid registration must account for deformation of the object. Rigid registration is relatively mature both at home and abroad; in particular, the ICP algorithm and its derivatives are widely applied, efficient, and accurate.
However, such algorithms require the initial positions to be similar; otherwise the convergence direction is uncertain, which reduces convergence speed and registration accuracy and may trap the search in a local optimum, yielding unreliable registration. In addition, multi-modal three-dimensional data acquired by different instruments suffer from an insufficient proportion of overlapping regions, many interfering parts, and inconsistent model precision, so directly applying ICP and its derivatives rarely achieves a satisfactory registration.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: under the premise of rigid registration, the registration of multi-modal three-dimensional data is poor because the initial positions are too far apart, the proportion of overlapping regions is insufficient, the model precision is inconsistent, and so on. To solve these problems, the invention provides a three-dimensional model registration method and system which, by combining a conventional algorithm with a deep-learning algorithm, can register multi-modal three-dimensional model data whose precision and appearance differ greatly; the interference of non-overlapping parts of the three-dimensional model data during registration is effectively reduced, and the accuracy of the registration result is improved; the method places few requirements on the initial positions of the models, which further improves its robustness and widens its application range.
The invention is realized by the following technical scheme:
a three-dimensional model registration method, comprising the steps of:
S1, dividing the three-dimensional model data to be registered into a source model A0 and a target model B0;
S2, preprocessing the source model A0, inputting the processed source model A0 into a trained MeshSegNet network for segmentation, and post-processing the segmented partial models to obtain a first model A1;
S3, selecting a registration part B1 of the target model B0;
S4, performing coarse registration on the first model A1 and the registration part B1 to obtain a first linear transformation matrix, and using the matrix to transform the first model A1 into a second model A2;
S5, performing fine registration on the second model A2 and the registration part B1 to obtain a second linear transformation matrix, and using the matrix to transform the second model A2 into the registered three-dimensional model A3.
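Steps S4 and S5 each produce a linear transformation matrix that is then applied to a model. As an illustrative sketch only (the helper name `apply_transform` is an assumption, not from the patent), applying a 4x4 homogeneous transform to an (N, 3) vertex array in NumPy could look like this:

```python
import numpy as np

def apply_transform(vertices: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous rigid transform T to an (N, 3) vertex array."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # (N, 4)
    return (homo @ T.T)[:, :3]

# Example: a pure translation by (1, 2, 3)
T = np.eye(4)
T[:3, 3] = [1.0, 2.0, 3.0]
pts = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
moved = apply_transform(pts, T)  # each point shifted by (1, 2, 3)
```

Both registration stages can reuse the same helper, first with the first linear transformation matrix and then with the second.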
Further preferably, in step S1, the three-dimensional model data includes three-dimensional point cloud data or three-dimensional surface mesh data including vertex information.
Further preferably, step S2 comprises the following steps:
S21, down-sampling the source model A0 and converting it into three-dimensional mesh data;
S22, inputting the preprocessed source model A0 into the trained MeshSegNet network for three-dimensional segmentation to obtain segmented partial models, wherein each vertex in the segmented models has a label value;
S23, post-processing the segmented models, the post-processing including but not limited to up-sampling, edge refinement based on graph segmentation, and the like.
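The down-sampling of S21 can be sketched as simple random vertex subsampling (an assumption for illustration; the patent does not specify the scheme, and real pipelines often prefer voxel-grid or farthest-point sampling):

```python
import numpy as np

def downsample_vertices(vertices, n_points=10000, seed=0):
    """Randomly down-sample an (N, 3) vertex array to at most n_points vertices."""
    rng = np.random.default_rng(seed)
    if len(vertices) <= n_points:
        return vertices
    idx = rng.choice(len(vertices), size=n_points, replace=False)
    return vertices[np.sort(idx)]  # keep original vertex order

dense = np.random.default_rng(1).normal(size=(50000, 3))
sparse = downsample_vertices(dense, n_points=10000)
```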
Further preferably, in step S2, the MeshSegNet network includes the following modules:
a1. a multilayer perceptron module, composed of several one-dimensional convolution layers;
a2. a feature transformation module, composed of several one-dimensional convolution layers and tensor reshaping layers;
a3. a graph-constrained learning module, composed of several one-dimensional convolution layers and symmetric average pooling layers;
a4. other sub-modules, including but not limited to an up-sampling layer, a global max-pooling layer, and the like.
Further preferably, in step S2, the training of the MeshSegNet network includes the following steps:
b1. data preprocessing: performing data annotation, down-sampling, and data enhancement on several three-dimensional models;
b2. constructing the data set: dividing the preprocessed three-dimensional data into a training set and a validation set;
b3. training the network: taking the three-dimensional data in the data set as input and iteratively optimizing the network parameters according to precision indices such as the DSC.
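The DSC (Dice similarity coefficient) used as the precision index in b3 is 2|X∩Y|/(|X|+|Y|); a minimal NumPy version for two boolean label masks is sketched below (illustrative; per-class evaluation would loop over the label values):

```python
import numpy as np

def dice_coefficient(pred, target):
    """DSC = 2|X intersect Y| / (|X| + |Y|) for two boolean label masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

p = np.array([1, 1, 0, 0], bool)
t = np.array([1, 0, 1, 0], bool)
score = dice_coefficient(p, t)  # 2*1 / (2+2) = 0.5
```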
Further preferably, in step S3, the registration part B1 may be selected according to the vertex label values of the three-dimensional model data or according to spatial position.
More preferably, in step S4, the methods of performing coarse registration on the first model A1 and the registration part B1 include the Landmark algorithm, the PFH algorithm, the FPFH algorithm, or the 3DSC algorithm; that is, the coarse registration method is based on local or global features, such as PFH, FPFH, or 3DSC.
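A Landmark-style coarse registration reduces to a least-squares rigid fit between paired landmark points; the sketch below uses the Kabsch/SVD solution under the assumption of known one-to-one correspondences (illustrative only, not the patent's exact procedure):

```python
import numpy as np

def landmark_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping paired landmarks src -> dst (Kabsch)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: rotate + translate a few landmarks, then recover the motion.
rng = np.random.default_rng(0)
src = rng.normal(size=(6, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.5, -1.0, 2.0])
R, t = landmark_rigid_transform(src, dst)
err = np.abs(src @ R.T + t - dst).max()
```

The feature-based variants (PFH, FPFH, 3DSC) differ only in how the correspondences are found before this rigid fit.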
More preferably, in step S5, the method of performing fine registration on the second model A2 and the registration part B1 is the ICP algorithm or a derivative of the ICP algorithm.
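The ICP fine registration alternates nearest-neighbour matching with a rigid least-squares fit; a minimal point-to-point sketch follows (brute-force O(N·M) matching and a fixed iteration count are simplifying assumptions — production code would use a k-d tree and a convergence test):

```python
import numpy as np

def icp(src, dst, iters=30):
    """Minimal point-to-point ICP. Returns the 4x4 transform aligning src to dst."""
    T = np.eye(4)
    cur = src.copy()
    for _ in range(iters):
        # 1. nearest neighbour in dst for each point of cur (brute force)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]
        # 2. best rigid fit for the current correspondences (Kabsch)
        sc, mc = cur.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((cur - sc).T @ (matched - mc))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mc - R @ sc
        step = np.eye(4); step[:3, :3] = R; step[:3, 3] = t
        cur = cur @ R.T + t
        T = step @ T                            # accumulate the transform
    return T

rng = np.random.default_rng(2)
dst = rng.normal(size=(200, 3))
angle = 0.05  # small motion: ICP needs a reasonable initial position (see S4)
Rt = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
src = (dst - np.array([0.02, 0.0, 0.01])) @ Rt  # rigidly perturbed copy of dst
T = icp(src, dst)
aligned = src @ T[:3, :3].T + T[:3, 3]
residual = np.abs(aligned - dst).max()
```

The example also illustrates why the coarse registration of S4 is needed first: ICP only converges reliably from a nearby initial pose.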
An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the three-dimensional model registration method described above when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the three-dimensional model registration method as described above.
The invention has the following advantages and beneficial effects:
the invention aims to provide a three-dimensional model registration method and a three-dimensional model registration device, aiming at solving the technical problem of poor registration effect caused by factors such as too far distance of initial positions, insufficient proportion of overlapped regions, inconsistent model precision and the like in a multi-mode three-dimensional data registration process on the premise of rigid registration.
According to the three-dimensional model registration method and system provided by the invention, the method combines a deep neural network and a traditional registration algorithm, and by selecting an effective registration part in the three-dimensional model, registration alignment can be realized aiming at multi-mode three-dimensional model data with large differences in model precision and model appearance, so that the interference of non-overlapping parts in the three-dimensional model data in the registration process can be effectively reduced, and the accuracy of the registration result is improved. The method has low requirement on the initial position of the model, further improves the robustness of the method and widens the application range. The system for realizing the method also has the same technical effect.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
fig. 1 is a flowchart of a three-dimensional model registration method provided in embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of a multi-modal dentition three-dimensional model provided in embodiment 1 of the present invention;
fig. 3 is a diagram of a MeshSegNet network architecture according to embodiment 1 of the present invention;
fig. 4 is a three-dimensional segmentation effect diagram of the MeshSegNet network provided in embodiment 1 of the present invention;
fig. 5 is a three-dimensional registration effect diagram provided in embodiment 1 of the present invention;
fig. 6 is a graph of the registration effect obtained using a conventional registration algorithm.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention.
Example 1
The embodiment provides a three-dimensional model registration method, as shown in fig. 1, the specific steps are as follows:
s1, collecting three-dimensional model data to be registered, and dividing the three-dimensional model data into a source model A0With the object model B0. Taking the registration of a multi-modal three-dimensional model of a tooth as an example, as shown in FIG. 2, a source model A0Scanning a tooth three-dimensional model of a certain patient by an oral cavity laser scanner; and the object model B0A three-dimensional model of the teeth of the same patient, a target model B, reconstructed for CBCT acquisition0Each tooth in (a) has a particular tag value. From the shape, the source model A0A crown portion containing teeth and a portion of gums; and the object model B0Including the entire portion of the tooth, not including the gums. Thus, although the source model A0With the object model B0The tooth three-dimensional model is a tooth three-dimensional model of the same patient, but the shape difference of the tooth three-dimensional model is larger, and the overlapping part is less. From a dataformIn the above view, the source model A0Object model B0The three-dimensional model is acquired by two different instruments and equipment and is a three-dimensional model of two modes, so that the number of vertexes contained in the model is different, and the accuracy is different.
S2, preprocessing the source model A0, inputting the processed source model A0 into the trained MeshSegNet network for segmentation, and post-processing the segmented partial models to obtain a first model A1. The specific implementation steps are as follows:
S21, down-sampling the source model A0 and converting it into three-dimensional mesh data;
S22, inputting the preprocessed source model A0 into the trained MeshSegNet network for three-dimensional segmentation to obtain segmented partial models, wherein each vertex in the segmented models has a label value;
S23, post-processing the segmented models, including but not limited to up-sampling, edge refinement based on graph segmentation, and the like.
As shown in fig. 3, the MeshSegNet network architecture consists of the following modules:
a1. a multilayer perceptron module (MLP), consisting of several one-dimensional convolution layers;
a2. a feature transformation module (FTM), consisting of several one-dimensional convolution layers and tensor reshaping layers;
a3. a graph-constrained learning module (GLM), consisting of several one-dimensional convolution layers and symmetric average pooling layers;
a4. other sub-modules, including an up-sampling layer, a global max-pooling layer, and the like.
As shown in FIG. 4, the three-dimensional segmentation result is obtained by inputting the source model A0 into the trained MeshSegNet network; the vertices of crown models at different positions are given different label values to distinguish them.
The training procedure for the MeshSegNet network is as follows:
b1. data preprocessing: performing data annotation, down-sampling, and data enhancement on the 100 pairs of three-dimensional tooth models acquired by the intraoral laser scanner. During annotation, binary labeling can be carried out according to crown versus gum, or multi-class labeling according to the medical positions of the crowns; during down-sampling, the number of points retained in each model can be set to 10000; the data-enhancement methods include translation, rotation, and adding local noise points.
b2. constructing the data set: dividing the preprocessed three-dimensional data into a training set and a validation set at a ratio of 7:3.
b3. training the network: taking the three-dimensional data in the data set as input and iteratively optimizing the network parameters according to precision indices such as the DSC.
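The data-enhancement operations of b1 (translation, rotation, adding local noise points) can be sketched as a single NumPy augmentation function; the parameter ranges below are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def augment(points, rng):
    """One random augmentation of an (N, 3) point set: rotation about z,
    translation, and jitter ("local noise points") on a small subset."""
    theta = rng.uniform(0, 2 * np.pi)
    R = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0, 0, 1]])
    out = points @ R.T + rng.uniform(-5, 5, size=3)      # rotate + translate
    noisy = rng.choice(len(out), size=max(1, len(out) // 100), replace=False)
    out[noisy] += rng.normal(scale=0.1, size=(len(noisy), 3))  # local noise
    return out

rng = np.random.default_rng(0)
cloud = rng.normal(size=(1000, 3))
augmented = augment(cloud, rng)
```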
S3, selecting a registration part B1 of the target model B0. The registration part B1 can be selected according to the vertex label values of the different teeth in the target model B0: the teeth whose spatial positions correspond one-to-one with those in the first model A1 form the registration part B1. To further improve the registration effect, the registration part B1 may retain data for only the crown portions. The purpose of selecting a registration part B1 is to minimize, as far as possible, the interference with the registration result of the parts of the source model A0 and the target model B0 that differ in shape and precision.
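Selecting the registration part B1 by vertex label values amounts to masking the vertex array; a small sketch follows (the label convention here — 0 for gum, positive values for different crowns — is an assumption for illustration):

```python
import numpy as np

def select_registration_part(vertices, labels, keep_labels):
    """Keep only vertices whose label value is in keep_labels (e.g. crown labels)."""
    mask = np.isin(labels, list(keep_labels))
    return vertices[mask]

verts = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
labels = np.array([0, 1, 1, 2])   # 0 = gum; 1 and 2 = crowns of different teeth
B1 = select_registration_part(verts, labels, keep_labels={1, 2})
```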
S4, performing coarse registration on the first model A1 and the registration part B1 using the Landmark algorithm to obtain a first linear transformation matrix, and linearly transforming the first model A1 with this matrix to obtain a second model A2;
S5, performing fine registration on the second model A2 and the registration part B1 using the ICP algorithm to obtain a second linear transformation matrix, and linearly transforming the second model A2 with this matrix to obtain the registered three-dimensional model A3.
The final registration effect is shown in FIG. 5. It can be seen that, after combining the preprocessing steps with coarse and fine registration, the registered three-dimensional model A3 obtained by the final transformation is aligned with the crown portions of the target model B0, and the registration effect is good. By contrast, FIG. 6 shows the registration effect of a conventional algorithm that uses only coarse and fine registration; for a complex three-dimensional model, the conventional registration algorithm does not perform well.
Example 2
The present embodiment provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the three-dimensional model registration method provided in embodiment 1 when executing the computer program.
Example 3
The present embodiment provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the three-dimensional model registration method provided in embodiment 1.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (10)
1. A method of three-dimensional model registration, comprising the steps of:
S1, dividing the three-dimensional model data to be registered into a source model A0 and a target model B0;
S2, preprocessing the source model A0, inputting the processed source model A0 into a trained MeshSegNet network for segmentation, and post-processing the segmented partial models to obtain a first model A1;
S3, selecting a registration part B1 of the target model B0;
S4, performing coarse registration on the first model A1 and the registration part B1 to obtain a first linear transformation matrix, and using the matrix to transform the first model A1 into a second model A2;
S5, performing fine registration on the second model A2 and the registration part B1 to obtain a second linear transformation matrix, and using the matrix to transform the second model A2 into the registered three-dimensional model A3.
2. The method for registering three-dimensional models according to claim 1, wherein in step S1, the three-dimensional model data includes three-dimensional point cloud data or three-dimensional surface mesh data containing vertex information.
3. The three-dimensional model registration method according to claim 1, wherein the step S2 is implemented by the steps of:
S21, down-sampling the source model A0 and converting it into three-dimensional mesh data;
S22, inputting the preprocessed source model A0 into the trained MeshSegNet network for three-dimensional segmentation to obtain segmented partial models, wherein each vertex in the segmented models has a label value;
S23, post-processing the segmented models, the post-processing comprising up-sampling and/or edge refinement based on graph segmentation.
4. The three-dimensional model registration method according to claim 1, wherein in step S2, the MeshSegNet network comprises the following modules:
a1. the multilayer perceptron module is composed of a plurality of one-dimensional convolution layers;
a2. the characteristic conversion module is composed of a plurality of one-dimensional convolution layers and tensor reforming layers;
a3. the graph constraint learning module consists of a plurality of one-dimensional convolution layers and symmetrical average pooling layers;
a4. other sub-modules, including an upsampling layer and/or a global max-pooling layer.
5. The three-dimensional model registration method according to claim 1, wherein in step S2, the training of the MeshSegNet network comprises the following steps:
b1. data preprocessing, namely performing data annotation, down-sampling and data enhancement on a plurality of three-dimensional model data;
b2. constructing a data set, and dividing the preprocessed multiple three-dimensional data into a training set and a verification set;
b3. and training the network, taking the three-dimensional data in the data set as input, and iteratively optimizing network parameters according to the precision index.
6. The three-dimensional model registration method according to claim 1, wherein in step S3 the method for selecting the registration part B1 comprises: selecting according to the vertex label values of the three-dimensional model data, or selecting according to spatial position.
7. The three-dimensional model registration method according to claim 1, wherein in step S4 the method of performing coarse registration on the first model A1 and the registration part B1 comprises the Landmark algorithm, the PFH algorithm, the FPFH algorithm, or the 3DSC algorithm.
8. The three-dimensional model registration method according to claim 1, wherein in step S5 the method of performing fine registration on the second model A2 and the registration part B1 comprises the ICP algorithm or a derivative of the ICP algorithm.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements a three-dimensional model registration method as claimed in any one of claims 1 to 8 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a method of registering a three-dimensional model as set forth in any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110655430.7A CN113344984A (en) | 2021-06-11 | 2021-06-11 | Three-dimensional model registration method, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113344984A true CN113344984A (en) | 2021-09-03 |
Family
ID=77476993
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110655430.7A Pending CN113344984A (en) | 2021-06-11 | 2021-06-11 | Three-dimensional model registration method, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113344984A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110060556A1 (en) * | 2009-06-30 | 2011-03-10 | Srikumar Ramalingam | Method for Registering 3D Points with 3D Planes |
US20110235898A1 (en) * | 2010-03-24 | 2011-09-29 | National Institute Of Advanced Industrial Science And Technology | Matching process in three-dimensional registration and computer-readable storage medium storing a program thereof |
CN103356155A (en) * | 2013-06-24 | 2013-10-23 | 清华大学深圳研究生院 | Virtual endoscope assisted cavity lesion examination system |
CN107123164A (en) * | 2017-03-14 | 2017-09-01 | 华南理工大学 | Keep the three-dimensional rebuilding method and system of sharp features |
CN110838173A (en) * | 2019-11-15 | 2020-02-25 | 天津医科大学 | Three-dimensional texture feature-based individual brain covariant network construction method |
- 2021-06-11: CN application CN202110655430.7A filed; patent CN113344984A (en), status active, Pending
Non-Patent Citations (1)
Title |
---|
Chunfeng Lian et al., "Deep Multi-Scale Mesh Feature Learning for Automated Labeling of Raw Dental Surfaces From 3D Intraoral Scanners", IEEE Transactions on Medical Imaging |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7493464B2 (en) | Automated canonical pose determination for 3D objects and 3D object registration using deep learning | |
Hering et al. | mlvirnet: Multilevel variational image registration network | |
CN112200843B (en) | Super-voxel-based CBCT and laser scanning point cloud data tooth registration method | |
CN111862171B (en) | CBCT and laser scanning point cloud data tooth registration method based on multi-view fusion | |
CN113077471A (en) | Medical image segmentation method based on U-shaped network | |
US11302094B2 (en) | System and method for segmenting normal organ and/or tumor structure based on artificial intelligence for radiation treatment planning | |
CN106919944A (en) | A kind of wide-angle image method for quickly identifying based on ORB algorithms | |
CN111685899A (en) | Dental orthodontic treatment monitoring method based on intraoral images and three-dimensional models | |
CN114494296A (en) | Brain glioma segmentation method and system based on fusion of Unet and Transformer | |
CN111265317B (en) | Tooth orthodontic process prediction method | |
CN109325951A (en) | A method of based on the conversion and segmenting medical volume for generating confrontation network | |
CN115830163A (en) | Progressive medical image cross-mode generation method and device based on deterministic guidance of deep learning | |
CN115965641A | Pharyngeal image segmentation and positioning method based on the DeepLabV3+ network | |
CN113706514B (en) | Focus positioning method, device, equipment and storage medium based on template image | |
CN117726614B (en) | Quality perception network and attention-like Siamese network collaborative medical fusion image quality evaluation method | |
Yang et al. | ImplantFormer: vision transformer-based implant position regression using dental CBCT data | |
Hu et al. | Mpcnet: Improved meshsegnet based on position encoding and channel attention | |
CN113344984A (en) | Three-dimensional model registration method, equipment and storage medium | |
CN114782454B (en) | Image recognition system for preoperative navigation of pelvic tumor images | |
KR102476888B1 (en) | Artificial diagnostic data processing apparatus and its method in digital pathology images | |
CN116310335A (en) | Method for segmenting pterygium focus area based on Vision Transformer | |
CN115526898A (en) | Medical image segmentation method | |
CN115239740A (en) | GT-UNet-based full-center segmentation algorithm | |
CN113850710A (en) | Cross-modal medical image accurate conversion method | |
CN112967295A (en) | Image processing method and system based on residual error network and attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210903 |