CN117437350B - Three-dimensional reconstruction system and method for preoperative planning - Google Patents

Three-dimensional reconstruction system and method for preoperative planning

Info

Publication number
CN117437350B
CN117437350B (application CN202311173175.8A)
Authority
CN
China
Prior art keywords
image
value
dimensional reconstruction
dimensional
graying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311173175.8A
Other languages
Chinese (zh)
Other versions
CN117437350A (en)
Inventor
蔡惠明
李长流
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Nuoyuan Medical Devices Co Ltd
Original Assignee
Nanjing Nuoyuan Medical Devices Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Nuoyuan Medical Devices Co Ltd filed Critical Nanjing Nuoyuan Medical Devices Co Ltd
Priority to CN202311173175.8A priority Critical patent/CN117437350B/en
Publication of CN117437350A publication Critical patent/CN117437350A/en
Application granted granted Critical
Publication of CN117437350B publication Critical patent/CN117437350B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20021 Dividing image into blocks, subimages or windows
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/41 Medical

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a three-dimensional reconstruction system and method for preoperative planning, belonging to the technical field of medical images. Specifically, an image of the focus position of a patient is collected and subjected to graying and denoising treatment; the features of the processed image are extracted by an image segmentation strategy, and the image of the focus position is segmented into a plurality of parts in combination with an attention mechanism; the matching cost of the segmented image slices is then calculated by a three-dimensional reconstruction strategy, and the slices are fitted and reconstructed through cost aggregation and parallax optimization, so that the focus area of the patient is displayed in a three-dimensional visualization mode. The edges are processed when the image is segmented, and edge pixels are used when the segmented image slices are matched, so that the segmentation precision and matching precision are improved, the three-dimensional image of the focus position after reconstruction is accurately displayed, the surgical success rate is improved, and the surgical risk is greatly reduced.

Description

Three-dimensional reconstruction system and method for preoperative planning
Technical Field
The invention belongs to the technical field of medical images, and particularly relates to a three-dimensional reconstruction system and method for preoperative planning.
Background
In recent years, with the growing number of patients, medical image processing technology has attracted increasing attention as a key auxiliary means of diagnosis and treatment, and the design of segmentation and reconstruction methods has long been an active research focus in medical image processing and computer-aided diagnosis. Performing three-dimensional imaging of the focus position of a patient before an operation is of great significance for preoperative planning and guidance: it can greatly improve the probability of surgical success and reduce intraoperative risk. However, noise is often superimposed during imaging, which degrades the quality of the acquired image and limits segmentation precision. Existing automatic or semi-automatic segmentation methods have difficulty accurately predicting the boundary of the region of interest, and the complex nonlinear surfaces of the patient's focus tissue are difficult to reproduce in reconstruction. In the matching of cut image slices, the complexity of the focus tissue further limits matching precision, so that the three-dimensional image of the focus position cannot be displayed accurately.
For example, Chinese patent publication No. CN107895364B discloses a three-dimensional reconstruction system for virtual preoperative planning. The scheme preprocesses CT or MRI images, segments the denoised, smoothed and enhanced images, and fills the cavities introduced by segmentation to obtain a complete image without loss of information. The segmented complete image is then reconstructed in three dimensions: the improved MC algorithm adds 9 configurations to the 15 basic topological configurations of the original algorithm, overcoming the connection defects of the original algorithm, so that the fitted surface is more complete and less prone to cavities. Finally, smoothing is introduced so that the fitted surface is smooth and flat, and the reconstructed model can be used directly for subsequent virtual surgical cutting and collision detection.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a three-dimensional reconstruction system and method for preoperative planning with comprehensive functions. Images of the focus position of a patient are acquired and subjected to graying and denoising treatment; the features of the processed image are extracted by an image segmentation strategy, and the image of the focus position is segmented into a plurality of parts in combination with an attention mechanism. The matching cost of the segmented image slices is then calculated by a three-dimensional reconstruction strategy, and the slices are fitted and reconstructed through cost aggregation and parallax optimization, so that the focus area of the patient is displayed in a three-dimensional visualization mode. The edges are processed when the image is segmented, and edge pixels are used when the segmented image slices are matched, which improves the segmentation precision and matching precision, accurately displays the three-dimensional image of the focus position after reconstruction, improves the planning precision before the operation, improves the surgical success rate, and greatly reduces the surgical risk.
In order to achieve the above purpose, the present invention provides the following technical solutions:
a three-dimensional reconstruction method for preoperative planning, comprising the specific steps of:
Collecting an image of the focus position of a patient, and carrying out graying and denoising treatment on the image;
Extracting the characteristics of the processed image by utilizing an image segmentation strategy, and segmenting the image of the processed focus position into a plurality of parts by combining an attention mechanism;
Calculating the matching cost of the segmented image slices by utilizing a three-dimensional reconstruction strategy, and carrying out fitting reconstruction through cost aggregation and parallax optimization, so that the focus area of the patient is displayed in a three-dimensional visualization mode.
Specifically, the specific method for the image segmentation strategy comprises the following steps:
Step S101: placing the processed image into a trained neural network model, splicing shallow layer features and high layer features in the neural network model, performing convolution operation to obtain a gray value feature value, and setting the extracted gray value feature value as Query;
Step S102: combining an attention mechanism, carrying out weighted calculation on the extracted gray value feature Query, capturing the edge gray value feature of the whole image, and obtaining the whole gray value feature of the image;
Step S103: counting the number of pixels at each gray level in the image subjected to the graying and denoising treatment; with n_i set as the number of pixels with gray level i, the probability of gray level i is p_i = n_i/N, where N = Σ_i n_i is the total number of pixels and Σ_i p_i = 1; the mean gray value of the whole processed image is set as m_G, the gray threshold as k, the probability that a pixel is assigned to class A as pg_A(k), the average gray of the pixels assigned to A as m_A(k), the probability that a pixel is assigned to class B as pg_B(k), and the average gray of the pixels assigned to B as m_B(k);
Step S104: the between-class variance to be maximized is calculated as:
σ²(k) = pg_A(k) × pg_B(k) × (m_A(k) − m_B(k))²
wherein pg_A(k) = Σ_{i=0..k} p_i, pg_B(k) = Σ_{i=k+1..L−1} p_i = 1 − pg_A(k), m_A(k) = (1/pg_A(k)) Σ_{i=0..k} i·p_i, m_B(k) = (1/pg_B(k)) Σ_{i=k+1..L−1} i·p_i, L is the number of gray levels, and pg_A(k)·m_A(k) + pg_B(k)·m_B(k) = m_G;
Pixels are then classified according to the threshold k that maximizes the calculated variance, and the image of the focus position is segmented by combining the gray value feature values with the pixel classification.
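By way of illustration, steps S103-S104 amount to an Otsu-style search for the threshold k that maximizes the between-class variance. A minimal Python/numpy sketch follows; the function name, the 8-bit gray-level range and the return convention are assumptions made for this example only.

    import numpy as np

    def otsu_threshold(gray_img):
        # gray_img: 2-D uint8 array after graying and denoising
        hist = np.bincount(gray_img.ravel(), minlength=256).astype(np.float64)
        p = hist / hist.sum()                      # p_i = n_i / N
        levels = np.arange(256)
        m_G = (levels * p).sum()                   # mean gray value of the whole image
        best_k, best_sigma2 = 0, -1.0
        for k in range(255):
            pg_A = p[:k + 1].sum()                 # probability of class A (gray <= k)
            pg_B = 1.0 - pg_A                      # probability of class B (gray > k)
            if pg_A == 0.0 or pg_B == 0.0:
                continue
            sum_A = (levels[:k + 1] * p[:k + 1]).sum()
            m_A = sum_A / pg_A                     # average gray of class A
            m_B = (m_G - sum_A) / pg_B             # average gray of class B
            sigma2 = pg_A * pg_B * (m_A - m_B) ** 2   # between-class variance
            if sigma2 > best_sigma2:
                best_k, best_sigma2 = k, sigma2
        return best_k

    # Usage: binary_mask = gray_img > otsu_threshold(gray_img)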
Specifically, the three-dimensional reconstruction strategy specifically comprises the following steps:
Step S201: the single-sided edge pixel point sets of the segmented image slice T1 and any image slice T2 are set to P and Q, with P = (p_1, p_2, ..., p_i, ..., p_m) and Q = (q_1, q_2, ..., q_j, ..., q_n), where p_i denotes the i-th edge pixel point in slice T1, q_j denotes the j-th edge pixel point in slice T2, and their gray values are λ_P and λ_Q respectively; the initial matching cost of the segmented image slices is calculated, the cost value calculation formula being as follows:
wherein a cost value control parameter is used in the formula, |·| denotes the absolute value function, and e denotes the base of the natural logarithm;
Step S202: cost aggregation; the gray difference threshold is set as η, the pixel distance difference threshold as μ, and the difference threshold of the single-side edge pixel point sets of slices T1 and T2 as σ, the principle formula of pixel aggregation being as follows:
wherein d(p_i, q_j) denotes the pixel distance between pixel point p_i and pixel point q_j;
Step S203: according to the cost values, the aggregated cost value is calculated by the following formula:
wherein w indexes the w-th edge pixel point in slices T1 and T2, p_w denotes the w-th edge pixel point in slice T1, and q_w denotes the w-th edge pixel point in slice T2;
Step S204: according to steps S201-S203, the aggregated cost values of the four edges of the cut image slice T1 are calculated, and the parallax value S is taken as the average of the minimum cost values of the four sides, namely S = (min C_zs + min C_zx + min C_zl + min C_zr)/4, wherein min C_zs denotes the minimum aggregated cost value of the upper edge of the segmented image slice T1, min C_zx denotes that of the lower edge, min C_zl denotes that of the left edge, and min C_zr denotes that of the right edge;
step S205: and selecting the matched segmented image slices according to the parallax value S, and performing three-dimensional reconstruction to display the focus area of the patient in a three-dimensional visual mode.
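The cost, aggregation and parallax rules in steps S201-S204 are described above in words; the following Python/numpy sketch is one hedged reading of them, not the patent's exact formulas. It assumes that the initial cost is an exponential function of the absolute gray difference scaled by a control parameter gamma, that aggregation sums the costs of edge-pixel pairs passing the η and μ thresholds, and that the parallax value S is the mean of the four per-side minimum aggregated costs; all names and default values are illustrative.

    import numpy as np

    def initial_cost(lam_p, lam_q, gamma=10.0):
        # Assumed form of the initial matching cost: an exponential of the absolute
        # gray difference, with gamma as the cost value control parameter.
        return 1.0 - np.exp(-np.abs(lam_p - lam_q) / gamma)

    def aggregate_cost(edge1, edge2, eta=8.0, mu=2):
        # edge1, edge2: 1-D arrays of gray values along one side of slices T1 and T2.
        # Several neighbouring edge pixels (within distance mu) are compared rather
        # than the single outermost pixel, to reduce the influence of cutting noise.
        n = min(len(edge1), len(edge2))
        total = 0.0
        for w in range(n):
            lo, hi = max(0, w - mu), min(n, w + mu + 1)
            diffs = np.abs(edge2[lo:hi] - edge1[w])
            if diffs.min() <= eta:                         # gray-difference threshold eta
                total += initial_cost(edge1[w], edge2[lo:hi][diffs.argmin()])
            else:
                total += 1.0                               # penalise unmatched edge pixels
        return total

    def parallax_value(t1_sides, candidates_per_side):
        # t1_sides: dict side -> edge gray values of T1, sides 'top','bottom','left','right'
        # candidates_per_side: dict side -> list of candidate edges from the other slices
        mins = [min(aggregate_cost(t1_sides[s], c) for c in candidates_per_side[s])
                for s in ('top', 'bottom', 'left', 'right')]
        return sum(mins) / 4.0                             # S = mean of the four side minima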
Specifically, the graying and denoising processes comprise graying the color image and suppressing the Gaussian noise and Rician noise introduced during imaging.
Specifically, the neural network model in step S101 is a DenseNet network.
Specifically, the attention mechanism in step S102 includes: channel attention mechanisms and spatial attention mechanisms.
Specifically, the channel attention mechanism is used for weighting operation in the channel domain of the feature map.
Specifically, the spatial attention mechanism is used for converting various deformation data in space and automatically capturing regional characteristics.
Specifically, the three-dimensional reconstruction in step S205 uses cubic convolution interpolation to reconstruct the matched image slices in three dimensions.
A three-dimensional reconstruction system for preoperative planning, comprising:
The image acquisition equipment is used for shooting focus position images of a patient;
The image processing unit is used for carrying out graying and denoising treatment on the focus position image of the patient, extracting the characteristics of the image subjected to the graying and denoising treatment by utilizing an image segmentation strategy, segmenting the image of the focus position after the treatment into a plurality of parts by combining an attention mechanism, solving the matching of segmented image slices by utilizing a three-dimensional reconstruction strategy, and carrying out fitting reconstruction by cost aggregation and parallax optimization so as to enable the focus region of the patient to be displayed in a three-dimensional visual mode;
The three-dimensional imaging device is used for displaying the three-dimensional reconstructed focus position image of the patient in a three-dimensional visual form.
The image acquisition apparatus includes: nuclear magnetic resonance equipment, CT equipment and PET equipment.
An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of a three-dimensional reconstruction method for preoperative planning.
A computer-readable storage medium having stored thereon computer instructions which, when executed, perform the steps of a three-dimensional reconstruction method for preoperative planning.
Compared with the prior art, the invention has the beneficial effects that:
1. The invention optimizes and improves the architecture, operation steps and flow of the three-dimensional reconstruction system for preoperative planning; the system has a simple flow and low investment, operation, production and working costs, and improves the three-dimensional reconstruction precision.
2. The invention provides a three-dimensional reconstruction method for preoperative planning. An image of the focus position of a patient is collected and subjected to graying and denoising treatment; the features of the processed image are extracted by an image segmentation strategy, and the image of the focus position is segmented into a plurality of parts in combination with an attention mechanism; the matching cost of the segmented image slices is calculated by a three-dimensional reconstruction strategy, and the slices are fitted and reconstructed through cost aggregation and parallax optimization, so that the focus area of the patient is displayed in a three-dimensional visualization mode. The edges are processed when the image is segmented, and edge pixels are used when the segmented image slices are matched, which improves the segmentation precision and matching precision, accurately displays the three-dimensional image of the focus position after reconstruction, improves the planning precision before the operation, improves the surgical success rate, and greatly reduces the surgical risk.
3. The three-dimensional reconstruction strategy provided by the method calculates the matching cost between the edges of the image slices, performs cost aggregation, calculates the edge cost of the four sides of each slice, and takes the average of the minimum edge cost values of the four sides. Processing the edges effectively reduces the influence of noise on them and greatly improves the matching accuracy, so that the subsequent three-dimensional reconstruction accurately displays the focus position of the patient and presents the three-dimensional image even in a complex environment.
Drawings
FIG. 1 is a flow chart of a three-dimensional reconstruction method for preoperative planning of the present invention;
FIG. 2 is a graph of edge and edge matching of a segmented image slice according to the present invention;
FIG. 3 is a schematic diagram of a three-dimensional reconstruction system for preoperative planning in accordance with the present invention;
Fig. 4 is an electronic device diagram of the three-dimensional reconstruction method for preoperative planning of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments.
Example 1
Referring to fig. 1, an embodiment of the present invention is provided: a three-dimensional reconstruction method for preoperative planning, comprising the specific steps of:
Collecting an image of the focus position of a patient, and carrying out graying and denoising treatment on the image;
Extracting features of the image subjected to graying and denoising treatment by utilizing an image segmentation strategy, and segmenting the image of the focus position after treatment into a plurality of parts by combining an attention mechanism;
And solving the matching cost of the segmented image slices by utilizing a three-dimensional reconstruction strategy, and carrying out fitting reconstruction on the segmented image slices by cost aggregation and parallax optimization to display the focus area of the patient in a three-dimensional visualization mode.
The specific method of the image segmentation strategy comprises the following steps:
Step S101: putting the image subjected to graying and denoising treatment into a trained neural network model, splicing shallow layer features and high layer features in the neural network model, then performing convolution operation to obtain a gray value feature value, and setting the extracted gray value feature value as Query;
Step S102: combining an attention mechanism, carrying out weighted calculation on the extracted gray value characteristic value Query, capturing the edge gray value characteristic of the whole image, and obtaining the whole gray value characteristic of the image;
In a conventional convolutional neural network, the objects operated on are usually the feature map and the network parameters, and the parameters are fixed and shared during the forward pass. It is therefore difficult for the feature map to undergo second-order operations such as taking a modulus or computing correlations; although such operations can be approximately fitted at a macroscopic level by a Fourier-expansion approach, the feasibility is relatively low in terms of the complexity and efficiency of network fitting. How to design a self-interaction mechanism between features within the network, so as to improve the network's ability to characterize and construct features, is therefore a problem worth considering. Intuitively, the attention mechanism is an operation that correlates features and reassigns their weights; it is described as a weighted summation of features that focuses on important features and suppresses the rest. Analyzed more essentially, the important features are those highly correlated with the task, and the advantage of the mechanism is that self-interaction between features is achieved.
Step S103: counting the number of pixels at each gray level in the image subjected to the graying and denoising treatment; with n_i set as the number of pixels with gray level i, the probability of gray level i is p_i = n_i/N, where N = Σ_i n_i is the total number of pixels and Σ_i p_i = 1; the mean gray value of the whole processed image is set as m_G, the gray threshold as k, the probability that a pixel is assigned to class A as pg_A(k), the average gray of the pixels assigned to A as m_A(k), the probability that a pixel is assigned to class B as pg_B(k), and the average gray of the pixels assigned to B as m_B(k);
Step S104: the between-class variance to be maximized is calculated as:
σ²(k) = pg_A(k) × pg_B(k) × (m_A(k) − m_B(k))²
wherein pg_A(k) = Σ_{i=0..k} p_i, pg_B(k) = Σ_{i=k+1..L−1} p_i = 1 − pg_A(k), m_A(k) = (1/pg_A(k)) Σ_{i=0..k} i·p_i, m_B(k) = (1/pg_B(k)) Σ_{i=k+1..L−1} i·p_i, L is the number of gray levels, and pg_A(k)·m_A(k) + pg_B(k)·m_B(k) = m_G;
Pixels are then classified according to the threshold k that maximizes the calculated variance, and the image of the focus position after the graying and denoising treatment is segmented by combining the gray value feature values with the pixel classification.
Example 2
Referring to fig. 2, another embodiment of the present invention is provided: the three-dimensional reconstruction strategy comprises the following specific steps:
Step S201: the single-sided edge pixel point sets of the segmented image slice T1 and any image slice T2 are set to P and Q, with P = (p_1, p_2, ..., p_i, ..., p_m) and Q = (q_1, q_2, ..., q_j, ..., q_n), where p_i denotes the i-th edge pixel point in slice T1, q_j denotes the j-th edge pixel point in slice T2, and their gray values are λ_P and λ_Q respectively; the initial matching cost of the segmented image slices is calculated, the cost value calculation formula being as follows:
wherein a cost value control parameter is used in the formula, and |·| denotes the absolute value function;
Another method of cost calculation is the AD method, which determines the matching cost by averaging the absolute differences of the three color components of corresponding pixel points in the left and right views; the calculation formula is:
C_AD(P, d) = (1/3) Σ_{i∈{R,G,B}} |I_i^L(P) − I_i^R(Pd)|
wherein C_AD(P, d) denotes the cost value of the AD method, P denotes a pixel point in the left view, Pd denotes the pixel point in the right view corresponding to P under disparity d, I_i^L(P) denotes the R, G and B values of pixel point P in the left view, and I_i^R(Pd) denotes the R, G and B values of pixel point Pd in the right view.
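A short sketch of the AD cost described above, assuming 8-bit RGB views and a purely horizontal disparity shift; the function name and index convention are illustrative.

    import numpy as np

    def ad_cost(left_rgb, right_rgb, y, x, d):
        # left_rgb, right_rgb: H x W x 3 arrays; assumes x - d stays inside the image.
        p = left_rgb[y, x].astype(np.float64)         # (R, G, B) of pixel P in the left view
        pd = right_rgb[y, x - d].astype(np.float64)   # (R, G, B) of pixel Pd in the right view
        return np.abs(p - pd).mean()                  # average absolute difference of the 3 channels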
Step S202: cost aggregation; the gray difference threshold is set as η, the pixel distance difference threshold as μ, and the difference threshold of the single-side edge pixel point sets of slices T1 and T2 as σ, the principle formula of pixel aggregation being as follows:
wherein d(p_i, q_j) denotes the pixel distance between pixel point p_i and pixel point q_j;
Regarding the choice of the pixel distance difference threshold μ, a value of 2 or 3 gives the best effect. The comparison of edge pixels must not be restricted to the outermost pixel alone, because noise is introduced when the picture is cut and would otherwise affect the matching precision; comparing several edge pixels effectively reduces the noise error, so that matching and three-dimensional reconstruction can be carried out more accurately.
Step S203: according to the cost values, the aggregated cost value is calculated by the following formula:
wherein w indexes the w-th edge pixel point in slices T1 and T2, p_w denotes the w-th edge pixel point in slice T1, and q_w denotes the w-th edge pixel point in slice T2;
Step S204: according to steps S201-S203, the aggregated cost values of the four edges of the cut image slice T1 are calculated, and the parallax value S is taken as the average of the minimum cost values of the four sides, namely S = (min C_zs + min C_zx + min C_zl + min C_zr)/4, wherein min C_zs denotes the minimum aggregated cost value of the upper edge of the segmented image slice T1, min C_zx denotes that of the lower edge, min C_zl denotes that of the left edge, and min C_zr denotes that of the right edge;
The advantage of selecting the parallax value S as the average of the minimum cost values of the four sides is as follows: image cutting separates the different regions of interest, and the spatial attention mechanism and channel attention mechanism strengthen the processing of the cut image edges; selecting the minimum cost value of each side reduces the matching cost and improves efficiency, while taking the average matches the same region of interest more accurately.
For example, in the matching of the liver and a blood vessel, the gray values of their edge pixels differ greatly, whereas the gray values within the same region of interest differ little; when the cost is calculated, the parallax value defined in this way therefore makes it easier to match slices belonging to the same region of interest.
Step S205: and selecting the matched segmented image slices according to the parallax value S, and performing three-dimensional reconstruction to display the focus area of the patient in a three-dimensional visual mode.
The three-dimensional reconstruction method is based on the recombination of slice edges. First, the matching cost of the edge pixels on one side of two matched cut slices of the patient's focus position is calculated and constrained by the aggregation principle, and the total matching cost value is computed; the total cost values are sorted and the minimum is taken. These steps are repeated so that all four sides of each cut slice are matched, and the average of the minimum total cost values of the four sides is used as the matching condition to screen out the slice pairs that satisfy it. After all slices have been matched and screened, they are recombined: each face of a slice is restored according to the length and width information of its edges, and the slices are then spliced and recombined into a three-dimensional image.
The graying and denoising processes consist of graying the color image and suppressing the Gaussian noise and Rician noise introduced during imaging.
Graying treatment: the R, G and B components of the image are converted to gray by a weighted average with fixed weights; the weighting formula is:
T_ry = 0.3 × T_R + 0.59 × T_G + 0.11 × T_B
wherein T_ry denotes the gray value of any pixel ry in the color image of the patient's focus position, T_R denotes the R value of pixel ry, T_G denotes the G value of pixel ry, and T_B denotes the B value of pixel ry.
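A minimal preprocessing sketch combining the weighted-average graying formula above with one reasonable choice of denoising filter (non-local means from scikit-image); the filter choice and parameter values are assumptions for illustration, not the specific filter used by the system.

    import numpy as np
    from skimage.restoration import denoise_nl_means, estimate_sigma

    def to_gray(rgb):
        # rgb: H x W x 3 uint8 color image of the focus position
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        return 0.3 * r + 0.59 * g + 0.11 * b          # T_ry = 0.3*T_R + 0.59*T_G + 0.11*T_B

    def denoise(gray):
        # Suppresses Gaussian/Rician-type noise; non-local means is used here as a stand-in.
        gray = gray.astype(np.float64) / 255.0
        sigma = float(np.mean(estimate_sigma(gray)))  # rough estimate of the noise level
        out = denoise_nl_means(gray, h=1.15 * sigma, sigma=sigma, fast_mode=True)
        return (out * 255.0).astype(np.uint8)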
The neural network model in step S101 is a DenseNet network.
The attention mechanism in step S102 includes: channel attention mechanisms and spatial attention mechanisms.
The channel attention mechanism is used to perform weighting operations in the channel domain of the feature map.
The channel attention mechanism can be understood from the perspective of digital image processing: a single convolution kernel acts as a filter for one type of feature, and an N-dimensional feature map is produced by N convolution operations, so weighting by channel is in effect an operation that screens the features. Mainstream channel attention methods are built on a feature compression operation over the spatial domain, generally implemented by pooling. The channel attention mechanism screens the feature types and finds those with high task relevance, thereby weighting the expressive capacity of the model and further suppressing the influence of noise.
The spatial attention mechanism is used to transform various deformation data in space and automatically capture region features.
The spatial attention mechanism differs from the channel attention mechanism in that it focuses on the responses of highly correlated locations under the same feature type; for example, if channel attention captures the edge features of the whole image, spatial attention then focuses on the edge pixels of the liver. Spatial attention is an operation complementary to translation invariance and can learn the statistical properties of the spatial distribution of the key features of the target. In addition, among spatial attention mechanisms, Non-Local attention can also break the local receptive field limitation of the convolutional neural network and establish associations between distant pixels.
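The two mechanisms can be illustrated schematically: channel attention re-weights whole feature maps by their pooled responses, while spatial attention re-weights positions within the channel-pooled map. The numpy sketch below shows only the weighting idea; the sigmoid re-weighting stands in for the learned layers of an actual attention module and is not the patent's trained network.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def channel_attention(feat):
        # feat: (C, H, W) feature map; each channel is weighted by its global response
        pooled = feat.mean(axis=(1, 2))               # global average pooling -> (C,)
        weights = sigmoid(pooled - pooled.mean())     # stand-in for the learned MLP
        return feat * weights[:, None, None]

    def spatial_attention(feat):
        # each spatial position is weighted by the channel-pooled response at that position
        pooled = feat.mean(axis=0)                    # (H, W)
        weights = sigmoid(pooled - pooled.mean())     # stand-in for the learned convolution
        return feat * weights[None, :, :]

    # Applied in sequence, e.g. feat = spatial_attention(channel_attention(feat))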
The three-dimensional reconstruction in step S205 uses cubic convolution interpolation to reconstruct the matched image slices in three dimensions.
Example 3
The three-dimensional reconstruction method comprises the following steps:
1) The isosurface extraction (MC) algorithm approximates an isosurface in a three-dimensional discrete data field by linear interpolation. A voxel is first defined as the cube formed by 8 neighbouring pixels arranged in order, and each non-boundary data point is shared by 8 voxels. The value at each vertex of a voxel can be higher than, equal to, or lower than the isovalue: a higher or equal value means the vertex lies inside the surface, and a lower value means it lies outside. Each vertex therefore has two states, and the 8 vertices together give 2^8 = 256 possible states per voxel, which can be reduced to 15 basic configurations by spatial translation invariance and symmetry. When one of the eight vertices of a cube is judged to lie on the boundary, a triangular surface is created and the cube is divided into two parts to form tetrahedral elements; the spatial positions of the vertices of the triangular patches are calculated by linear interpolation from the isovalue and the values at the two endpoints of the edge through which the patch passes. Each voxel thus contains a number of triangular patches, and the final triangular mesh surface model is constructed by traversing all voxels and combining all the triangular surfaces. The outer layer of the resulting hybrid mesh model is composed of tetrahedral units and the inner region of hexahedral units; the two layers share the same group of nodes without being connected, so the compliance of the finite elements is maintained. All contributed triangular patches are combined into mesh surface data that approximately represents the actual graphic effect of the current interior plane. The MC algorithm is intuitive in principle and its computations are mutually independent, which makes it very suitable for three-dimensional reconstruction of regular structures; however, the large number of triangular patches can cause ambiguity in the topological connections between data during the operation of the algorithm, a problem that needs to be addressed by parallel computation. For MRI images, the characteristics of digital imaging, the complexity of the anatomy and tissue specificity lead to unavoidable noise interference, so the application of the MC algorithm in the medical field is limited.
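The isosurface extraction described above is available off the shelf; the sketch below runs scikit-image's marching cubes on a synthetic volume purely to illustrate the vertex interpolation and triangle-mesh output (the synthetic sphere and the library call are illustrative, not part of the patent).

    import numpy as np
    from skimage import measure

    # Synthetic 3-D scalar field: squared distance from the centre of a 64^3 grid
    z, y, x = np.mgrid[-32:32, -32:32, -32:32]
    volume = (x ** 2 + y ** 2 + z ** 2).astype(np.float64)

    # Extract the isosurface of a sphere of radius 20 voxels; vertex positions are
    # interpolated linearly along voxel edges, as in the MC description above.
    verts, faces, normals, values = measure.marching_cubes(volume, level=20.0 ** 2)
    print(verts.shape, faces.shape)   # triangle mesh approximating the isosurface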
2) A three-dimensional imaging system based on Mimics consists of a number of modules that implement functions in different fields of image processing; the modules can be combined flexibly according to the user's needs, giving the system very high applicability. Based on medical image data, Mimics can assist doctors in diagnosing many diseases and in developing surgical plans and simulation studies. In particular, its segmentation module for gray value images is very complete: it can process any number of image slices (rectangular images are allowed) and provides several segmentation and visualization tools in its interface. The visual interface of Mimics is divided into four views: axial, coronal and sagittal views, and a 3D object view. The Mimics STL+ module provides interfaces for triangular mesh files to all rapid prototyping systems; the software automatically invokes bilinear and median interpolation algorithms when generating the files to improve the accuracy of rapid prototyping models, and the computed models can be exported in STL format. Because the segmented image data contain only the region of interest, the three-dimensional reconstruction process based on Mimics is simple and easy to operate: first, a mask is generated by right-clicking in the upper right corner of the main interface, Custom is selected according to the divided HU value interval, and, to ensure connectivity between voxels, the Fill Holes and Keep Largest options are checked; the mask is then selected, Calculate 3D is chosen from the right-click menu, and Quality is set to High. If the software prompts that multiple models will be generated, the previous step is repeated: after the corresponding mask is selected, the Region Growing tool is applied with Multiple Layer and any connection mode checked to generate a fully connected mask1, the 3D model is calculated for mask1, and a Quality model is generated. Post-processing of the model is optional: surface smoothing (Remesh) and detection of self-intersections (Detect Self-Intersections). The main purposes of post-processing are to ensure the quality of the three-dimensional surface model, simplify the number of mesh elements, and reduce the amount of calculation and the geometric error of the model. The three-dimensional surface model generated after this series of operations can be observed in the 3D view, and the selected 3D object can be saved and displayed via STL+ from the right-click menu.
3) The ray casting method is a slice-level three-dimensional reconstruction algorithm; it is a direct volume rendering method ordered in image space and is widely used in the field of medical imaging. Ray casting mainly comprises the steps of storage, projection, interpolation and gradient estimation. Unlike the ray tracing algorithm, its basic idea is to project a parallel straight line from each pixel point along the viewing direction; the ray does not stop at the object surface but penetrates the three-dimensional volume data field. Samples are taken uniformly at intervals along the ray, and the optical properties of the sampling points are calculated by an interpolation algorithm. All equally spaced sampling points are then composited in a specific order (front to back or back to front), and finally the pixel point on the two-dimensional screen corresponding to each ray is calculated in the spatial coordinate system. The floodlight model is a simple empirical model that considers only ambient light; through it only the planar shape of an object can be seen, without any sense of spatial volume, and its calculation formula is as follows:
I_en = K_a × I_a, wherein I_en denotes the intensity of the reflected ambient light, K_a denotes the reflection coefficient of the object for ambient light, and I_a denotes the intensity of the incident ambient light. The biggest difference between the ray casting method and the MC algorithm is that ray casting obtains the rendering result by setting opacity parameters, so the internal structural form of the object can be effectively reflected. The imaging quality of ray casting is good, but its disadvantage is that the volume data cannot be stored in an orderly manner according to a logical storage sequence: every volume element in the data must be traversed, and once the direction of the sampling points in the volume data field changes slightly, resampling is required. It is therefore difficult to increase the speed of three-dimensional reconstruction. Improved ray casting algorithms such as binary trees, octree structures and pyramid structures of K-dimensional points all attempt to improve volume rendering efficiency through internal correlation and thus reduce algorithmic complexity, but the results are not ideal when applied to real medical images.
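The core of the ray casting method, front-to-back compositing of uniformly spaced samples along each viewing ray, can be sketched as follows. The sketch assumes axis-aligned rays and per-slice sampling without gradient shading, and the transfer functions are illustrative placeholders.

    import numpy as np

    def ray_cast(volume, opacity_tf, color_tf):
        # volume: (D, H, W) scalar field; rays travel along axis 0, front to back
        depth = volume.shape[0]
        color = np.zeros(volume.shape[1:])
        alpha = np.zeros(volume.shape[1:])
        for k in range(depth):                      # uniform sampling along each ray
            s = volume[k]
            a = opacity_tf(s) * (1.0 - alpha)       # weight by the remaining transparency
            color += a * color_tf(s)                # accumulate the sample contribution
            alpha += a
            if np.all(alpha > 0.99):                # early ray termination
                break
        return color

    # Illustrative transfer functions for an 8-bit volume:
    # image = ray_cast(vol, lambda s: np.clip(s / 255.0, 0.0, 0.05), lambda s: s / 255.0)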
Example 4
Referring to fig. 3, another embodiment of the present invention is provided: a three-dimensional reconstruction system for preoperative planning, comprising:
The image acquisition equipment is used for shooting focus position images of a patient;
The image processing unit is used for carrying out graying and denoising treatment on the focus position image of the patient, extracting the characteristics of the image subjected to the graying and denoising treatment, dividing the image of the focus position into a plurality of parts by combining an attention mechanism, solving the matching of the divided image slices by utilizing a three-dimensional reconstruction strategy, and carrying out fitting reconstruction by cost aggregation and parallax optimization so as to enable the focus region of the patient to be displayed in a three-dimensional visual mode;
The three-dimensional imaging device is used for displaying the three-dimensional reconstructed focus position image of the patient in a three-dimensional visual form.
The image acquisition apparatus includes: nuclear magnetic resonance equipment, CT equipment and PET equipment.
Example 5
Referring to fig. 4, an electronic device includes a memory and a processor, wherein the memory stores a computer program, and wherein the processor implements steps of a three-dimensional reconstruction method for preoperative planning when executing the computer program.
Example 6
Referring to fig. 4, a computer readable storage medium having stored thereon computer instructions which, when executed, perform the steps of a three-dimensional reconstruction method for preoperative planning.
It will be understood by those skilled in the art that the foregoing description is only a preferred embodiment of the invention and that, although the invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that various modifications and equivalents may be made to the technical aspects described in the foregoing embodiments, and that all such modifications and equivalents are intended to be included within the spirit and principles of the invention.

Claims (11)

1. A three-dimensional reconstruction method for preoperative planning, comprising the specific steps of:
Collecting an image of the focus position of a patient, and carrying out graying and denoising treatment on the image;
Extracting features of the image subjected to graying and denoising treatment by utilizing an image segmentation strategy, and segmenting the image of the focus position after treatment into a plurality of parts by combining an attention mechanism;
Calculating the matching cost of the segmented image slices by utilizing a three-dimensional reconstruction strategy, and then carrying out fitting reconstruction on the segmented image slices by cost aggregation and parallax optimization to display a focus area of a patient in a three-dimensional visualization mode;
The specific method for the image segmentation strategy comprises the following steps:
Step S101: putting the image subjected to graying and denoising treatment into a trained neural network model, splicing shallow layer features and high layer features in the neural network model, then performing convolution operation to obtain a gray value feature value, and setting the extracted gray value feature value as Query;
Step S102: combining an attention mechanism, carrying out weighted calculation on the extracted gray value characteristic value Query, capturing the edge gray value characteristic of the whole image, and obtaining the whole gray value characteristic of the image;
Step S103: counting the number of pixels at each gray level in the image subjected to the graying and denoising treatment; with n_i set as the number of pixels with gray level i, the probability of gray level i is p_i = n_i/N, where N = Σ_i n_i is the total number of pixels and Σ_i p_i = 1; the mean gray value of the whole processed image is set as m_G, the gray threshold as k, the probability that a pixel is assigned to class A as pg_A(k), the average gray of the pixels assigned to A as m_A(k), the probability that a pixel is assigned to class B as pg_B(k), and the average gray of the pixels assigned to B as m_B(k);
Step S104: the between-class variance to be maximized is calculated as:
σ²(k) = pg_A(k) × pg_B(k) × (m_A(k) − m_B(k))²
wherein pg_A(k) = Σ_{i=0..k} p_i, pg_B(k) = Σ_{i=k+1..L−1} p_i = 1 − pg_A(k), m_A(k) = (1/pg_A(k)) Σ_{i=0..k} i·p_i, m_B(k) = (1/pg_B(k)) Σ_{i=k+1..L−1} i·p_i, L is the number of gray levels, and pg_A(k)·m_A(k) + pg_B(k)·m_B(k) = m_G;
According to the calculated variance, pixel classification is carried out, and the image of the focus position after graying and denoising treatment is segmented by combining the gray value characteristic value and the pixel classification;
the three-dimensional reconstruction strategy comprises the following specific steps:
Step S201: the single-sided edge pixel point sets of the segmented image slice T1 and any image slice T2 are set to P and Q, with P = (p_1, p_2, ..., p_i, ..., p_m) and Q = (q_1, q_2, ..., q_j, ..., q_n), where p_i denotes the i-th edge pixel point in slice T1, q_j denotes the j-th edge pixel point in slice T2, and their gray values are λ_P and λ_Q respectively; the initial matching cost of the segmented image slices is calculated, the cost value calculation formula being as follows:
wherein a cost value control parameter is used in the formula, |·| denotes the absolute value function, and e denotes the base of the natural logarithm;
Step S202: cost aggregation; the gray difference threshold is set as η, the pixel distance difference threshold as μ, and the difference threshold of the single-side edge pixel point sets of slices T1 and T2 as σ, the principle formula of pixel aggregation being as follows:
wherein d(p_i, q_j) denotes the pixel distance between pixel point p_i and pixel point q_j;
Step S203: according to the cost values, the aggregated cost value is calculated by the following formula:
wherein w indexes the w-th edge pixel point in slices T1 and T2, p_w denotes the w-th edge pixel point in slice T1, and q_w denotes the w-th edge pixel point in slice T2;
Step S204: according to steps S201-S203, the aggregated cost values of the four edges of the cut image slice T1 are calculated, and the parallax value S is taken as the average of the minimum cost values of the four sides, namely S = (min C_zs + min C_zx + min C_zl + min C_zr)/4, wherein min C_zs denotes the minimum aggregated cost value of the upper edge of the segmented image slice T1, min C_zx denotes that of the lower edge, min C_zl denotes that of the left edge, and min C_zr denotes that of the right edge;
step S205: and selecting the matched segmented image slices according to the parallax value S, and performing three-dimensional reconstruction to display the focus area of the patient in a three-dimensional visual mode.
2. A three-dimensional reconstruction method for preoperative planning according to claim 1, wherein the graying and denoising processes comprise graying the color image and suppressing the Gaussian noise and Rician noise introduced during imaging.
3. The three-dimensional reconstruction method for preoperative planning of claim 1, wherein the neural network model in step S101 is a DenseNet network.
4. A three-dimensional reconstruction method for preoperative planning according to claim 1, wherein the attention mechanism in step S102 comprises: channel attention mechanisms and spatial attention mechanisms.
5. A three-dimensional reconstruction method for preoperative planning as defined in claim 4, wherein the channel attention mechanism is used to perform weighting operations in the channel domain of the feature map.
6. A three-dimensional reconstruction method for preoperative planning as defined in claim 4, wherein the spatial attention mechanism is used to spatially transform various deformation data and automatically capture regional features.
7. A three-dimensional reconstruction method for preoperative planning as defined in claim 1, wherein the three-dimensional reconstruction in step S205 uses cubic convolution interpolation to reconstruct three-dimensionally matched image slices.
8. A three-dimensional reconstruction system for preoperative planning, which is realized on the basis of a three-dimensional reconstruction method for preoperative planning as claimed in any one of claims 1 to 7, comprising:
The image acquisition equipment is used for shooting focus position images of a patient;
The image processing unit is used for carrying out graying and denoising treatment on the focus position image of the patient, extracting the characteristics of the image subjected to the graying and denoising treatment by utilizing an image segmentation strategy, segmenting the image of the focus position after the treatment into a plurality of parts by combining an attention mechanism, solving the matching of segmented image slices by utilizing a three-dimensional reconstruction strategy, and carrying out fitting reconstruction by cost aggregation and parallax optimization so as to enable the focus region of the patient to be displayed in a three-dimensional visual mode;
The three-dimensional imaging device is used for displaying the three-dimensional reconstructed focus position image of the patient in a three-dimensional visual form.
9. A three-dimensional reconstruction system for preoperative planning of claim 8, wherein the image acquisition device comprises: nuclear magnetic resonance equipment, CT equipment and PET equipment.
10. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of a three-dimensional reconstruction method for preoperative planning according to any one of claims 1-7.
11. A computer-readable storage medium, having stored thereon computer instructions, which when executed perform the steps of a three-dimensional reconstruction method for preoperative planning according to any one of claims 1-7.
CN202311173175.8A 2023-09-12 2023-09-12 Three-dimensional reconstruction system and method for preoperative planning Active CN117437350B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311173175.8A CN117437350B (en) 2023-09-12 2023-09-12 Three-dimensional reconstruction system and method for preoperative planning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311173175.8A CN117437350B (en) 2023-09-12 2023-09-12 Three-dimensional reconstruction system and method for preoperative planning

Publications (2)

Publication Number Publication Date
CN117437350A CN117437350A (en) 2024-01-23
CN117437350B 2024-05-03

Family

ID=89550571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311173175.8A Active CN117437350B (en) 2023-09-12 2023-09-12 Three-dimensional reconstruction system and method for preoperative planning

Country Status (1)

Country Link
CN (1) CN117437350B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107895364A (en) * 2017-10-31 2018-04-10 哈尔滨理工大学 A kind of three-dimensional reconstruction system for the preoperative planning of virtual operation
CN111508068A (en) * 2020-04-20 2020-08-07 华中科技大学 Three-dimensional reconstruction method and system applied to binocular endoscope image
WO2021244621A1 (en) * 2020-06-04 2021-12-09 华为技术有限公司 Scenario semantic parsing method based on global guidance selective context network
WO2021253939A1 (en) * 2020-06-18 2021-12-23 南通大学 Rough set-based neural network method for segmenting fundus retinal vascular image
CN114529505A (en) * 2021-12-28 2022-05-24 天翼电子商务有限公司 Breast lesion risk assessment system based on deep learning
CN116433605A (en) * 2023-03-16 2023-07-14 重庆邮电大学 Medical image analysis mobile augmented reality system and method based on cloud intelligence
CN116612174A (en) * 2023-06-02 2023-08-18 青岛大学附属医院 Three-dimensional reconstruction method and system for soft tissue and computer storage medium

Also Published As

Publication number Publication date
CN117437350A (en) 2024-01-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant