CN112233791B - Mammary gland prosthesis preparation device and method based on point cloud data clustering - Google Patents

Mammary gland prosthesis preparation device and method based on point cloud data clustering

Info

Publication number
CN112233791B
CN112233791B (application CN202011108680.0A)
Authority
CN
China
Prior art keywords
point cloud
cloud data
image
magnetic resonance
nuclear magnetic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011108680.0A
Other languages
Chinese (zh)
Other versions
CN112233791A (en)
Inventor
王之琼
信俊昌
胡玉平
王中阳
王腾绪
赵鸿硕
高撒
刘一汉
Original Assignee
东北大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 东北大学
Priority to CN202011108680.0A priority Critical patent/CN112233791B/en
Publication of CN112233791A publication Critical patent/CN112233791A/en
Application granted granted Critical
Publication of CN112233791B publication Critical patent/CN112233791B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T5/70
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20032 Median filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20056 Discrete and fast Fourier transform, [DFT, FFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20064 Wavelet transform [DWT]

Abstract

The invention provides a mammary gland prosthesis preparation device and method based on point cloud data clustering. The device comprises a smooth noise reducer, an image enhancer, a surface drawing reconstructor, a volume drawing reconstructor, a point cloud data extractor, a surface data fusion device and a data image display. After being preprocessed by the smooth noise reducer and the image enhancer, the nuclear magnetic resonance images of the mammary gland are imported into the surface drawing reconstructor and the volume drawing reconstructor respectively to obtain three-dimensional model images; the point cloud data extractor imports the point cloud data sets into the surface data fusion device, which fuses them, and the data image display finally generates the mammary gland prosthesis model. By fusing the point cloud data set obtained by surface drawing with the point cloud data set obtained by volume drawing, the invention not only makes up the missing outer-surface contour but also ensures the completeness of the interior, so that the information of the functional nuclear magnetic resonance images is fully exploited and medical treatment is better assisted.

Description

Mammary gland prosthesis preparation device and method based on point cloud data clustering
Technical Field
The invention relates to the technical field of computer aided diagnosis, in particular to a mammary gland prosthesis preparation device and method based on point cloud data clustering.
Background
Three-dimensional reconstruction technology takes a series of two-dimensional tomographic images acquired in clinical practice, stored in the medical DICOM format, and outputs images of the tissue structures of human organs by means of the related reconstruction techniques; in this way a two-dimensional image is turned into a three-dimensional stereoscopic image in medical clinics. At present, research on medical image reconstruction at home and abroad is divided into two modes: surface rendering, which draws the surface of the object, and volume rendering, which draws the whole region of the object. Surface rendering, also called the indirect rendering method, describes the three-dimensional structure of a spatial body by stitching geometric figures: the edge contour lines of the two-dimensional images are extracted and the final reconstruction is completed with graphics algorithms. Volume rendering, also called direct rendering, uses the line-of-sight principle: a data sample is generated every time the ray traverses the image, and the samples are combined by an algorithm to complete the final image rendering.
The shape of a mammary prosthesis is typically obtained by machine scanning, and the image scanned by the machine is usually two-dimensional and is then rendered into a solid figure. Most existing methods rely on instrument scanning with an optical approach; with this method the outer contour edge can be missing after the rendering is completed, and sometimes this edge causes problems with the fit between the resected breast region and the skin. Surface rendering has high time efficiency, but because the internal information is not used, the interior of the target object still needs to be reconstructed, which makes the method unreliable; volume rendering occupies large resources, has low operating efficiency and poor interactivity.
Disclosure of Invention
Aiming at the defects of the prior art, the invention aims to provide a device and a method that combine surface drawing and volume drawing on the basis of the visualization technology of VTK (Visualization Toolkit) to reconstruct an image and determine the shape of a mammary gland prosthesis. Existing methods can lose the outer contour edge after the drawing is finished, and sometimes this edge causes problems with the fit between the resected breast region and the skin; by adding a further surface drawing, the invention lets the outer contour enclose the inner one, so that a contour whose outer surface is missing can be made up while the internal completeness is ensured.
In order to achieve the above technical effect, the invention provides a mammary gland prosthesis preparation device based on point cloud data clustering, which comprises a smooth noise reducer, an image enhancer, a surface drawing reconstructor, a volume drawing reconstructor, a point cloud data extractor, a surface data fusion device and a data image display. The nuclear magnetic resonance images of the mammary gland are imported into the smooth noise reducer for noise reduction and then into the image enhancer for frequency-domain enhancement followed by spatial-domain enhancement; the enhanced images are imported into the surface drawing reconstructor and the volume drawing reconstructor respectively to obtain two three-dimensional model images, which are imported into the point cloud data extractor to extract the point cloud data set of each three-dimensional model image; the two point cloud data sets are then imported into the surface data fusion device and fused into a new point cloud data set, and the new point cloud data set is finally imported into the data image display to construct the mammary gland prosthesis model;
the smooth noise reducer is used for carrying out self-adaptive filtering treatment on the original nuclear magnetic resonance image of the mammary gland to obtain a nuclear magnetic resonance image after noise reduction treatment;
the image enhancer is used for sequentially carrying out frequency-domain enhancement and spatial-domain enhancement on the noise-reduced image to obtain an enhanced nuclear magnetic resonance image;
the surface drawing reconstructor is used for carrying out three-dimensional reconstruction on the nuclear magnetic resonance image after the enhancement treatment to obtain a three-dimensional model image P1;
the volume rendering reconstructor is used for performing three-dimensional reconstruction on the nuclear magnetic resonance image after the enhancement processing to obtain a three-dimensional model image P2;
the point cloud data extractor is used for respectively extracting point cloud data sets S of the three-dimensional model images P1 and P2 1 、S 2
The surface data fusion device is used for carrying out data fusion on the point cloud data sets S1 and S2: the points of the point cloud data set S2 that exceed the edge contour line of the three-dimensional model image P1 are discarded, the points of S2 that are identical to points of the point cloud data set S1 are deleted, and the fused new point cloud data set is finally obtained;
the data image display is used for generating a mammary prosthesis model according to the new point cloud data set.
A mammary gland prosthesis preparation method adopting the mammary gland prosthesis preparation device based on point cloud data clustering comprises the following steps:
step 1: carrying out noise reduction treatment on the original nuclear magnetic resonance image of the mammary gland through a smooth noise reducer to obtain a nuclear magnetic resonance image after the noise reduction treatment;
step 2: sequentially carrying out frequency-domain enhancement and spatial-domain enhancement on the noise-reduced nuclear magnetic resonance image through the image enhancer to obtain an enhanced nuclear magnetic resonance image;
step 3: carrying out three-dimensional reconstruction on the nuclear magnetic resonance image after the enhancement treatment through a surface drawing reconstructor and a volume drawing reconstructor respectively to obtain three-dimensional model images P1 and P2;
step 4: the three-dimensional model image P1 is processed by the point cloud data extractor to obtain the point cloud data set S1, and the three-dimensional model image P2 is processed by the point cloud data extractor to obtain the point cloud data set S2;
Step 5: set point cloud data S 1 、S 2 Carrying out data fusion by a face data fusion device to obtain a new point cloud data set;
step 6: generating a mammary gland prosthesis model through a data image display by the new point cloud data set;
step 7: the resulting breast prosthesis model is used for the production of a prosthesis model for medical use.
In step 3, the three-dimensional model image P1 is obtained by passing the enhanced nuclear magnetic resonance image through the surface rendering reconstructor, including:
step 3.1.1: marking voxel attribute values of the nuclear magnetic resonance image after noise reduction processing as 0 or 1 respectively, wherein the voxel attribute values which are larger than or equal to a set threshold value are marked as 1, and the voxel attribute values which are smaller than the set threshold value are marked as 0;
step 3.1.2: generating a triangle patch according to the marked voxel attribute value;
step 3.1.3: calculating the coordinate point of each voxel by using a linear interpolation method according to the voxels corresponding to all the qualified triangular patches, wherein the voxels corresponding to the qualified triangular patches are voxels with the number of intersecting edges of the triangular patches and the voxels being less than or equal to 4;
step 3.1.4: generating a three-dimensional model image P1 according to all the voxel coordinate points;
the enhanced nuclear magnetic resonance image is subjected to volume rendering reconstruction to obtain a three-dimensional model image P2, which comprises the following steps:
step 3.2.1: irradiating the nuclear magnetic resonance image subjected to noise reduction treatment by using a Lambert illumination model, and performing linear interpolation treatment on a transparency value and a color value generated by irradiation to obtain a coordinate point of each voxel;
step 3.2.2: and generating a three-dimensional model image P2 according to all the voxel coordinate points.
The step 5 comprises the following steps:
step 5.1: deleting a Point cloud dataset S 2 In a point cloud data set S 1 The same coordinate points in (a);
step 5.2: comparing point cloud data sets S 1 、S 2 Points of the same x coordinate, if x is satisfied i 2 =x 1 j And y is i 2 ≥y 1 j Or z i 2 ≥z 1 j Indicating interpolation deviation in the projection process of volume rendering rays, which is larger than the range of edge contour lines, y is needed to be calculated i 2 Deleting corresponding coordinate points, wherein y i 2 、z i 2 Respectively represent the point cloud data sets S 2 Is the ith coordinate point (x i 2 ,y i 2 ,z i 2 ) Corresponding ordinate, vertical coordinate, y 1 j 、z 1 j Respectively represent the point cloud data sets S 1 The j-th coordinate point (x) j 2 ,y j 2 ,z j 2 ) The corresponding ordinate, vertical, i=1, 2, …, n2, j=1, 2, …, n1, n2 represent the point cloud dataset S, respectively 1 、S 2 The number of the contained coordinates points;
step 5.3: after the deleting operation in the steps 5.1 to 5.2, the point cloud data set S is obtained 2 All the remaining points in (a) are inserted into the point cloud data set S 1 And (3) performing data fusion to generate a new point cloud data set.
The beneficial effects of the invention are as follows:
the invention provides a mammary gland prosthesis preparation device and method based on point cloud data clustering, which overcome the defects of the prior art that the outer contour edge is missing after the drawing is finished and that this edge impairs the fit between the resected breast region and the skin.
Drawings
FIG. 1 is a block diagram of a mammary gland prosthesis preparing device based on point cloud data clustering in the present invention;
FIG. 2 is a flow chart of a method for preparing a mammary gland prosthesis by using a mammary gland prosthesis preparation device based on point cloud data clustering in the present invention;
FIG. 3 is a flow chart of smooth noise reduction in the present invention;
fig. 4 is a flow chart of frequency domain enhancement in the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings and examples of specific embodiments.
As shown in fig. 1, a mammary gland prosthesis preparation device based on point cloud data clustering comprises a smooth noise reducer, an image enhancer, a surface drawing reconstructor, a volume drawing reconstructor, a point cloud data extractor, a surface data fusion device and a data image display. The nuclear magnetic resonance images of the mammary gland are imported into the smooth noise reducer for noise reduction and then into the image enhancer for frequency-domain enhancement followed by spatial-domain enhancement; the enhanced images are imported into the surface drawing reconstructor and the volume drawing reconstructor respectively to obtain two three-dimensional model images, which are imported into the point cloud data extractor to extract the point cloud data set of each three-dimensional model image. Each point coordinate in a point cloud data set is the coordinate of a pixel in space, with the three coordinate directions of the X-axis, Y-axis and Z-axis coordinate values, listed in order from small to large; because the mammary gland is not symmetrical, the absolute values of the maximum and the minimum X-axis coordinate value over all points are not equal. The two point cloud data sets are then imported into the surface data fusion device and fused to obtain a new point cloud data set, and the new point cloud data set is finally imported into the data image display to construct the mammary gland prosthesis model;
the smooth noise reducer is used for carrying out self-adaptive filtering treatment on the original nuclear magnetic resonance image of the mammary gland to obtain a nuclear magnetic resonance image after noise reduction treatment;
the image enhancer is used for sequentially carrying out frequency-domain enhancement and spatial-domain enhancement on the noise-reduced image to obtain an enhanced nuclear magnetic resonance image;
the surface drawing reconstructor is used for carrying out three-dimensional reconstruction on the nuclear magnetic resonance image after the enhancement treatment to obtain a three-dimensional model image P1;
the volume rendering reconstructor is used for performing three-dimensional reconstruction on the nuclear magnetic resonance image after the enhancement processing to obtain a three-dimensional model image P2;
the point cloud data extractor is used for respectively extracting point cloud data sets S of the three-dimensional model images P1 and P2 1 、S 2
The surface data fusion device is used for carrying out data fusion on the point cloud data sets S1 and S2; during the fusion operation, the information extracted by the surface and volume rendering reconstructors is represented and aligned with a gridded point cloud, then the points of the point cloud data set S2 that exceed the edge contour line of the three-dimensional model image P1 are discarded, the points of S2 that are identical to points of the point cloud data set S1 are deleted, and the fused new point cloud data set is finally obtained;
the data image display is used for generating a mammary prosthesis model by utilizing MATLAB according to the new point cloud data set.
As shown in fig. 2, a method for preparing a mammary gland prosthesis by using a mammary gland prosthesis preparation device based on point cloud data clustering comprises the following steps:
step 1: carrying out noise reduction treatment on the original nuclear magnetic resonance image of the mammary gland through a smooth noise reducer to obtain a nuclear magnetic resonance image after the noise reduction treatment;
the smooth noise reduction adopts self-adaptive median filtering, and the working flow is as shown in figure 3: the self-adaptive median filtering implementation step is consistent with the window value which is set by the traditional median filtering method, the size of an odd fixed window is also initialized, the size of the initialized window is unchanged in the median filtering process, and the self-adaptive median filtering is capable of converting different data into different window sizes. In another difference, the adaptive median filtering only processes noise points, and does not participate in the processing process on non-noise points, so that the effects are better after the adaptive median filtering processing of the image processed by the common median filtering on the basis of the median filtering, and the formula is as follows:
wherein Z is min Expressing the minimum value of the pixel gray level of the window, Z max Expressing the maximum value of pixel gray scale in window, Z med Express the median value of gray values in window, Z xy Representing the central coordinate value. Initializing window to slide, executing formula (2.1), when A1>0 and A2<0, executing formula (2.2), increasing the current window gray value S xy If the current window gray value S xy Less than the current window gray maximum S max Then return to executing equation (2.1), when B2<0, then Z xy The gray value of the current center point is equal to or larger than 0, otherwise, the gray value of the center point is Z med And when the sliding window coincides with the lower right corner edge of the image, the self-adaptive median filtering is finished.
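As a concrete illustration of the workflow just described, the following is a minimal Python/NumPy sketch of an adaptive median filter; the function name, the maximum window size and the random test image are assumptions made for the example and are not prescribed by the patent.

```python
import numpy as np

def adaptive_median_filter(img, s_max=7):
    """Adaptive median filter sketch: grow the window until the median is not an
    extreme value (formula (2.1)), then replace the centre pixel only if it is
    itself an extreme value, i.e. a likely noise point (formula (2.2))."""
    pad = s_max // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            s = 3                                         # initial odd window size
            while True:
                half = s // 2
                cr, cc = r + pad, c + pad
                win = padded[cr - half:cr + half + 1, cc - half:cc + half + 1]
                z_min, z_max, z_med = win.min(), win.max(), np.median(win)
                z_xy = padded[cr, cc]
                if z_med - z_min > 0 and z_med - z_max < 0:   # A1 > 0 and A2 < 0
                    # keep the centre pixel unless it is an extreme value of the window
                    if not (z_xy - z_min > 0 and z_xy - z_max < 0):
                        out[r, c] = z_med
                    break
                s += 2                                     # enlarge the window
                if s > s_max:
                    out[r, c] = z_med
                    break
    return out

noisy = (np.random.rand(64, 64) * 255).astype(np.float64)
denoised = adaptive_median_filter(noisy)
```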
Step 2: the noise-reduced nuclear magnetic resonance image is processed by the image enhancer, first in the frequency domain and then in the spatial domain, to obtain the preprocessed nuclear magnetic resonance image. Fig. 4 shows the flow of the frequency-domain enhancement: the image to be processed first undergoes preprocessing, which includes the smooth noise reduction, and the frequency-domain image F(u, v) is then obtained with the FFT (fast Fourier transform) and the wavelet transform. The frequency-domain image F(u, v) is processed by a filter function H(u, v), which is obtained by Fourier transformation of an operator h(x, y); after the inverse transform the result is output to the spatial domain and subjected to the spatial-domain enhancement (the post-processing), finally yielding the image g(x, y).
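The frequency-domain stage can be sketched as below; the patent does not fix a particular transfer function H(u, v), so the Gaussian high-frequency-emphasis filter, the cutoff d0 and the boost factor used here are purely illustrative assumptions.

```python
import numpy as np

def frequency_domain_enhance(img, d0=30.0, boost=1.5):
    """Sketch of the F(u, v) -> H(u, v)*F(u, v) -> inverse FFT -> g(x, y) chain."""
    F = np.fft.fftshift(np.fft.fft2(img))                 # frequency-domain image F(u, v)
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    D2 = u[:, None] ** 2 + v[None, :] ** 2                # squared distance from the centre
    H = 1.0 + boost * (1.0 - np.exp(-D2 / (2.0 * d0 ** 2)))   # assumed high-frequency emphasis H(u, v)
    g = np.fft.ifft2(np.fft.ifftshift(H * F)).real        # back to the spatial domain
    return np.clip(g, 0, 255)
```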
The grey-level transformation method is applied to the contrast of the image: changing the pixel values through the contrast operation makes the features of interest more prominent, and the increased contrast improves the clarity of the image compared with the original. The grey value of each point is fed into a given enhancement function, which outputs the corresponding grey value as a point-wise operation; the grey transformation can be expressed as:
g(x,y)=T[f(x,y)] (2.3)
in formula (2.3), f(x, y) denotes the input image to be processed, g(x, y) denotes the image after spatial-domain processing, and T is an operation on the original image f that describes the relationship between the input and the output of the image. Denoting the input and output grey levels at a point (x, y) by r and s respectively, the pixel-wise contrast enhancement formula becomes:
s=T[r] (2.4)
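A minimal sketch of the point-wise transformation s = T[r] of formula (2.4), using a simple linear contrast stretch as T; the percentile limits are an assumption for the example.

```python
import numpy as np

def contrast_stretch(img, low_pct=2, high_pct=98):
    """Apply s = T[r]: map the grey range between two percentiles onto [0, 255]."""
    r_low, r_high = np.percentile(img, (low_pct, high_pct))
    s = (img.astype(np.float64) - r_low) * 255.0 / max(r_high - r_low, 1e-6)
    return np.clip(s, 0, 255)
```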
the method comprises the steps of preprocessing images in the MATLAB in the steps 1-2, inputting the preprocessed images into a VTK (visualization toolkit is called VTK for short, a visual tool function library) environment, carrying out surface drawing reconstruction and volume drawing reconstruction in the VTK environment, and adopting a pipeline way in a visual flow of the VTK, mainly converting two-dimensional image drawing into a three-dimensional image drawing process, extracting data from the two-dimensional image, and rendering the data to manufacture a three-dimensional stereoscopic image. Rendering data is also the process of rendering a pipeline through which different parts are interrelated.
Step 3: carrying out three-dimensional reconstruction on the nuclear magnetic resonance image after the enhancement treatment through a surface drawing reconstructor and a volume drawing reconstructor respectively to obtain three-dimensional model images P1 and P2;
the enhanced nuclear magnetic resonance image is passed through a surface drawing reconstructor to obtain a three-dimensional model image P1, which comprises the following steps:
step 3.1.1: marking voxel attribute values of the nuclear magnetic resonance image after noise reduction processing as 0 or 1 respectively, wherein the voxel attribute values which are larger than or equal to a set threshold value are marked as 1, and the voxel attribute values which are smaller than the set threshold value are marked as 0;
through the input of an image set, pixel points in a two-dimensional image are converted into space coordinate points, corresponding 8 angular points are constructed to form a complete voxel model, and through the comparison of attribute values of voxels with preset thresholds, the relation rule of the thresholds and the attribute values is as follows:
1) If the set threshold isovalue is greater than or equal to the voxel attribute value e, the vertex is positioned outside the isosurface, and the voxel attribute value is 1.
2) If the set threshold isovalue < voxel attribute value e, the vertex is positioned in the isosurface, and the voxel attribute value is 0.
The attributes of the voxels are labeled with two states, 0 and 1, then the voxels may be divided into 256 states. Since the voxels are regular cubes, there are symmetrical images, the state can be simplified to 128 kinds, and since the cubes are rotationally symmetrical, the voxels can be simplified to 15 kinds; the contour extraction operation, namely, determining the contour which remains after comparing the voxel attribute value with the initial threshold, realizing the final topological structure by utilizing the triangle contour, requiring the vertex coordinates and the normal vector of the contour, and then performing a linear interpolation method;
step 3.1.2: generating a triangle patch according to the marked voxel attribute value;
step 3.1.3: calculating the coordinate point of each voxel by using a linear interpolation method according to the voxels corresponding to all the qualified triangular patches, wherein the voxels corresponding to the qualified triangular patches are voxels with the number of intersecting edges of the triangular patches and the voxels being less than or equal to 4;
according to the linear interpolation method, the gray level of pixels in an image is linearly changed, and according to the probability calculation method, the gray level to be obtained is the proportion of all gray levels, and then the coordinate value of a target is obtained.
Wherein P represents the coordinate of intersection of the triangular patch and the normal vector, P 1 And P 2 Representing two point coordinate values, isovalue representing a set threshold, V 1 And V 2 Respectively representing gray values of two points, N represents normal vector, N 1 And N 2 Representing normal vectors of two points, respectively.
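A small standalone Python helper corresponding to formulas (3.1) and (3.2); the function name, argument layout and the degenerate-case handling are assumptions for the illustration.

```python
import numpy as np

def interpolate_isosurface_vertex(p1, p2, v1, v2, n1, n2, isovalue):
    """Linear interpolation along a voxel edge: returns the intersection point P
    and the (normalised) interpolated normal N, following (3.1) and (3.2)."""
    if abs(v2 - v1) < 1e-12:                  # equal grey values: take the midpoint
        t = 0.5
    else:
        t = (isovalue - v1) / (v2 - v1)
    p = np.asarray(p1, float) + t * (np.asarray(p2, float) - np.asarray(p1, float))
    n = np.asarray(n1, float) + t * (np.asarray(n2, float) - np.asarray(n1, float))
    return p, n / (np.linalg.norm(n) + 1e-12)
```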
Because the isosurface is a two-dimensional curved surface, the gradient components of any point of the surface in the three directions tangent to the surface are 0; only the qualified triangular patches can be constructed from the intersection relation of the triangular patches, and the normal vector g(x, y, z) of any point of the isosurface is calculated with the gradient formula given in (3.3);
g(x,y,z)=gradf(x,y,z) (3.3)
during the fitting of the patches, there is a possibility that two adjacent patches are not horizontal. Therefore, an included angle is formed in the three-dimensional space, if the problem of the included angle is not processed, the gap between the brightness of the topological image finally formed by the completion of fitting is overlarge, and the phenomenon of light and shade alternation occurs. Therefore, the Goldcolor model is adopted to reduce the occurrence of the phenomenon of brightness alternation, and the method mainly relies on normal vectors of three vertexes of the surface patch to continue the operation of the other triangular surface patch, so that a new surface patch is used to replace the original surface patch, and the defect of uneven brightness among each surface patch is eliminated. In this way, the gradient of three vertexes on the triangle patch can be obtained by attribute values in the voxels, namely, linear interpolation processing is performed by depending on attribute values of two adjacent voxels, 8 vertexes in the voxels can be obtained according to the 8 vertexes, and the normal vector can be solved by using a central difference method, wherein the formula of the central difference calculation voxel is as follows:
in the formula g x Represents the gradient in the x-axis direction, g y Represents the gradient in the y-axis direction, g z Represents a gradient in the z-axis direction, O (x u ,y v ,z k ) Representing the coordinates as (x u ,y v ,z k ) Attribute value of voxel, (x) u+1 ,y v ,z k )、(x u-1 ,y v ,z k ) Coordinate values representing adjacent voxels in the x-axis direction, (x) u ,y v+1 ,z k )、(x u ,y v-1 ,z k ) Coordinate values representing adjacent voxels in the y-axis direction, (x) u ,y v ,z k+1 )、(x u ,y v-1 ,z k-1 ) Coordinate values representing adjacent voxels in the z-axis direction;
step 3.1.4: generating a three-dimensional model image P1 according to all the voxel coordinate points;
when the surface drawing reconstructor is built in the VTK environment to reconstruct the three-dimensional image, the marching cubes algorithm class library packaged in VTK is used directly; the specific process is as follows (a Python sketch of this pipeline is given after the steps):
S1.1: a vtkDICOMImageReader object first reads the medical image data in DICOM format and imports them into the visualization pipeline; the SetDirectoryName member function specifies the data storage path and the Update function is then called;
S1.2: a vtkContourFilter object is created; it can store the information of several isosurfaces forming triangular patches, and this isosurface information is used to screen the voxels corresponding to the qualified triangular patches;
S1.3: the input of vtkMarchingCubes is provided from the reader's vtkDataObject output through the SetInputConnection function;
S1.4: the SetValue function sets the initial threshold, and together with the GenerateValues member function the isosurface of the region of interest is obtained;
S1.5: vtkMarchingCubes yields the qualified triangular patches of the pixels, which serve as input parameters; an object of the vtkMapper family is created, specifically of the subclass vtkPolyDataMapper, because vtkPolyDataMapper can perform rendering operations on triangular patches;
S1.6: a vtkActor object is created, and the data processed by the vtkPolyDataMapper are passed as the input of its SetMapper function for rendering;
S1.7: vtkRenderer and vtkRenderWindow objects are created; the renderer is attached to the window through the AddRenderer member function, and the vtkActor object is added through the AddActor function of the vtkRenderer object;
S1.8: a vtkRenderWindowInteractor object is created and attached to the vtkRenderWindow through the SetRenderWindow function to add interaction.
The enhanced nuclear magnetic resonance image is subjected to volume rendering reconstruction to obtain a three-dimensional model image P2, which comprises the following steps:
step 3.2.1: irradiating the nuclear magnetic resonance image subjected to noise reduction treatment by using a Lambert illumination model, and performing linear interpolation treatment on a transparency value and a color value generated by irradiation to obtain a coordinate point of each voxel;
the Lambert illumination model principle is that rays emitted from any angle intersect pixels in a two-dimensional space, and attribute values of pixel points, namely transparency and color values of sampling points, are obtained in the intersection. Unlike indirect rendering, which takes all pixels on an image as input, each sample point contains each pixel point, and each pixel point needs to participate in calculation, and is typically synthesized by using a forward synthesis algorithm, where the forward synthesis algorithm starts counterclockwise from the sample point at the farthest end of the pixel, synthesizes the transparency value and the color value of the pixel, and performs the operation along the direction in which the ray is emitted until the sample point closest to the pixel is reached, and combines the transparency value and the transparency value of all the sample points.
Step 3.2.2: generating a three-dimensional model image P2 according to all the voxel coordinate points;
when the volume rendering reconstructor is built in the VTK environment to reconstruct the three-dimensional image, the ray casting algorithm packaged in VTK is used directly; the specific process is as follows (a Python sketch of this pipeline is given after the steps):
S2.1: a vtkDICOMImageReader object first reads the medical image data in DICOM format and imports them into the visualization pipeline; the SetDirectoryName member function specifies the data storage path and the Update function is then called;
S2.2: the breast region is reconstructed in the experiment so that the tissue structure under the breast skin can be revealed. The grey value of skin tissue in the medical images is about 600; the low threshold is set to the default value 0, and a corresponding piecewise linear function is provided by the vtkPiecewiseFunction class, whose AddPoint member function sets the opacity at the low threshold to 0 and the opacity of the skin-region grey value to 0.1, so that grey values between 0 and 600 are retained and distributed according to a linear relation;
S2.3: the region of interest is enhanced; the vtkColorTransferFunction maps the grey values to the opacity and RGB values used as parameters, which are assigned together with the vtkPiecewiseFunction;
S2.4: a vtkVolumeProperty object is created, and the initialised vtkPiecewiseFunction and vtkColorTransferFunction parameters are assigned to its SetScalarOpacity and SetColor functions;
S2.5: a vtkMapper subclass object is created, implementing the algorithm of the vtkFixedPointVolumeRayCastMapper class. During execution, before the Render function is used, the corresponding parent class of the dependency is called, the vtkMapper pipeline receives the request, the data are read first, the dependent vtkDataObject class library is completed through the GetOutput function, and finally the accumulation and summation of the ray results are completed by the vtkFixedPointVolumeRayCastMapper function;
S2.6: a vtkVolume object is created, and the vtkFixedPointVolumeRayCastMapper is passed as the input of its SetMapper function to render the data;
S2.7: vtkRenderer and vtkRenderWindow objects are created; the renderer is obtained through the AddRenderer member function, and the actor is added through the AddActor function of the vtkRenderer object;
S2.8: a vtkRenderWindowInteractor object is created and interactive operations are added to the vtkRenderWindow through the SetRenderWindow function.
Step 4: the three-dimensional model image P1 is processed by the point cloud data extractor to obtain the point cloud data set S1, and the three-dimensional model image P2 is processed by the point cloud data extractor to obtain the point cloud data set S2.
The experiment adopts a gridded point cloud: the points of the point cloud set are distributed in a cubic grid and correspond one-to-one to the cells of the grid, so that the point cloud set is formed from the image and, conversely, the image can be formed from the point cloud set. The specific operation is as follows:
step 4-1: extract the point cloud data from the result of the marching cubes algorithm in surface drawing; the extracted point coordinate data are the coordinates of each pixel in space and therefore carry the three coordinate directions of the X axis, the Y axis and the Z axis;
step 4-2: extract the point cloud data from the result of the ray casting algorithm in volume rendering; the extracted point coordinate data are likewise the coordinates of each pixel in space and have the same data type as in surface drawing.
Step 5: set point cloud data S 1 、S 2 Number of passing surfaceData fusion is carried out according to the fusion device to obtain a new point cloud data set, which comprises the following steps:
step 5.1: the data of the two results are aligned using equation (5.1) to form a new data set ordered from small to large along the X axis. Not only are the volume-drawing data sorted into containers, the opposite data set also undergoes a corresponding classification. Taking the X axis as the keyword, the data on the Y axis and the Z axis are selected, with the X-axis coordinate value of the surface drawing serving as an auxiliary value; if a point with the auxiliary value exists in the volume drawing, the ranges of its Y-axis and Z-axis coordinate values are judged, and the coordinate points of the point cloud data set S2 that are out of range, as well as the points of S2 that are identical to points of the point cloud data set S1, are deleted;
wherein a, b, c and d denote coefficients whose values are the parameters of the transformation matrix from the input two-dimensional plane to the output three-dimensional plane, (x_e1, y_e1) denotes the coordinates of the known point e1 in the data set S2, whose image under the mapping gives the corresponding point coordinates in the data set S1, (x_e2, y_e2) denotes the coordinates of the origin of the data set S2 mapped into the data set S1, and (x_e3, y_e3) denotes the coordinates of the origin of the data set S1;
step 5.2: comparing point cloud data sets S 1 、S 2 Points of the same x-coordinate if satisfiedAnd->Or (b)Indicating that interpolation deviation occurs in the projection process of volume rendering rays, which is larger than the range of edge contour lines, the +.>Deleting the corresponding coordinate point, wherein->Respectively represent the point cloud data sets S 2 I coordinate point +.>Corresponding ordinate, vertical, +.> Respectively represent the point cloud data sets S 1 J coordinate points of (a)>The corresponding ordinate, vertical, i=1, 2, …, n2, j=1, 2, …, n1, n2 represent the point cloud dataset S, respectively 1 、S 2 The number of the contained coordinates points;
step 5.3: after the deleting operation in the steps 5.1 to 5.2, the point cloud data set S is obtained 2 All the remaining points in (a) are inserted into the point cloud data set S 1 Performing data fusion to generate a new point cloud data set;
the extracted point coordinate data sets are converted from three-dimensional space coordinates corresponding to the two-dimensional images, each row of data represents the point coordinates in space, each row of data is composed of three values, the first number represents the data of the space X-axis coordinates, the second number represents the space Y-axis coordinates, and the third row of data represents the Z-axis coordinates in space.
Owing to the characteristics of surface drawing and volume drawing, the point coordinate data extracted by surface drawing completely cover the point coordinate data set extracted by volume drawing. The idea of the algorithm is therefore to insert the point coordinate data set of the volume drawing into a Multimap container, with the X-axis coordinate value as the keyword and a Pair array as the value; the Pair holds two independent values, namely the coordinate value on the Y axis and the coordinate value on the Z axis. Because the point coordinate set is centred on the image, the data fall into 8 states according to the positive or negative sign of the X, Y and Z coordinates. The data therefore have to be classified, and a single container is not sufficient: 8 Multimap containers M1, M2, M3, M4, M5, M6, M7 and M8 are built to distinguish the coordinate states, and the data are put into them according to the classification shown in Table 1.
Table 1 Container process data partitioning table
Since the data are classified, the auxiliary point coordinate data of the surface drawing also have to be processed correspondingly: they are divided into the same 8 data types and the 8 containers are handled separately. Let a point of the surface-drawing data set be t(x1, y1, z1); the coordinate data meeting the requirement of container M1 are input into the container, with the X-axis coordinate as the keyword and the Y-axis and Z-axis coordinates placed in the Pair array, and the following experimental algorithm is performed (a Python sketch of the whole fusion follows these steps):
Step one: first, the keyword x1 is searched in the container; if the keyword is not present, the coordinate value of t is entered into a new container M, which is kept consistent with the container M1.
Step two: if the keyword x1 exists, the first value frs1 and the second value sed1 of the Pair data corresponding to that keyword in the surface drawing are compared with the maximum range values, where frs1max denotes the current maximum coordinate value of frs1 and sed1max the current maximum coordinate value of sed1. The data are retained only when y1 ≤ frs1max and z1 ≤ sed1max both hold; when only one of the two conditions is met, the coordinate data in the container are still deleted.
Step three: the data in the new container M are input into the container M1, and the above steps are repeated to finish processing the data of the other 7 containers.
Step four: the data processed by M1, M2, M3, M4, M5, M6, M7 and M8 are added to one container and written into a text file.
The final combined result of the surface-drawing data set and the volume-drawing data set is saved in text format; comparison with the surface-drawing and volume-drawing data sets shows that all points of the surface-drawing data set remain valid. Points of the volume drawing that lie beyond the edge contour line of the surface drawing are discarded, points that do not exceed the contour are kept, and the pruned point cloud data set is inserted directly into the point cloud data set S1.
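A condensed Python sketch of the fusion logic of steps 5.1 to 5.3 and the container procedure above, using a dictionary keyed by the x coordinate in place of the C++ Multimap containers. The rounding used to match x keys, and the choice to delete a volume point as soon as any surface point with the same key is exceeded, are assumptions the text leaves open.

```python
import numpy as np

def fuse_point_clouds(s1, s2, decimals=3):
    """Fuse surface-drawing points s1 with volume-drawing points s2: drop s2
    points that duplicate s1 points or whose y or z reaches beyond the surface
    contour for the same x key, then append the remaining points to s1."""
    s1_keys = {}
    for x, y, z in np.round(s1, decimals):
        s1_keys.setdefault(x, []).append((y, z))            # bounds per x key
    existing = {tuple(p) for p in np.round(s1, decimals)}    # for step 5.1

    kept = []
    for p in s2:
        x, y, z = np.round(p, decimals)
        if (x, y, z) in existing:                            # step 5.1: duplicate point
            continue
        bounds = s1_keys.get(x)
        if bounds and any(y >= by or z >= bz for by, bz in bounds):
            continue                                         # step 5.2: beyond the contour
        kept.append(p)
    return np.vstack([s1, np.array(kept)]) if kept else s1.copy()
```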
Step 6: converting the new point cloud data set into a three-dimensional image through a data image display (Matlab) to obtain a mammary gland prosthesis model;
step 7: the obtained mammary gland prosthesis model is applied to the computer-aided medical diagnosis and is used for the production of the prosthesis model for medical treatment.

Claims (4)

1. The mammary gland prosthesis preparation device based on the point cloud data clustering is characterized by comprising a smooth noise reducer, an image enhancer, a surface drawing reconstructor, a volume drawing reconstructor, a point cloud data extractor, a surface data fusion device and a data image display, wherein after the nuclear magnetic resonance image of a mammary gland is guided into the smooth noise reducer to be subjected to noise reduction treatment, the nuclear magnetic resonance image is guided into the image enhancer to be sequentially subjected to frequency enhancement treatment and spatial domain enhancement treatment, and then respectively guided into the surface drawing reconstructor and the volume drawing reconstructor to respectively obtain a three-dimensional model image, two three-dimensional model images are simultaneously guided into the point cloud data extractor to respectively extract a point cloud data set of each three-dimensional model image, then two groups of point cloud data sets are guided into the surface data fusion device to be subjected to fusion treatment to obtain a fused new point cloud data set, and finally the obtained new point cloud data set is guided into the data image display to construct a mammary gland prosthesis model;
the smooth noise reducer is used for carrying out self-adaptive filtering treatment on the original nuclear magnetic resonance image of the mammary gland to obtain a nuclear magnetic resonance image after noise reduction treatment;
the image enhancer is used for sequentially carrying out frequency-domain enhancement and spatial-domain enhancement on the noise-reduced image to obtain an enhanced nuclear magnetic resonance image;
the surface drawing reconstructor is used for carrying out three-dimensional reconstruction on the nuclear magnetic resonance image after the enhancement treatment to obtain a three-dimensional model image P1;
the volume rendering reconstructor is used for performing three-dimensional reconstruction on the nuclear magnetic resonance image after the enhancement processing to obtain a three-dimensional model image P2;
the point cloud data extractor is used for respectively extracting point cloud data sets S of the three-dimensional model images P1 and P2 1 、S 2
The surface data fusion device is used for carrying out data fusion on the point cloud data sets S1 and S2: the points of the point cloud data set S2 that exceed the edge contour line of the three-dimensional model image P1 are discarded, the points of S2 that are identical to points of the point cloud data set S1 are deleted, and the fused new point cloud data set is finally obtained;
the data image display is used for generating a mammary prosthesis model according to the new point cloud data set.
2. A method for preparing a mammary gland prosthesis by using the mammary gland prosthesis preparation device based on point cloud data clustering as claimed in claim 1, which is characterized by comprising the following steps:
step 1: carrying out noise reduction treatment on the original nuclear magnetic resonance image of the mammary gland through a smooth noise reducer to obtain a nuclear magnetic resonance image after the noise reduction treatment;
step 2: sequentially carrying out frequency-domain enhancement and spatial-domain enhancement on the noise-reduced nuclear magnetic resonance image through the image enhancer to obtain an enhanced nuclear magnetic resonance image;
step 3: carrying out three-dimensional reconstruction on the nuclear magnetic resonance image after the enhancement treatment through a surface drawing reconstructor and a volume drawing reconstructor respectively to obtain three-dimensional model images P1 and P2;
step 4: the three-dimensional model image P1 is processed by the point cloud data extractor to obtain the point cloud data set S1, and the three-dimensional model image P2 is processed by the point cloud data extractor to obtain the point cloud data set S2;
Step 5: set point cloud data S 1 、S 2 Carrying out data fusion by a face data fusion device to obtain a new point cloud data set;
step 6: generating a mammary gland prosthesis model through a data image display by the new point cloud data set;
step 7: the resulting breast prosthesis model is used for the production of a prosthesis model for medical use.
3. The method of preparing a breast prosthesis according to claim 2, wherein in step 3, the three-dimensional model image P1 is obtained by passing the enhanced nuclear magnetic resonance image through the surface rendering reconstructor, comprising:
step 3.1.1: marking voxel attribute values of the nuclear magnetic resonance image after noise reduction processing as 0 or 1 respectively, wherein the voxel attribute values which are larger than or equal to a set threshold value are marked as 1, and the voxel attribute values which are smaller than the set threshold value are marked as 0;
step 3.1.2: generating a triangle patch according to the marked voxel attribute value;
step 3.1.3: calculating the coordinate point of each voxel by using a linear interpolation method according to the voxels corresponding to all the qualified triangular patches, wherein the voxels corresponding to the qualified triangular patches are voxels with the number of intersecting edges of the triangular patches and the voxels being less than or equal to 4;
step 3.1.4: generating a three-dimensional model image P1 according to all the voxel coordinate points;
the enhanced nuclear magnetic resonance image is subjected to volume rendering reconstruction to obtain a three-dimensional model image P2, which comprises the following steps:
step 3.2.1: irradiating the nuclear magnetic resonance image subjected to noise reduction treatment by using a Lambert illumination model, and performing linear interpolation treatment on a transparency value and a color value generated by irradiation to obtain a coordinate point of each voxel;
step 3.2.2: and generating a three-dimensional model image P2 according to all the voxel coordinate points.
4. The method for preparing a mammary prosthesis according to claim 2, wherein the step 5 comprises:
step 5.1: deleting a Point cloud dataset S 2 In a point cloud data set S 1 The same coordinate points in (a);
step 5.2: comparing point cloud data sets S 1 、S 2 Points of the same x-coordinate if satisfiedAnd->Or->Indicating that interpolation deviation occurs in the projection process of volume rendering rays, which is larger than the range of edge contour lines, the +.>Deleting the corresponding coordinate point, wherein->Respectively represent the point cloud data sets S 2 I coordinate point +.>Corresponding ordinate, vertical, +.> Respectively represent the point cloud data sets S 1 J coordinate points of (a)>The corresponding ordinate, vertical, i=1, 2, …, n2, j=1, 2, …, n1, n2 represent the point cloud dataset S, respectively 1 、S 2 The number of the contained coordinates points;
step 5.3: after the deleting operation in the steps 5.1 to 5.2, the point cloud data set S is obtained 2 All the remaining points in (a) are inserted into the point cloud data set S 1 And (3) performing data fusion to generate a new point cloud data set.
CN202011108680.0A 2020-10-16 2020-10-16 Mammary gland prosthesis preparation device and method based on point cloud data clustering Active CN112233791B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011108680.0A CN112233791B (en) 2020-10-16 2020-10-16 Mammary gland prosthesis preparation device and method based on point cloud data clustering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011108680.0A CN112233791B (en) 2020-10-16 2020-10-16 Mammary gland prosthesis preparation device and method based on point cloud data clustering

Publications (2)

Publication Number Publication Date
CN112233791A CN112233791A (en) 2021-01-15
CN112233791B true CN112233791B (en) 2023-12-29

Family

ID=74118841

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011108680.0A Active CN112233791B (en) 2020-10-16 2020-10-16 Mammary gland prosthesis preparation device and method based on point cloud data clustering

Country Status (1)

Country Link
CN (1) CN112233791B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114451636B (en) * 2022-02-09 2023-09-12 河北经贸大学 Conformal insole generation method based on rotary 3D foot scanner

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102495929A (en) * 2011-12-07 2012-06-13 大连交通大学 Digitalization design and manufacture system for titanium alloy skull prosthetic replacement
CN105096372A (en) * 2007-06-29 2015-11-25 3M创新有限公司 Synchronized views of video data and three-dimensional model data
CN105877875A (en) * 2016-05-27 2016-08-24 华南理工大学 Personalized thyroid cartilage prosthesis and production method thereof
CN105912874A (en) * 2016-04-29 2016-08-31 青岛大学附属医院 Liver three-dimensional database system constructed on the basis of DICOM (Digital Imaging and Communications in Medicine) medical image

Also Published As

Publication number Publication date
CN112233791A (en) 2021-01-15

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant