CN117653162A - Sparse projection CBCT reconstruction method, device, equipment and readable storage medium - Google Patents


Info

Publication number
CN117653162A
Authority
CN
China
Prior art keywords
projection
reconstruction
tissue
gray data
tissue mask
Prior art date
Legal status
Pending
Application number
CN202311646892.8A
Other languages
Chinese (zh)
Inventor
李梦寻
涂杰
夏桂松
黄翠
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority claimed from CN202311646892.8A
Publication of CN117653162A

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A sparse projection CBCT reconstruction method, device, equipment and readable storage medium, comprising the following steps: acquiring X-ray projections of a target patient at a plurality of angles; obtaining a whole tissue mask prediction projection for reconstruction and a hard tissue mask prediction projection for reconstruction according to a preset tissue projection prediction model and the X-ray projections; training a preset initial neural field network model under the supervision of the X-ray projections, the whole tissue mask prediction projection for reconstruction and the hard tissue mask prediction projection for reconstruction, to obtain a neural field network model corresponding to the target patient; and obtaining the whole tissue gray data of each coordinate point in the three-dimensional image to be reconstructed according to the neural field network model and the coordinate information of each coordinate point in the three-dimensional image to be reconstructed of the target patient, thereby forming the three-dimensional image to be reconstructed. Different tissues in the CBCT are modeled by multiple unit networks, which improves the interpretability and controllability of the neural field network model and gives the method stronger practical significance.

Description

Sparse projection CBCT reconstruction method, device, equipment and readable storage medium
Technical Field
The application relates to the field of CBCT, in particular to a sparse projection CBCT reconstruction method, device and equipment and a readable storage medium.
Background
Cone beam computed tomography (Cone-beam computed tomography, CBCT) is a medical imaging technique for acquiring three-dimensional images of a human body or object. Its imaging portion consists essentially of an X-ray source and a flat-panel detector, which rotate together around the patient during image acquisition; the X-ray source emits cone-shaped X-ray beams, the detector captures from a plurality of angles the X-ray signals remaining after absorption by the patient, and these captured X-ray projections are used to reconstruct a volumetric image. CBCT has been widely used in the clinical practice of stomatology, where it can significantly improve the diagnosis and treatment efficiency of various diseases; however, frequent CBCT scanning brings potential radiation injury to patients. The overall radiation dose can be reduced by lowering the per-projection radiation dose or by limiting the number of original projections, but either measure inevitably leaves much of the information about the target object missing, resulting in reconstruction results containing a large number of artefacts.
Due to the ill-posed (Ill-posed) nature of the sparse projection reconstruction problem, classical medical image reconstruction methods such as filtered back-projection (Filtered back-projection, FBP) and the algebraic reconstruction technique (Algebraic reconstruction technique, ART) cannot achieve reliable reconstruction results. Conventional solutions typically add regularization constraints during the reconstruction optimization process or apply denoising techniques to the reconstructed image, but these manually designed methods have certain limitations.
With the development of deep learning, deep learning methods have gradually been applied to the sparse projection CT reconstruction problem. However, end-to-end encoder-decoder deep learning makes it difficult to capture the imaging physics, so such methods typically require an extremely large amount of data for pre-training, and their reconstruction effect is limited. Thus, many deep learning methods still build on classical medical image reconstruction methods.
In recent years, coordinate-based neural rendering methods for three-dimensional vision have developed rapidly in the computer vision field. By simulating the imaging rendering process, a coordinate-based neural field is fitted under supervision from limited two-dimensional viewing angles in order to learn three-dimensional spatial information. Such methods can achieve a good learning effect under sparse-input conditions, but most related methods are designed for natural scenes; for example, some models try to integrate physical constraints such as depth and visibility into the three-dimensional reconstruction process, which can further improve learning under sparse viewing angles, but these techniques cannot be effectively matched to the image reconstruction process in the medical field. Therefore, it is necessary to construct a method based on neural rendering and medical image constraints to solve the technical problem of poor medical image reconstruction under the sparse projection condition.
Disclosure of Invention
The application provides a sparse projection CBCT reconstruction method, device and equipment and a readable storage medium, which can solve the technical problems in the background technology.
In a first aspect, an embodiment of the present application provides a sparse projection CBCT reconstruction method, which adopts the following technical scheme:
a sparse projection CBCT reconstruction method comprising the steps of:
acquiring X-ray projections of a target patient at a plurality of angles;
obtaining a whole tissue mask prediction projection and a hard tissue mask prediction projection of the target patient according to a preset tissue projection prediction model and the X-ray projection, and taking them as the whole tissue mask prediction projection for reconstruction and the hard tissue mask prediction projection for reconstruction;
supervising the training of a preset initial neural field network model according to the X-ray projection, the whole tissue mask prediction projection for reconstruction and the hard tissue mask prediction projection for reconstruction, to obtain a neural field network model corresponding to the target patient; the neural field network model is configured to generate, according to coordinate information of any coordinate point in a three-dimensional space, gray data of a plurality of tissue units at the coordinate point, including gray data of a whole tissue mask and gray data of a hard tissue mask, and to obtain the whole tissue gray data at the coordinate point based on the gray data of the plurality of tissue units;
and obtaining the whole tissue gray data of each coordinate point in the three-dimensional image to be reconstructed according to the neural field network model and the coordinate information of each coordinate point in the three-dimensional image to be reconstructed of the target patient, and forming the three-dimensional image to be reconstructed.
With reference to the first aspect, in an implementation manner, the supervising the training of a preset initial neural field network model according to the X-ray projection, the whole tissue mask prediction projection for reconstruction and the hard tissue mask prediction projection for reconstruction to obtain a neural field network model corresponding to the target patient includes:
in the iterative training process, obtaining a loss function of the neural field network model after each iteration according to the X-ray projection, the whole tissue mask prediction projection for reconstruction, the hard tissue mask prediction projection for reconstruction, the whole tissue gray data, the whole tissue mask gray data and the hard tissue mask gray data;
judging whether the loss function is smaller than a set value or not;
and if the loss function is smaller than the set value, ending the training and taking the trained model as the neural field network model corresponding to the target patient.
With reference to the first aspect, in an embodiment, in the iterative training process, the loss function of the neural field network model after each iteration is obtained according to the X-ray projection, the whole tissue mask prediction projection for reconstruction, the hard tissue mask prediction projection for reconstruction, the whole tissue gray data, the whole tissue mask gray data and the hard tissue mask gray data, and the following calculation formula is adopted:
Loss = L_1(Î, I_gt) + λ(t)·(L_2(Â, A_pred) + L_2(B̂, B_pred)),
wherein L_1 is the mean absolute error and L_2 is the mean squared error; Î and I_gt are respectively the projection rendered from the whole tissue gray data at the coordinate points and the gray data of the acquired X-ray projection; Â and A_pred are respectively the projection rendered from the whole tissue mask gray data at the coordinate points and the gray data of the whole tissue mask prediction projection for reconstruction; B̂ and B_pred are respectively the projection rendered from the hard tissue mask gray data at the coordinate points and the gray data of the hard tissue mask prediction projection for reconstruction; and λ(t) is an influence parameter function negatively related to the number of iterations.
In combination with the first aspect, the influence parameter function λ(t) adopts the following calculation formula:
λ(t) = λ_0 · k^(t/T),
wherein t and T respectively represent the current iteration number and the target total number of iterations, λ_0 is the initial value of the parameter, and k ∈ (0, 1).
With reference to the first aspect, in one implementation manner, the tissue projection prediction model is obtained according to the following steps:
acquiring a plurality of CBCT three-dimensional image data from different objects as three-dimensional image data for training;
obtaining X-ray projection for training and real projection of an integral tissue mask and real projection of a hard tissue mask corresponding to the X-ray projection for training according to the three-dimensional image data for training;
supervising the training of a preset tissue projection prediction model according to the whole tissue mask real projection and the hard tissue mask real projection; the tissue projection prediction model is configured to obtain a whole tissue mask prediction projection and a hard tissue mask prediction projection according to the input X-ray projection.
With reference to the first aspect, in one implementation manner, in the step of obtaining, according to the training three-dimensional image data, the training X-ray projection and the whole tissue mask real projection and hard tissue mask real projection corresponding to the training X-ray projection,
the training three-dimensional image data are segmented by observing the numerical distribution of the volume image and applying two set thresholds, so as to obtain the whole tissue mask real projection and the hard tissue mask real projection.
With reference to the first aspect, in one embodiment, the neural field network model is configured to generate, according to coordinate information of any coordinate point in a three-dimensional space, gray data of the whole tissue mask, gray data of the hard tissue mask, a hard tissue texture value and a soft tissue texture value at the coordinate point, and to obtain the whole tissue gray data at the coordinate point based on the gray data of the whole tissue mask, the gray data of the hard tissue mask, the hard tissue texture value and the soft tissue texture value.
In a second aspect, an embodiment of the present application provides a sparse projection CBCT reconstruction device, which adopts the following technical scheme:
a sparse projection CBCT reconstruction device, the sparse projection CBCT reconstruction device comprising:
an acquisition module configured to acquire X-ray projections of a target patient at a plurality of angles;
the training module is configured to obtain a whole tissue mask prediction projection and a hard tissue mask prediction projection of the target patient according to a preset tissue projection prediction model and the X-ray projection, and to take them as the whole tissue mask prediction projection for reconstruction and the hard tissue mask prediction projection for reconstruction; and to supervise, according to the X-ray projection, the whole tissue mask prediction projection for reconstruction and the hard tissue mask prediction projection for reconstruction, the training of a preset initial neural field network model to obtain a neural field network model corresponding to the target patient; wherein the neural field network model is configured to generate, according to coordinate information of any coordinate point in a three-dimensional space, gray data of a plurality of tissue units at the coordinate point, including gray data of a whole tissue mask and gray data of a hard tissue mask, and to obtain the whole tissue gray data at the coordinate point based on the gray data of the plurality of tissue units;
the reconstruction module is configured to obtain the whole tissue gray data of each coordinate point in the three-dimensional image to be reconstructed according to the neural field network model and the coordinate information of each coordinate point in the three-dimensional image to be reconstructed of the target patient, and to form the three-dimensional image to be reconstructed.
In a third aspect, an embodiment of the present application provides a sparse projection CBCT reconstruction device, which adopts the following technical scheme:
a sparse projection CBCT reconstruction device comprising a processor, a memory, and a sparse projection CBCT reconstruction program stored on the memory and executable by the processor, wherein the sparse projection CBCT reconstruction program, when executed by the processor, implements the steps of a sparse projection CBCT reconstruction method as described above.
In a fourth aspect, embodiments of the present application provide a readable storage medium, which adopts the following technical solutions:
a readable storage medium having stored thereon a sparse projection CBCT reconstruction program, wherein the sparse projection CBCT reconstruction program, when executed by a processor, implements the steps of a sparse projection CBCT reconstruction method as described above.
The beneficial effects that technical scheme that this application embodiment provided include:
in the training process of the neural field network model used to construct the three-dimensional image to be reconstructed, the model is supervised from three aspects: the X-ray projection of the target patient, and the whole tissue mask prediction projection and hard tissue mask prediction projection obtained from that X-ray projection. During execution, the neural field network model builds the whole tissue gray data from the gray data of a plurality of tissue units; that is, different tissue features in the CBCT are modeled by a plurality of dedicated unit networks. This improves the interpretability and controllability of the neural field network model, gives the method stronger practical significance, and ultimately improves the effectiveness of the neural field network model in reconstructing the three-dimensional image.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of a sparse projection CBCT reconstruction method according to the present application;
FIG. 2 is a schematic diagram of functional modules of an embodiment of a sparse projection CBCT reconstruction device according to the present disclosure;
fig. 3 is a schematic hardware structure of a sparse projection CBCT reconstruction device according to an embodiment of the present application.
Detailed Description
In order to make the present application solution better understood by those skilled in the art, the following description will clearly and completely describe the technical solution in the embodiments of the present application with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
In a first aspect, embodiments of the present application provide a sparse projection CBCT reconstruction method.
In an embodiment, referring to fig. 1, fig. 1 is a flowchart of a first embodiment of a sparse projection CBCT reconstruction method according to the present application. As shown in fig. 1, the sparse projection CBCT reconstruction method includes:
s100, acquiring X-ray projections of a target patient under a plurality of angles;
s200, according to a preset tissue projection prediction model and the X-ray projection, obtaining an overall tissue mask prediction projection and a hard tissue mask prediction projection of a target patient, wherein the overall tissue mask prediction projection and the hard tissue mask prediction projection are used as an overall tissue mask prediction projection for reconstruction and a hard tissue mask prediction projection for reconstruction;
s300, supervising a neural network initial model set by training according to the X-ray projection, the whole tissue mask prediction projection for reconstruction and the hard tissue mask prediction projection for reconstruction to obtain a neural network model corresponding to the target patient; the neural field network model is configured to generate gray data of a plurality of organization units including gray data of an overall organization mask and gray data of a hard organization mask under a coordinate point according to coordinate information of any coordinate point in a three-dimensional space, and obtain the overall organization gray data under the coordinate point based on the gray data of the plurality of organization units;
s400, obtaining the whole tissue gray data of each coordinate point in the three-dimensional image to be reconstructed according to the neural field network model and the coordinate information of each coordinate point in the three-dimensional image to be reconstructed of the target patient, and forming the three-dimensional image to be reconstructed.
In this embodiment, the neural field network model used for constructing the three-dimensional image to be reconstructed is trained under supervision from three aspects: the X-ray projection of the target patient, and the whole tissue mask prediction projection and hard tissue mask prediction projection obtained from that X-ray projection. During execution, the neural field network model builds the whole tissue gray data from the gray data of a plurality of tissue units; that is, different tissue features in the CBCT are modeled by a plurality of dedicated unit networks. This improves the interpretability and controllability of the neural field network model, gives the method stronger practical significance, and effectively improves the effectiveness of the neural field network model in reconstructing the three-dimensional image.
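The flow of steps S100-S400 can be sketched as follows. The callables `tissue_proj_model` and `train_field` are hypothetical stand-ins for the preset tissue projection prediction model and the supervised field-training procedure, not actual implementations:

```python
def reconstruct(projections, tissue_proj_model, train_field, grid_coords):
    """Sketch of the S100-S400 pipeline; all callables are hypothetical stand-ins."""
    # S200: predict whole-tissue and hard-tissue mask projections from the X-rays
    A_pred, B_pred = tissue_proj_model(projections)
    # S300: supervised training of the per-patient neural field network model
    field = train_field(projections, A_pred, B_pred)
    # S400: query the trained field at every coordinate of the target volume
    return [field(xyz) for xyz in grid_coords]
```

With trivial stub models plugged in, the function simply evaluates the returned field at each grid coordinate, which mirrors the per-coordinate-point queries of step S400.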
Further, in an embodiment, step S300, supervising the training of the preset initial neural field network model according to the X-ray projection, the whole tissue mask prediction projection for reconstruction and the hard tissue mask prediction projection for reconstruction to obtain a neural field network model corresponding to the target patient, includes:
s310, in the iterative training process, obtaining a loss function of the neural field network model after each iteration according to the X-ray projection, the whole tissue mask prediction projection for reconstruction, the hard tissue mask prediction projection for reconstruction, the whole tissue gray data, the whole tissue mask gray data and the hard tissue mask gray data;
s320, judging whether the loss function is smaller than a set value;
and S330, if the loss function is smaller than the set value, ending the training and taking the trained model as the neural field network model corresponding to the target patient.
Specifically, the step S310 adopts the following calculation formula:
Loss = L_1(Î, I_gt) + λ(t)·(L_2(Â, A_pred) + L_2(B̂, B_pred)),
wherein L_1 is the mean absolute error and L_2 is the mean squared error; Î and I_gt are respectively the projection rendered from the whole tissue gray data at the coordinate points and the gray data of the acquired X-ray projection; Â and A_pred are respectively the projection rendered from the whole tissue mask gray data at the coordinate points and the gray data of the whole tissue mask prediction projection for reconstruction; B̂ and B_pred are respectively the projection rendered from the hard tissue mask gray data at the coordinate points and the gray data of the hard tissue mask prediction projection for reconstruction; and λ(t) is an influence parameter function negatively related to the number of iterations.
The influence parameter function λ(t) adopts the following calculation formula:
λ(t) = λ_0 · k^(t/T),
wherein t and T respectively represent the current iteration number and the target total number of iterations, λ_0 is the initial value of the parameter, and k ∈ (0, 1).
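As a minimal numerical sketch of the supervision described above (the rendering of the projections Î, Â, B̂ from the field is omitted, and the exact combined form of the loss and the exponential decay of λ(t) are assumptions consistent with the description, not the patent's verbatim formula):

```python
def influence(t, T, lam0=1.0, k=0.1):
    # lambda(t) = lam0 * k**(t/T): decays from lam0 toward lam0*k as t -> T, k in (0, 1)
    return lam0 * k ** (t / T)

def l1(pred, gt):
    # mean absolute error between two flattened projections
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(pred)

def l2(pred, gt):
    # mean squared error between two flattened projections
    return sum((p - g) ** 2 for p, g in zip(pred, gt)) / len(pred)

def total_loss(I_hat, I_gt, A_hat, A_pred, B_hat, B_pred, t, T):
    # projection term supervised by the X-rays; mask terms by the predicted projections,
    # weighted by the decaying influence function
    return l1(I_hat, I_gt) + influence(t, T) * (l2(A_hat, A_pred) + l2(B_hat, B_pred))
```

The mask terms dominate early in training and fade as t approaches T, matching the "negatively related to the number of iterations" requirement.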
Further, in an embodiment, the tissue projection prediction model is obtained according to the following steps:
f100, acquiring a plurality of CBCT three-dimensional image data from different objects as three-dimensional image data for training;
f200, obtaining X-ray projection for training and integral tissue mask real projection and hard tissue mask real projection corresponding to the X-ray projection for training according to the three-dimensional image data for training;
f300, supervising the training set tissue projection prediction model according to the real projection of the whole tissue mask and the real projection of the hard tissue mask; the tissue projection prediction model is configured to obtain an overall tissue mask prediction projection and a hard tissue mask prediction projection according to the input X-ray projection.
Further, in the step F200, in obtaining, according to the training three-dimensional image data, the training X-ray projection and the whole tissue mask real projection and hard tissue mask real projection corresponding to the training X-ray projection,
the training three-dimensional image data are segmented by observing the numerical distribution of the volume image and applying two set thresholds, so as to obtain the whole tissue mask real projection and the hard tissue mask real projection.
Specifically, for clinically obtaining a plurality of CBCT three-dimensional image data, firstly establishing X-ray projection corresponding to the CBCT three-dimensional image data. And then, by observing the numerical distribution condition of the volume image in the CBCT three-dimensional image data, selecting two thresholds to divide the three-dimensional image so as to obtain the real projection of the whole tissue mask and the real projection of the hard tissue mask corresponding to the CBCT three-dimensional image data.
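A minimal sketch of the two-threshold segmentation described above; the threshold values and the flat voxel list are illustrative assumptions, since the patent only specifies that two thresholds are chosen by observing the numerical distribution:

```python
def tissue_masks(voxels, t_tissue, t_bone):
    """Split a volume into whole-tissue and hard-tissue binary masks via two thresholds."""
    whole = [1 if v >= t_tissue else 0 for v in voxels]  # everything denser than air
    hard = [1 if v >= t_bone else 0 for v in voxels]     # bone and other hard tissue
    return whole, hard

# e.g. air ~0.0, soft tissue ~0.3, bone ~0.8 (illustrative intensity values)
whole, hard = tissue_masks([0.0, 0.3, 0.8], t_tissue=0.1, t_bone=0.5)
```

Projecting these binary masks along the same ray geometry as the X-ray projections would then yield the whole tissue mask real projection and the hard tissue mask real projection.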
Further, in an embodiment, the neural field network model is configured to generate gray data of the whole tissue mask, gray data of the hard tissue mask, hard tissue texture values and soft tissue texture values under the coordinate points according to coordinate information of any coordinate point in the three-dimensional space, and obtain the whole tissue gray data under the coordinate points based on the gray data of the whole tissue mask, the gray data of the hard tissue mask, the hard tissue texture values and the soft tissue texture values.
Specifically, the CBCT image intensity field is first decoupled into 4 different component fields and a global scalar, according to the following formula:
σ(x) = (α(x) + ε)(β(x)·v_b(x) + v_s(x)),
wherein α represents the gray data of the whole tissue mask, β the gray data of the hard tissue mask, and v_b and v_s the texture values of hard and soft tissue respectively. ε is a small, position-independent global scalar that ensures low intensity values outside α (e.g., air) remain well defined.
The decoupled representation has the following advantages and features. The formula divides the intensity field into 4 contextually meaningful components, α, β, v_b and v_s, corresponding respectively to the separation of soft and hard tissue and to appearance and material; this increases the interpretability of the network and makes its output easy to control, so that the reconstruction process can be better controlled. Under the mild assumption that v_b and v_s vary gently in a local neighbourhood, bone tissue generally exhibits locally higher intensity values than soft tissue, i.e. σ(α=1, β=1) > σ(α=1, β=0). This is because the bone tissue portion can be regarded as superimposed on the soft tissue material: when β = 0, the formula above degenerates to the soft-tissue-only case. This representation conforms to the numerical distribution of the soft and hard tissue gray data, and more readily achieves the desired gray distribution characteristics.
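The decoupling formula and the bone-over-soft-tissue ordering can be checked numerically; the sample values chosen for v_b and v_s below are illustrative, not from the patent:

```python
def sigma(alpha, beta, v_b, v_s, eps=1e-3):
    # sigma(x) = (alpha(x) + eps) * (beta(x) * v_b(x) + v_s(x))
    return (alpha + eps) * (beta * v_b + v_s)

bone = sigma(alpha=1.0, beta=1.0, v_b=0.6, v_s=0.3)  # hard tissue voxel
soft = sigma(alpha=1.0, beta=0.0, v_b=0.6, v_s=0.3)  # soft tissue only (beta = 0)
air = sigma(alpha=0.0, beta=0.0, v_b=0.6, v_s=0.3)   # outside the whole tissue mask
```

With positive texture values, the bone voxel always reads higher than the soft-tissue voxel, and voxels outside the mask stay near zero, as the text argues.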
Meanwhile, on the premise that the tissue segmentation is a binary classification for which an ideal segmentation result can be obtained, the numerical values of the material representation are relatively smooth. Compared with directly predicting σ, predicting the binary classification outputs (α, β) and the low-frequency signals (v_b, v_s) is easier for the network to learn.
In particular, in the clinic, the lateral cephalometric radiograph (X-ray skull film) is one of the conventional tools in the orthodontic diagnosis process: it produces clear and detailed hard tissue images and is often used to evaluate the bony structure. The tissue decoupling makes full use of this clinical experience; using the decoupled soft and hard tissues to provide additional supervision for the neural field network training can greatly improve the learning efficiency of the model and the reconstruction quality of the images under sparse-input conditions.
In order to make full use of the advantages of the decoupled representation and to facilitate supervision of the training process, the neural field network model provided by the invention is mainly based on the following design principles. First, the network needs to output four elements, α, β, v_b and v_s, corresponding respectively to soft/hard tissue separation and to appearance and material; however, simply adjusting the last layer of an MLP to a 4-channel output and adding tissue-based supervision is not sufficient to produce optimal results. Because shape and tissue material are different in nature, a fully connected MLP, in which all features are shared from layer to layer, has difficulty learning such a diverse representation. In particular, when a strongly parameterized position encoding is employed, high-frequency details are learned mostly within the position encoding, making it difficult for the network to learn semantically diverse representations. Moreover, the network only supervises the shape branches α and β, while the texture branches v_b and v_s are unconstrained, which can cause the network to produce predictions on these branches that contradict the low-frequency results. Therefore, the method employs 4 separate, embedded, small MLPs to construct the network, where each MLP has its own set of weights and biases for learning the decoupled features separately. In addition, introducing smaller networks reduces the complexity of the network and the number of network parameters, and accelerates training. In particular, grid-based representations can reduce network size, speed up network training and yield better reconstruction results in neural implicit scene representation. The method therefore uses a hash grid as the position encoder; compared with other related methods, it converges faster and performs better under the sparse-view condition.
The constructed neural field network model is structured as follows: a four-branch network with depth 4 and width 32, using Sigmoid as the final activation function; the network outputs 4 channels, which after decoupling are respectively α, β, v_b and v_s. The model is based on a hash-grid position encoder with 4-channel features, followed by a ModuleList containing four Sequential modules; each module is a sub-network for processing a different feature type, and the multi-scale four-dimensional features output by the hash grid are fed into the four modules respectively. The first two Sequential modules each contain two parts: an MLP and a Sigmoid layer. The MLP is a multi-layer perceptron that maps the input features to an output value; the Sigmoid layer is an activation function that compresses the output value to between 0 and 1, indicating a probability or confidence. The last two Sequential modules each contain only two single layers: a Linear layer and a Sigmoid layer. The Linear layer is a linear transformation that maps the input features to a one-dimensional output value; the Sigmoid layer functions as above. Finally, a combination module, containing only the global parameter ε, combines the output values of the four sub-networks into the final output value of the whole network.
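A pure-Python sketch of this four-branch structure. The hash-grid position encoder is replaced by a generic feature vector, the branch depths are reduced, and all dimensions and initializations are illustrative; a real implementation would use a deep learning framework:

```python
import math
import random

def dense(x, w, b):
    # y = W x + b for plain Python lists
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(w, b)]

def sigmoid(v):
    return [1.0 / (1.0 + math.exp(-u)) for u in v]

def make_branch(in_dim, hidden, rng, linear_only=False):
    """Shape branches: small MLP + Sigmoid; texture branches: Linear + Sigmoid."""
    if linear_only:
        w = [[rng.gauss(0, 0.1) for _ in range(in_dim)]]
        return lambda x: sigmoid(dense(x, w, [0.0]))[0]
    w1 = [[rng.gauss(0, 0.1) for _ in range(in_dim)] for _ in range(hidden)]
    w2 = [[rng.gauss(0, 0.1) for _ in range(hidden)]]
    return lambda x: sigmoid(
        dense([max(0.0, u) for u in dense(x, w1, [0.0] * hidden)], w2, [0.0])
    )[0]

def make_field(feat_dim=8, hidden=32, eps=1e-3, seed=0):
    rng = random.Random(seed)
    alpha = make_branch(feat_dim, hidden, rng)                   # whole tissue mask
    beta = make_branch(feat_dim, hidden, rng)                    # hard tissue mask
    v_b = make_branch(feat_dim, hidden, rng, linear_only=True)   # hard tissue texture
    v_s = make_branch(feat_dim, hidden, rng, linear_only=True)   # soft tissue texture
    def field(feats):
        # combination module: sigma = (alpha + eps) * (beta * v_b + v_s)
        return (alpha(feats) + eps) * (beta(feats) * v_b(feats) + v_s(feats))
    return field
```

Each branch keeps its own weights, mirroring the four separate Sequential modules, and the closing combination mirrors the global-parameter module that merges the four outputs.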
In a second aspect, embodiments of the present application further provide a sparse projection CBCT reconstruction device.
In an embodiment, referring to fig. 2, fig. 2 is a schematic functional block diagram of an embodiment of a sparse projection CBCT reconstruction device according to the present application. As shown in fig. 2, the sparse projection CBCT reconstruction device includes:
an acquisition module configured to acquire X-ray projections of a target patient at a plurality of angles;
the training module is configured to obtain a whole tissue mask prediction projection and a hard tissue mask prediction projection of the target patient according to a preset tissue projection prediction model and the X-ray projections, these being used as the whole tissue mask prediction projection for reconstruction and the hard tissue mask prediction projection for reconstruction; and to supervise, according to the X-ray projections, the whole tissue mask prediction projection for reconstruction and the hard tissue mask prediction projection for reconstruction, training of a set neural field network initial model to obtain a neural field network model corresponding to the target patient; the neural field network model is configured to generate, according to the coordinate information of any coordinate point in three-dimensional space, gray data of a plurality of tissue units at that coordinate point, including gray data of the whole tissue mask and gray data of the hard tissue mask, and to obtain the whole tissue gray data at that coordinate point based on the gray data of the plurality of tissue units;
the reconstruction module is configured to obtain the whole tissue gray data of each coordinate point in the three-dimensional image to be reconstructed according to the neural field network model and the coordinate information of each coordinate point in the three-dimensional image to be reconstructed of the target patient, and to form the three-dimensional image to be reconstructed.
The functional implementation of each module in the sparse projection CBCT reconstruction device corresponds to each step in the sparse projection CBCT reconstruction method embodiment, and the functions and implementation processes thereof are not described in detail herein.
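The reconstruction module described above amounts to evaluating the trained field at every voxel coordinate and reshaping the results into a volume. A minimal sketch, in which a placeholder function stands in for the trained neural field network model and the grid size and coordinate range are illustrative:

```python
import numpy as np

def reconstruct_volume(model, shape=(64, 64, 64), batch=4096):
    """Query a trained coordinate-to-gray-value field at every voxel centre
    and assemble the 3-D image to be reconstructed."""
    zs, ys, xs = [np.linspace(-1.0, 1.0, n) for n in shape]
    grid = np.stack(np.meshgrid(zs, ys, xs, indexing="ij"), axis=-1)
    coords = grid.reshape(-1, 3)
    out = np.empty(len(coords))
    for i in range(0, len(coords), batch):   # batched queries keep memory bounded
        out[i:i + batch] = model(coords[i:i + batch])
    return out.reshape(shape)

# placeholder field: radially decaying gray values in place of the trained model
demo_model = lambda p: np.exp(-np.sum(p ** 2, axis=-1))
vol = reconstruct_volume(demo_model, shape=(16, 16, 16))
print(vol.shape)   # (16, 16, 16)
```

Batching the coordinate queries is a practical detail rather than something the patent specifies; for a full-resolution CBCT volume the number of voxels is large enough that querying them all at once may exhaust memory.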
In a third aspect, embodiments of the present application provide a sparse projection CBCT reconstruction device, which may be a device with a data processing function, such as a personal computer (PC), a notebook computer, or a server.
Referring to fig. 3, fig. 3 is a schematic hardware structure of a sparse projection CBCT reconstruction device according to an embodiment of the present application. In an embodiment of the present application, the sparse projection CBCT reconstruction device may include a processor, a memory, a communication interface, and a communication bus.
The communication bus may be of any type for implementing the processor, memory, and communication interface interconnections.
The communication interfaces include input/output (I/O) interfaces, physical interfaces, logical interfaces, and the like for implementing device interconnections inside the sparse projection CBCT reconstruction device, as well as interfaces for interconnecting the sparse projection CBCT reconstruction device with other devices (e.g., other computing devices or user devices). The physical interface may be an Ethernet interface, a fiber optic interface, an ATM interface, etc.; the user device may be a display, a keyboard, or the like.
The memory may be various types of storage media, such as random access memory (RAM), read-only memory (ROM), non-volatile RAM (NVRAM), flash memory, optical memory, hard disk, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), and the like.
The processor may be a general-purpose processor, and the general-purpose processor may call a sparse projection CBCT reconstruction program stored in the memory, and execute the sparse projection CBCT reconstruction method provided in the embodiment of the present application. For example, the general purpose processor may be a central processing unit (central processing unit, CPU). The method performed when the sparse projection CBCT reconstruction program is called may refer to various embodiments of the sparse projection CBCT reconstruction method of the present application, and will not be described herein.
Those skilled in the art will appreciate that the hardware configuration shown in fig. 3 is not limiting of the application and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
In a fourth aspect, embodiments of the present application also provide a readable storage medium.
The sparse projection CBCT reconstruction program is stored on a readable storage medium, and when the sparse projection CBCT reconstruction program is executed by a processor, the steps of the sparse projection CBCT reconstruction method are realized.
The method implemented when the sparse projection CBCT reconstruction program is executed may refer to various embodiments of the sparse projection CBCT reconstruction method of the present application, and will not be described herein.
It should be noted that, the foregoing embodiment numbers are merely for describing the embodiments, and do not represent the advantages and disadvantages of the embodiments.
The terms "comprising" and "having" and any variations thereof in the description and claims of the present application and in the foregoing drawings are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may include other steps or elements not listed or inherent to such process, method, article, or apparatus. The terms "first," "second," "third," etc. are used to distinguish between different objects and not to describe a sequential or chronological order.
In the description of embodiments of the present application, "exemplary," "such as," or "for example," etc., are used to indicate an example, instance, or illustration. Any embodiment or design described herein as "exemplary," "such as" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary," "such as" or "for example," etc., is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. The term "and/or" merely describes an association between the associated objects, indicating that three relationships may exist; for example, "A and/or B" may indicate: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "plural" means two or more.
Some of the processes described in the embodiments of the present application include a plurality of operations or steps presented in a particular order, but it should be understood that these operations or steps may be performed out of that order or in parallel; the sequence numbers merely distinguish the operations and do not themselves represent any order of execution. In addition, the processes may include more or fewer operations, these operations or steps may be performed in sequence or in parallel, and the operations or steps may be combined.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described method may be implemented by means of software plus a necessary general hardware platform, or of course by hardware, though in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above, comprising several instructions for causing a terminal device to perform the methods described in the various embodiments of the present application.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the claims, and all equivalent structures or equivalent processes using the descriptions and drawings of the present application, or direct or indirect application in other related technical fields are included in the scope of the claims of the present application.

Claims (10)

1. The sparse projection CBCT reconstruction method is characterized by comprising the following steps of:
acquiring X-ray projections of a target patient at a plurality of angles;
obtaining a whole tissue mask prediction projection and a hard tissue mask prediction projection of the target patient according to a preset tissue projection prediction model and the X-ray projections, and taking them as a whole tissue mask prediction projection for reconstruction and a hard tissue mask prediction projection for reconstruction;
according to the X-ray projections, the whole tissue mask prediction projection for reconstruction and the hard tissue mask prediction projection for reconstruction, supervising training of a set neural field network initial model to obtain a neural field network model corresponding to the target patient; the neural field network model is configured to generate, according to the coordinate information of any coordinate point in three-dimensional space, gray data of a plurality of tissue units at that coordinate point, including gray data of the whole tissue mask and gray data of the hard tissue mask, and to obtain the whole tissue gray data at that coordinate point based on the gray data of the plurality of tissue units;
and obtaining the whole tissue gray data of each coordinate point in the three-dimensional image to be reconstructed according to the neural field network model and the coordinate information of each coordinate point in the three-dimensional image to be reconstructed of the target patient, and forming the three-dimensional image to be reconstructed.
2. The sparse projection CBCT reconstruction method of claim 1, wherein said supervising training of the set neural field network initial model according to the X-ray projections, the whole tissue mask prediction projection for reconstruction, and the hard tissue mask prediction projection for reconstruction to obtain a neural field network model corresponding to the target patient comprises:
in the iterative training process, obtaining a loss function of the neural field network model after each iteration according to the X-ray projection, the whole tissue mask prediction projection for reconstruction, the hard tissue mask prediction projection for reconstruction, the whole tissue gray data, the whole tissue mask gray data and the hard tissue mask gray data;
judging whether the loss function is smaller than a set value or not;
and if the loss function is smaller than the set value, ending training of the neural field network initial model and taking the trained model as the neural field network model corresponding to the target patient.
3. The sparse projection CBCT reconstruction method of claim 2, wherein in the iterative training process, a loss function of the neural field network model after each iteration is obtained according to the X-ray projection, the global tissue mask prediction projection for reconstruction, the hard tissue mask prediction projection for reconstruction, the global tissue gray data, the global tissue mask gray data, and the hard tissue mask gray data, and the following calculation formula is adopted:
wherein L is 1 Is the average absolute error, L 2 As the average of the square error of the signal,gt respectively corresponding to the gray data of the whole tissue under the coordinate point and the gray data of the X-ray projection, < >>A pred Gray data of the whole tissue under the corresponding coordinate points and gray data of the projection predicted by the whole tissue mask for reconstruction are respectively +.>B pred The gray data of the hard tissue mask under the corresponding coordinate point and the gray data of the projection predicted by the hard tissue mask for reconstruction are respectively, and lambda (t) is an influence parameter function which is negatively related to the iteration times.
4. A sparse projection CBCT reconstruction method according to claim 3 wherein the influencing parameter function λ (t) is calculated using the following formula:
λ(t) = λ₀ · k^(t/T)

wherein t and T respectively represent the current iteration and the target total number of iterations, λ₀ is the initial value of the parameter, and k ∈ (0, 1).
5. The sparse projection CBCT reconstruction method of claim 1, wherein the tissue projection prediction model is derived from:
acquiring a plurality of CBCT three-dimensional image data from different objects as three-dimensional image data for training;
obtaining X-ray projections for training, together with the corresponding whole tissue mask real projections and hard tissue mask real projections, according to the training three-dimensional image data;
supervising, according to the whole tissue mask real projections and the hard tissue mask real projections, training of a set tissue projection prediction model; the tissue projection prediction model is configured to obtain a whole tissue mask prediction projection and a hard tissue mask prediction projection from the input X-ray projection.
6. The sparse projection CBCT reconstruction method of claim 5, wherein, in obtaining the training X-ray projections and the corresponding whole tissue mask real projections and hard tissue mask real projections from the training three-dimensional image data,
the training three-dimensional image data are segmented according to the observed intensity distribution of the volume image and two set thresholds, so as to obtain the whole tissue mask real projection and the hard tissue mask real projection.
7. The sparse projection CBCT reconstruction method of claim 1, wherein the neural field network model is configured to generate gray data of an entire tissue mask, gray data of a hard tissue mask, hard tissue texture values, and soft tissue texture values at any one coordinate point in a three-dimensional space according to coordinate information of the coordinate point, and obtain the entire tissue gray data at the coordinate point based on the gray data of the entire tissue mask, the gray data of the hard tissue mask, the hard tissue texture values, and the soft tissue texture values.
8. A sparse projection CBCT reconstruction device, comprising:
an acquisition module configured to acquire X-ray projections of a target patient at a plurality of angles;
the training module is configured to obtain a whole tissue mask prediction projection and a hard tissue mask prediction projection of the target patient according to a preset tissue projection prediction model and the X-ray projections, these being used as the whole tissue mask prediction projection for reconstruction and the hard tissue mask prediction projection for reconstruction; and to supervise, according to the X-ray projections, the whole tissue mask prediction projection for reconstruction and the hard tissue mask prediction projection for reconstruction, training of a set neural field network initial model to obtain a neural field network model corresponding to the target patient; the neural field network model is configured to generate, according to the coordinate information of any coordinate point in three-dimensional space, gray data of a plurality of tissue units at that coordinate point, including gray data of the whole tissue mask and gray data of the hard tissue mask, and to obtain the whole tissue gray data at that coordinate point based on the gray data of the plurality of tissue units;
the reconstruction module is configured to obtain the whole tissue gray data of each coordinate point in the three-dimensional image to be reconstructed according to the neural field network model and the coordinate information of each coordinate point in the three-dimensional image to be reconstructed of the target patient, and to form the three-dimensional image to be reconstructed.
9. A sparse projection CBCT reconstruction device comprising a processor, a memory, and a sparse projection CBCT reconstruction program stored on the memory and executable by the processor, wherein the sparse projection CBCT reconstruction program, when executed by the processor, implements the steps of the sparse projection CBCT reconstruction method of any one of claims 1 to 7.
10. A readable storage medium, wherein a sparse projection CBCT reconstruction program is stored on the readable storage medium, wherein the sparse projection CBCT reconstruction program, when executed by a processor, implements the steps of the sparse projection CBCT reconstruction method of any one of claims 1 to 7.
CN202311646892.8A 2023-11-30 2023-11-30 Sparse projection CBCT reconstruction method, device, equipment and readable storage medium Pending CN117653162A (en)


Publication: CN117653162A, published 2024-03-08



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination