CN111179366A - Low-dose image reconstruction method and system based on anatomical difference prior - Google Patents

Low-dose image reconstruction method and system based on anatomical difference prior

Info

Publication number
CN111179366A
CN111179366A
Authority
CN
China
Prior art keywords
image
low
dose image
network
dose
Prior art date
Legal status
Granted
Application number
CN201911312709.4A
Other languages
Chinese (zh)
Other versions
CN111179366B (en)
Inventor
胡战利
梁栋
黄振兴
杨永峰
刘新
郑海荣
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201911312709.4A priority Critical patent/CN111179366B/en
Publication of CN111179366A publication Critical patent/CN111179366A/en
Application granted granted Critical
Publication of CN111179366B publication Critical patent/CN111179366B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/003: Reconstruction from projections, e.g. tomography
    • G06T 11/005: Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The invention provides a low-dose image reconstruction method and system based on an anatomical-difference prior. The method comprises the following steps: determining the weights of different body parts in the low-dose image according to prior information on anatomical structure differences; constructing a generation network that takes a low-dose image as input, extracts features, fuses the weights of the different parts during feature extraction, and outputs a predicted image; constructing a discrimination network that takes the predicted image and a standard-dose image as input, with distinguishing the predicted image from the standard-dose image as a first optimization objective and identifying the different parts of the predicted image as a second optimization objective; jointly training the generation network and the discrimination network to obtain a mapping between low-dose and standard-dose images; and performing low-dose image reconstruction using the obtained mapping. The invention can obtain more accurate, higher-definition images.

Description

Low-dose image reconstruction method and system based on anatomical difference prior
Technical Field
The invention relates to the technical field of medical image processing, in particular to a low-dose image reconstruction method and system based on anatomical structure difference prior.
Background
Computed tomography (CT) is an important imaging modality for obtaining internal structural information of an object non-destructively. It offers high resolution, high sensitivity, and multi-level imaging, and is widely used in clinical examination. However, because CT scanning requires X-rays, the radiation dose delivered by CT has become an increasing concern as awareness of the potential hazards of radiation grows. The ALARA (As Low As Reasonably Achievable) principle requires that the radiation dose to the patient be minimized while still meeting the needs of clinical diagnosis. Therefore, developing new low-dose CT imaging methods that preserve CT image quality while reducing harmful radiation has important scientific significance and application prospects in the field of medical diagnosis.
The main problems of existing low-dose image reconstruction methods are: full sampling is usually required, making CT scan times long; the large data volume acquired by full sampling slows image reconstruction; long scan times introduce artifacts caused by patient motion; most algorithms are designed for only a few body parts, so their robustness is poor; and the patient is exposed to a high CT radiation dose. In addition, prior approaches to low-dose CT imaging ignore the large anatomical differences among low-dose images (for example, the anatomical structures of the cranium and the abdomen differ markedly), which degrades the definition of the reconstructed images.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a low-dose image reconstruction method and system based on anatomical structure differences.
According to a first aspect of the invention, a low-dose image reconstruction method based on an anatomical-difference prior is provided. The method comprises the following steps:
determining the weights of different parts in the low-dose image according to prior information on anatomical structure differences;
constructing a generation network that takes a low-dose image as input to extract features, fuses the weights of the different parts during feature extraction, and outputs a predicted image;
constructing a discrimination network that takes the predicted image and a standard-dose image as input, with distinguishing the predicted image from the standard-dose image as a first optimization objective and identifying the different parts of the predicted image as a second optimization objective, and jointly training the generation network and the discrimination network to obtain the mapping between low-dose and standard-dose images;
and performing low-dose image reconstruction using the obtained mapping.
In one embodiment, determining the weights of the different parts in the low-dose image based on prior information on anatomical differences comprises the sub-steps of:
constructing a weight prediction module comprising a plurality of convolutional layers and a Sigmoid activation function;
and one-hot encoding the different parts of the low-dose image, feeding the codes sequentially through the plurality of convolutional layers, and generating the weights of the different parts with the Sigmoid activation function.
In one embodiment, the generation network includes a plurality of cascaded attribute augmentation modules, which multiply the features extracted from the input low-dose image by the weights of the different parts to obtain weighted features and fuse the extracted features with the weighted features; each attribute augmentation module comprises, in order, a down-sampling layer, a ReLU layer, an up-sampling layer, a feature union layer, and a feature fusion layer.
In one embodiment, the discriminative network includes a plurality of convolutional layers and two fully-connected layers.
In one embodiment, for a given training dataset D = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}, where x = {x_1, x_2, ..., x_n} are image blocks extracted from the low-dose image, y = {y_1, y_2, ..., y_n} are image blocks extracted from the standard-dose image, a = {a_1, a_2, ..., a_n} are the weights corresponding to the different parts, and n is the total number of training samples, the parameters of the generation network are obtained during joint training by minimizing the mean-squared-error objective:

$$\hat{\Theta} = \arg\min_{\Theta} \frac{1}{n} \sum_{i=1}^{n} \left\| G(y_i; a_i; \Theta) - x_i \right\|^2$$

where Θ represents a parameter of the generation network and G represents the mapping of the generation network.
In one embodiment, the loss function for the first optimization objective is set to a Wasserstein adversarial loss with gradient penalty:

$$L_{WGAN} = -E_x\left[D_d(x)\right] + E_y\left[D_d\left(G(y; a; \Theta)\right)\right] + \beta\, E_{\hat{x}}\left[\left(\left\|\nabla_{\hat{x}} D_d(\hat{x})\right\|_2 - 1\right)^2\right]$$

where E denotes expectation, β denotes the balance factor, and D_d denotes the process of discriminating authenticity.
In one embodiment, the loss function for the second optimization objective is set to:

$$L_{Attribute} = E_x\left[D_a(x) - a\right] + E_y\left[D_a\left(G(y; a; \Theta)\right) - a\right]$$

where E denotes expectation and D_a denotes the process of identifying the part attribute.
According to a second aspect of the invention, a low-dose image reconstruction system based on an anatomical-difference prior is provided. The system comprises:
a weight prediction module for determining the weights of different parts in the low-dose image from prior information on anatomical differences;
a network construction and training module for constructing a generation network that takes a low-dose image as input to extract features, fuses the weights of the different parts during feature extraction, and outputs a predicted image; and for constructing a discrimination network that takes the predicted image and a standard-dose image as input, with distinguishing the predicted image from the standard-dose image as a first optimization objective and identifying the different parts of the predicted image as a second optimization objective, jointly training the generation network and the discrimination network to obtain the mapping between low-dose and standard-dose images;
and an image reconstruction module for performing low-dose image reconstruction using the obtained mapping.
Compared with the prior art, the invention has the following advantages: image content information and part information are fused using anatomical structure differences, improving the network's ability to generate anatomical structures; and attribute constraints are added to the adversarial network, improving its perception of anatomy. The invention improves network performance, so that the reconstructed image retains image details well and has a clearer structure.
Drawings
The invention is illustrated and described in the following drawings by way of example only and not by way of limitation of its scope:
FIG. 1 is a flow diagram of a low dose image reconstruction method based on anatomical difference priors according to one embodiment of the present invention;
FIG. 2 is an architecture diagram of a weight prediction module according to one embodiment of the invention;
FIG. 3 is an architectural diagram of generating a countermeasure network according to one embodiment of the invention;
FIG. 4 is a schematic diagram of a reference standard image according to one embodiment of the present invention;
FIG. 5 is a schematic illustration of a sparsely sampled low dose image according to one embodiment of the invention;
FIG. 6 is a schematic diagram of a reconstructed image according to one embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions, design methods, and advantages of the present invention more apparent, the present invention will be further described in detail by specific embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not as a limitation. Thus, other examples of the exemplary embodiments may have different values.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In short, the low-dose image reconstruction method based on an anatomical-difference prior provided by the embodiments of the present invention accounts for the differing anatomical structures of the input images by introducing prior information about the anatomical region (its attribute) into the network framework in the form of weights. The same anatomical regions share the same weights, and different anatomical regions have different weights. In this way, data from multiple body parts can be integrated into a unified model framework. To improve the visual quality of the images, a Wasserstein generative adversarial network (WGAN) is introduced; and, because the low-dose image and the estimated normal-dose image come from the same anatomical part, an attribute loss is proposed to define the attribute distance between the estimated image and the real image. Through these loss constraints, the low-dose image reconstruction method obtains clearer images.
Specifically, referring to fig. 1, the low-dose image reconstruction method according to the embodiment of the present invention includes the following steps:
step S110, determining the weight of different parts in the low-dose image according to the prior information of the anatomical structure difference.
The weights of the different parts are determined, for example, by the weight prediction module of Fig. 2. Each input low-dose image has a corresponding attribute (its body part), which is first one-hot encoded. Six convolutional layers (1x1 convolution kernels) are applied, and a Sigmoid activation function finally generates a 64-channel weight mask. Similar to the U-Net structure, the channels are first compressed and then expanded, and convolutional layers with the same number of channels are connected by short (skip) connections to retain more contextual information; for example, in Fig. 2 the first layer (1x1x64) and the fifth layer (1x1x64), counted from bottom to top, are joined by a short connection, as are the second layer (1x1x32) and the fourth layer (1x1x32). The weight prediction module can thus generate the weight corresponding to each part from the input attribute.
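As a concrete illustration, this weight prediction can be sketched in NumPy. Because every convolution is 1x1 and the attribute code has a 1x1 spatial extent, each layer reduces to a dense matrix product. The 10-way one-hot attribute code, the exact layer widths (64-32-16-32-64), the random weights, and the skip-connection placement are assumptions for illustration only; a real implementation would use trained convolutional layers in a deep-learning framework.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def weight_prediction(attr_onehot, layers):
    """Map a one-hot anatomical attribute to a 64-channel weight mask.

    Mirrors the U-net-like compression/expansion: channels shrink
    64 -> 32 -> 16 and grow back 16 -> 32 -> 64, with short (skip)
    connections joining layers of equal width.
    """
    h1 = np.maximum(layers[0] @ attr_onehot, 0)  # 10 -> 64
    h2 = np.maximum(layers[1] @ h1, 0)           # 64 -> 32
    h3 = np.maximum(layers[2] @ h2, 0)           # 32 -> 16
    h4 = np.maximum(layers[3] @ h3, 0) + h2      # 16 -> 32, skip from h2
    h5 = np.maximum(layers[4] @ h4, 0) + h1      # 32 -> 64, skip from h1
    return sigmoid(layers[5] @ h5)               # final 64-channel mask

rng = np.random.default_rng(0)
dims = [(64, 10), (32, 64), (16, 32), (32, 16), (64, 32), (64, 64)]
layers = [rng.standard_normal(d) * 0.1 for d in dims]

attr = np.zeros(10)
attr[3] = 1.0  # hypothetical one-hot code for one of the ten parts
mask = weight_prediction(attr, layers)
print(mask.shape)  # (64,)
```

The Sigmoid keeps every channel weight strictly inside (0, 1), so the mask can be multiplied directly onto a 64-channel feature map.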
The structural parts referred to herein include, for example, the skull, orbit, sinuses, neck, lung cavity, abdomen, pelvic cavity, knee, and lumbar spine, among others.
It should be noted that those skilled in the art may modify the weight prediction module of Fig. 2 to suit the application scenario, for example by using more or fewer convolutional layers, other types of activation functions, or more or fewer channels depending on the number of parts covered by the low-dose images (for example, generating a 128-channel weight mask). Alternatively, in another embodiment, the weights of the different parts can simply be set directly, as long as the different parts remain distinguishable from one another.
Step S120: construct a generative adversarial network whose generation network takes the low-dose image as input to extract features, fuses the weights of the different parts during feature extraction, and outputs a predicted image.
Referring to Fig. 3, the generative adversarial network as a whole comprises a generation network and a discrimination network, where the generation network includes a feature extraction layer 210, a plurality of cascaded attribute augmentation modules (e.g., 15 of them), and a reconstruction layer 270. Each attribute augmentation module includes a down-sampling layer 220, a ReLU layer 230, an up-sampling layer 240, a feature union layer 250, and a feature fusion layer 260. The attribute augmentation module extracts features through the down-sampling layer 220, ReLU layer 230, and up-sampling layer 240, obtains the part weights according to step S110, and multiplies the extracted features by the weights to obtain weighted features. To avoid losing the originally extracted features, the original features and the weighted features are joined by the union layer and then fused by the feature fusion layer 260 (e.g., a convolutional layer). In Fig. 3, the ⊕ symbol denotes element-wise addition and the ⊙ symbol denotes the dot product.
In one embodiment, the parameters of the attribute augmentation module are set as in Table 1.

Table 1: Attribute augmentation module parameters

| Unit | Operation | Parameters |
| Down-sampling layer | Convolution | 3x3x64 |
| Up-sampling layer | Deconvolution | 3x3x64 |
| Feature fusion layer | Convolution | 1x1x64 |
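The data flow of one attribute augmentation module can be sketched as follows in a minimal NumPy version, where down-sampling and up-sampling are stood in for by channel-mixing matrices (einsum contractions). The channel counts and random weights are illustrative assumptions, not the trained 3x3x64 kernels of Table 1.

```python
import numpy as np

def attribute_augmentation(feat, mask, w_down, w_up, w_fuse):
    """One attribute augmentation step on a (C, H, W) feature map:
    feature extraction (down-sample, ReLU, up-sample), dot-multiplication
    by the per-channel weight mask, union of original and weighted
    features, and 1x1 fusion back to C channels."""
    h = np.maximum(np.einsum('dc,chw->dhw', w_down, feat), 0)  # down-sample + ReLU
    up = np.einsum('cd,dhw->chw', w_up, h)                     # up-sample
    weighted = up * mask[:, None, None]                        # apply part weights
    joined = np.concatenate([up, weighted], axis=0)            # feature union (2C channels)
    return np.einsum('ck,khw->chw', w_fuse, joined)            # feature fusion (1x1 conv)

rng = np.random.default_rng(1)
C, D, H, W = 4, 8, 2, 2
feat = rng.standard_normal((C, H, W))
mask = rng.random(C)                  # per-channel weights in [0, 1)
w_down = rng.standard_normal((D, C))
w_up = rng.standard_normal((C, D))
w_fuse = rng.standard_normal((C, 2 * C))
out = attribute_augmentation(feat, mask, w_down, w_up, w_fuse)
print(out.shape)  # (4, 2, 2)
```

The union step preserves the originally extracted features, so the fusion cannot discard them entirely, which is the point of the union layer.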
The input to the constructed generation network is the low-dose image; the input to the weight prediction module is the attribute corresponding to that image, and its output is the predicted weight of each part. The part weights are multiplied by the originally extracted features inside the generation network, which finally outputs the predicted image.
In the embodiments of the invention, the attribute augmentation modules and the weight prediction module apply the prior information of anatomical structure differences to low-dose image reconstruction, so that the characteristics of each part are preserved and the differences between parts are emphasized, bringing the predicted image closer to the real image. The invention does not limit the number of cascaded attribute augmentation modules.
Step S130: for the discrimination network in the constructed generative adversarial network, take the predicted image and the standard-dose image as input, with discriminating the authenticity of the input image and identifying its attribute value as the optimization objectives.
Since the input low-dose image and the final estimated image share the same attribute, the discrimination network must identify the attribute value (i.e., the body part) of the input image in addition to its authenticity. The input to the generative adversarial framework is an image block, for example of size 64x64. The training and test sets include images of multiple parts, for example the skull, eye sockets, sinuses, neck, lung cavity, abdomen, pelvic cavity (male), pelvic cavity (female), knee, and lumbar spine.
In one embodiment, the discrimination network comprises 7 convolutional layers and 2 fully connected layers, with the parameter settings of Table 2.

Table 2: Discrimination network parameters

| Unit | Convolution stride | Convolution kernels |
| Convolutional layer 1 | 2 | 64 |
| Convolutional layer 2 | 1 | 128 |
| Convolutional layer 3 | 2 | 128 |
| Convolutional layer 4 | 1 | 256 |
| Convolutional layer 5 | 2 | 256 |
| Convolutional layer 6 | 1 | 512 |
| Convolutional layer 7 | 2 | 512 |
| Fully connected layer 1 | - | 1 |
| Fully connected layer 2 | - | 10 |
The inputs to the discrimination network are the predicted image produced by the generation network and a normal-dose image. Its output has two aspects: a judgment of the authenticity of the input image and an identification of its attribute value. That is, the goal of the discrimination network is to distinguish the generated predicted image from the real image as well as possible while accurately identifying the attribute of the input image.
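Assuming 3x3 kernels with "same" padding (an assumption; Table 2 lists only strides and kernel counts), the shapes flowing through the discrimination network for a 64x64 input block can be traced with a few lines:

```python
def discriminator_shapes(size=64):
    """Trace (channels, height, width) through the 7 convolutional layers
    of Table 2, then report the flattened width that feeds the two
    fully connected heads (1 unit: real/fake; 10 units: part attribute)."""
    strides = [2, 1, 2, 1, 2, 1, 2]
    channels = [64, 128, 128, 256, 256, 512, 512]
    shapes = []
    for s, c in zip(strides, channels):
        size = size // s  # a stride-2 layer halves the spatial grid
        shapes.append((c, size, size))
    flat = channels[-1] * size * size  # input width of fully connected layer 1
    return shapes, flat

shapes, flat = discriminator_shapes(64)
print(shapes[-1])  # (512, 4, 4)
print(flat)        # 8192
```

So a 64x64 block is reduced to a 512x4x4 tensor (8192 values) before the real/fake and 10-way attribute heads.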
Step S140: train the generative adversarial network to obtain the mapping from low-dose images to standard-dose images.
For example, given a training dataset D = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}, where x = {x_1, x_2, ..., x_n} are image blocks extracted from low-dose CT images, y = {y_1, y_2, ..., y_n} are image blocks extracted from standard-dose CT images (i.e., normal-dose images), a = {a_1, a_2, ..., a_n} are the corresponding attributes, and n is the number of training samples.
For the pre-trained supervised model, the parameters of the mapping G (the generation network) can be obtained by minimizing a mean-squared-error objective function, expressed as:

$$\hat{\Theta} = \arg\min_{\Theta} \frac{1}{n} \sum_{i=1}^{n} \left\| G(y_i; a_i; \Theta) - x_i \right\|^2 \quad (1)$$

where Θ represents the network parameters (e.g., weights and biases).
To improve the visual effect, an adversarial loss function is introduced to optimize the model and improve the accuracy of discriminating the authenticity of the input image. With the gradient penalty term, the adversarial loss is expressed as:

$$L_{WGAN} = -E_x\left[D_d(x)\right] + E_y\left[D_d\left(G(y; a; \Theta)\right)\right] + \beta\, E_{\hat{x}}\left[\left(\left\|\nabla_{\hat{x}} D_d(\hat{x})\right\|_2 - 1\right)^2\right] \quad (2)$$

where E denotes expectation, β is a balance factor that balances the adversarial loss against the gradient penalty term (e.g., set to 10), D_d denotes the process of discriminating the authenticity of the input image, and x̂ denotes samples interpolated between real and generated images.
Further, for the process of identifying the attributes of the input image: since the input low-dose image and the estimated image possess the same attribute, an attribute loss is introduced to define the attribute distance between the estimated image and the original image, expressed as:

$$L_{Attribute} = E_x\left[D_a(x) - a\right] + E_y\left[D_a\left(G(y; a; \Theta)\right) - a\right] \quad (3)$$

where E denotes expectation and D_a denotes the process of identifying the attribute.
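The supervised and attribute terms can be checked numerically with a toy NumPy sketch. The WGAN adversarial term is omitted here because its gradient penalty requires automatic differentiation, and all array shapes are illustrative assumptions.

```python
import numpy as np

def mse_objective(pred, target):
    """Equation (1): mean of the squared block errors ||G(y_i) - x_i||^2
    over the n samples on the leading axis."""
    n = pred.shape[0]
    return np.sum((pred - target) ** 2) / n

def attribute_loss(da_real, da_fake, a):
    """Equation (3): E_x[D_a(x) - a] + E_y[D_a(G(y)) - a], where
    da_real / da_fake are the discriminator's attribute outputs."""
    return np.mean(da_real - a) + np.mean(da_fake - a)

# Two 3x3 blocks, all-ones prediction vs. all-zeros target.
pred = np.ones((2, 3, 3))
target = np.zeros((2, 3, 3))
print(mse_objective(pred, target))  # 9.0 (18 unit errors / 2 samples)

# A 10-way attribute vector predicted perfectly gives zero attribute loss.
a = np.zeros(10)
a[3] = 1.0
print(attribute_loss(a.copy(), a.copy(), a))  # 0.0
```

Note that the attribute loss is a signed distance as written in (3); perfect attribute prediction drives it to zero.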
It should be noted that, during joint training of the generation and discrimination networks, existing optimizers can be used; for example, an Adam optimizer for the supervised part (the generation network) and an SGD (stochastic gradient descent) optimizer for the adversarial part. During training, pairs of image blocks and their corresponding attribute values are extracted from the standard-dose and low-dose CT datasets as input to the whole network. Other forms of loss function may also be used for training.
Training the generative adversarial network yields the mapping G from low-dose images to standard-dose images, and new low-dose images can then be reconstructed with this mapping, obtaining clear images closer to the real images.
Accordingly, the present invention provides a low-dose image reconstruction system based on an anatomical-difference prior for implementing one or more aspects of the above method. For example, the system comprises: a weight prediction module for determining the weights of different parts in the low-dose image from prior information on anatomical differences; a network construction and training module for constructing a generation network that takes a low-dose image as input to extract features, fuses the weights of the different parts during feature extraction, and outputs a predicted image, and for constructing a discrimination network that takes the predicted image and a standard-dose image as input, with distinguishing the predicted image from the standard-dose image as a first optimization objective and identifying the different parts of the predicted image as a second optimization objective, jointly training the two networks to obtain the mapping between low-dose and standard-dose images; and an image reconstruction module for performing low-dose image reconstruction using the obtained mapping. The modules of the system can be implemented with a processor or logic circuits.
It should be noted that, besides CT image reconstruction, the invention, with appropriate adaptation, can also be applied to PET (positron emission tomography) or SPECT (single-photon emission computed tomography) image reconstruction, or to other image reconstruction based on sparse projection sampling.
Experiments verify that the method can reconstruct images that are clearer and contain more detail. Referring to Figs. 4 to 6, Fig. 4 is a reference standard image, Fig. 5 is a sparsely sampled low-dose image, and Fig. 6 is the reconstructed (restored) image.
In summary, the invention converts attribute values into weight masks through the weight prediction module and fuses original image features with attribute features via the attribute augmentation modules in the generation network; and, based on the fact that the original low-dose image and the estimated image share the same attribute value, it defines an attribute loss that strengthens the constraints on the generative adversarial network, thereby obtaining more accurate, high-definition images.
It should be noted that, although the steps are described in a specific order, the steps are not necessarily performed in the specific order, and in fact, some of the steps may be performed concurrently or even in a changed order as long as the required functions are achieved.
The present invention may be a system, method, and/or computer program product. The computer program product may include a non-transitory computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that retains and stores instructions for use by an instruction execution device. The computer readable storage medium may include, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A method of low-dose image reconstruction based on an anatomical-difference prior, comprising the steps of:
determining the weights of different parts in the low-dose image according to prior information on anatomical structure differences;
constructing a generation network that takes a low-dose image as input to extract features, fuses the weights of the different parts during feature extraction, and outputs a predicted image;
constructing a discrimination network that takes the predicted image and a standard-dose image as input, with distinguishing the predicted image from the standard-dose image as a first optimization objective and identifying the different parts of the predicted image as a second optimization objective, and jointly training the generation network and the discrimination network to obtain the mapping between low-dose and standard-dose images;
and performing low-dose image reconstruction using the obtained mapping.
2. The anatomical-difference-prior-based low-dose image reconstruction method of claim 1, wherein determining the weights of the different parts in the low-dose image according to the prior information of anatomical differences comprises the sub-steps of:
constructing a weight prediction module comprising a plurality of convolutional layers and a Sigmoid activation function;
and one-hot encoding the different parts of the low-dose image, feeding the codes sequentially through the plurality of convolutional layers, and generating the weights of the different parts with the Sigmoid activation function.
3. The anatomical-difference-prior-based low-dose image reconstruction method of claim 1, wherein the generation network includes a plurality of cascaded attribute augmentation modules, which multiply the features extracted from the input low-dose image by the weights of the different parts to obtain weighted features and fuse the extracted features with the weighted features, each attribute augmentation module comprising, in order, a down-sampling layer, a ReLU layer, an up-sampling layer, a feature union layer, and a feature fusion layer.
4. The anatomical-difference-prior-based low-dose image reconstruction method of claim 1, wherein the discrimination network comprises a plurality of convolutional layers and two fully connected layers.
5. The anatomical difference prior-based low-dose image reconstruction method of claim 1, wherein, for a given training dataset D = {(x_1, y_1), (x_2, y_2), …, (x_n, y_n)}, where x = {x_1, x_2, …, x_n} are image blocks extracted from the low-dose image, y = {y_1, y_2, …, y_n} are image blocks extracted from the standard dose image, a = {a_1, a_2, …, a_n} are the weights corresponding to the different parts, and n is the total number of training samples, the parameters of the generation network are obtained during joint training by minimizing the mean-square-error objective function, expressed as:

$$\hat{\Theta} = \arg\min_{\Theta} \frac{1}{n} \sum_{i=1}^{n} \big\| G(x_i; a_i; \Theta) - y_i \big\|^2$$

where Θ represents the parameters of the generation network and G represents the mapping of the generation network.
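The mean-square-error objective can be checked numerically. In this sketch `G` is a stand-in callable (the real generation network is a deep model) and the data are invented; a perfect mapping drives the objective to zero:

```python
import numpy as np

def mse_objective(G, xs, ys, ws):
    """(1/n) * sum_i || G(x_i; a_i) - y_i ||^2 over n training pairs."""
    n = len(xs)
    return sum(np.sum((G(x, a) - y) ** 2)
               for x, a, y in zip(xs, ws, ys)) / n

rng = np.random.default_rng(2)
xs = [rng.normal(size=(4, 4)) for _ in range(3)]   # toy low-dose blocks
ys = [x * 2.0 for x in xs]                         # toy standard-dose targets

perfect = lambda x, a: x * 2.0    # mapping that exactly matches the targets
naive   = lambda x, a: x          # mapping that ignores the targets

loss_perfect = mse_objective(perfect, xs, ys, [None] * 3)
loss_naive   = mse_objective(naive,   xs, ys, [None] * 3)
```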
6. The method of claim 5, wherein the first optimization objective loss function is set to:

$$L_{GAN} = E_y\big[\log D_d(y)\big] + E_x\big[\log\big(1 - D_d(G(x; a; \Theta))\big)\big] + \beta\, L_{MSE}$$

where E represents the expectation, β represents the balance factor, and D_d denotes the process of discriminating authenticity.
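A numeric sketch of such a balanced objective, assuming a conventional cross-entropy adversarial term plus β times the pixel (MSE) term — this functional form is an assumption, since the claim text names only the symbols E, β and D_d:

```python
import numpy as np

def gan_loss(d_real, d_fake, mse, beta):
    """Assumed cross-entropy adversarial term, balanced against the
    pixel loss by beta; d_real/d_fake are D_d outputs in (0, 1)."""
    adv = -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))
    return adv + beta * mse

# Toy D_d scores: real blocks scored high, generated blocks scored low.
loss = gan_loss(d_real=np.array([0.9, 0.8]),
                d_fake=np.array([0.1, 0.2]),
                mse=0.5, beta=10.0)
```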
7. The method of claim 5, wherein the second optimization objective loss function is set to:

$$L_{Attribute} = E_x\big[D_a(x) - a\big] + E_y\big[D_a(G(y; a; \Theta)) - a\big]$$

where E represents the expectation and D_a denotes the process of discriminating the part attribute.
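The second objective can be evaluated directly from the claim's formula. Here `Da` is a toy attribute discriminator (mean intensity standing in for the predicted part score) and all values are invented for illustration:

```python
import numpy as np

def attribute_loss(Da, x_blocks, gen_blocks, a):
    """L_Attribute = E_x[D_a(x) - a] + E_y[D_a(G(y; a)) - a],
    following the claim's notation; gen_blocks are generator outputs."""
    ex = np.mean([Da(x) - a for x in x_blocks])
    ey = np.mean([Da(g) - a for g in gen_blocks])
    return ex + ey

# Toy D_a: predicts the block's mean intensity as its part score.
Da = lambda img: float(img.mean())
x_blocks = [np.full((2, 2), 0.3), np.full((2, 2), 0.5)]
gen_blocks = [np.full((2, 2), 0.4)]
loss = attribute_loss(Da, x_blocks, gen_blocks, a=0.4)
```

When both the real blocks (on average) and the generated blocks match the target attribute a, the loss vanishes, which is what pushes the generator to preserve part-specific appearance.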
8. A low dose image reconstruction system based on an anatomical difference prior, comprising:
a weight prediction module for determining the weights of the different parts in the low-dose image according to the prior information of the anatomical difference;
a network construction and training module configured to: construct a generation network that takes the low-dose image as input, extracts features, fuses the weights of the different parts during feature extraction, and outputs a prediction image; construct a discrimination network that takes the prediction image and a standard dose image as input, with distinguishing the authenticity of the prediction image and the standard dose image as a first optimization objective and identifying the different parts of the prediction image as a second optimization objective; and jointly train the generation network and the discrimination network to obtain the mapping relationship between the low-dose image and the standard dose image;
and an image reconstruction module for reconstructing the low-dose image by using the obtained mapping relationship.
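The modules above suggest an alternating training schedule. The following is a hypothetical sketch in which `step_D` and `step_G` are placeholders for one optimizer step on the discrimination network and the generation network respectively; all names and the toy losses are invented for illustration:

```python
def joint_train(pairs, part_weights, epochs, step_D, step_G):
    """Alternate one discriminator update and one generator update per
    (low-dose, standard-dose) pair, returning the loss history."""
    history = []
    for _ in range(epochs):
        for (x, y), a in zip(pairs, part_weights):
            d_loss = step_D(x, y, a)   # authenticity + part-attribute objectives
            g_loss = step_G(x, y, a)   # MSE + adversarial + attribute objectives
            history.append((d_loss, g_loss))
    return history

# Toy run with dummy per-step losses.
pairs = [(0.1, 0.2), (0.3, 0.4)]
weights = [0.5, 0.7]
hist = joint_train(pairs, weights, epochs=3,
                   step_D=lambda x, y, a: abs(x - y),
                   step_G=lambda x, y, a: a)
```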
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
10. A computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, characterized in that the steps of the method of any one of claims 1 to 7 are implemented when the processor executes the program.
CN201911312709.4A 2019-12-18 2019-12-18 Anatomical structure difference priori based low-dose image reconstruction method and system Active CN111179366B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911312709.4A CN111179366B (en) 2019-12-18 2019-12-18 Anatomical structure difference priori based low-dose image reconstruction method and system


Publications (2)

Publication Number Publication Date
CN111179366A true CN111179366A (en) 2020-05-19
CN111179366B CN111179366B (en) 2023-04-25

Family

ID=70646393

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911312709.4A Active CN111179366B (en) 2019-12-18 2019-12-18 Anatomical structure difference priori based low-dose image reconstruction method and system

Country Status (1)

Country Link
CN (1) CN111179366B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109166161A (en) * 2018-07-04 2019-01-08 东南大学 A kind of low-dose CT image processing system inhibiting convolutional neural networks based on noise artifacts
CN109658469A (en) * 2018-12-13 2019-04-19 深圳先进技术研究院 A kind of neck joint imaging method and device based on the study of depth priori
CN109949215A (en) * 2019-03-29 2019-06-28 浙江明峰智能医疗科技有限公司 A kind of low-dose CT image simulation method
CN110033410A (en) * 2019-03-28 2019-07-19 华中科技大学 Image reconstruction model training method, image super-resolution rebuilding method and device
CN110555834A (en) * 2019-09-03 2019-12-10 明峰医疗系统股份有限公司 CT bad channel real-time detection and reconstruction method based on deep learning network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
高净植; 刘; 张权; 桂志国: "LDCT image estimation with an improved deep residual convolutional neural network" (in Chinese) *
黄玉蕾; 罗晓霞; 刘笃仁: "Speech recognition with a locally finite weight-sharing CNN on MFSC coefficient features" (in Chinese) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021253295A1 (en) * 2020-06-17 2021-12-23 深圳高性能医疗器械国家研究院有限公司 Low-dose ct image reconstruction method
WO2022027327A1 (en) * 2020-08-05 2022-02-10 深圳高性能医疗器械国家研究院有限公司 Image reconstruction method and application thereof
WO2022027595A1 (en) * 2020-08-07 2022-02-10 深圳先进技术研究院 Method for reconstructing low-dose image by using multiscale feature sensing deep network
CN112488951A (en) * 2020-12-07 2021-03-12 深圳先进技术研究院 Training method of low-dose image denoising network and denoising method of low-dose image
CN112541871A (en) * 2020-12-07 2021-03-23 深圳先进技术研究院 Training method of low-dose image denoising network and denoising method of low-dose image
WO2022120883A1 (en) * 2020-12-07 2022-06-16 深圳先进技术研究院 Training method for low-dose image denoising network and denoising method for low-dose image
CN112541871B (en) * 2020-12-07 2024-07-23 深圳先进技术研究院 Training method of low-dose image denoising network and denoising method of low-dose image
CN113628144A (en) * 2021-08-25 2021-11-09 厦门美图之家科技有限公司 Portrait restoration method and device, electronic equipment and storage medium
CN115393534A (en) * 2022-10-31 2022-11-25 深圳市宝润科技有限公司 Deep learning-based cone beam three-dimensional DR reconstruction method and system

Also Published As

Publication number Publication date
CN111179366B (en) 2023-04-25

Similar Documents

Publication Publication Date Title
CN111179366B (en) Anatomical structure difference priori based low-dose image reconstruction method and system
US11861501B2 (en) Semantic segmentation method and apparatus for three-dimensional image, terminal, and storage medium
Zhang et al. ME‐Net: multi‐encoder net framework for brain tumor segmentation
CN110111313B (en) Medical image detection method based on deep learning and related equipment
Zheng et al. 3-D consistent and robust segmentation of cardiac images by deep learning with spatial propagation
Dangi et al. A distance map regularized CNN for cardiac cine MR image segmentation
Sander et al. Automatic segmentation with detection of local segmentation failures in cardiac MRI
CN111080584B (en) Quality control method for medical image, computer device and readable storage medium
US20220198230A1 (en) Auxiliary detection method and image recognition method for rib fractures based on deep learning
Guo et al. Dual attention enhancement feature fusion network for segmentation and quantitative analysis of paediatric echocardiography
WO2021120069A1 (en) Low-dose image reconstruction method and system on basis of a priori differences between anatomical structures
Han et al. Automated pathogenesis-based diagnosis of lumbar neural foraminal stenosis via deep multiscale multitask learning
Yao et al. Pneumonia Detection Using an Improved Algorithm Based on Faster R‐CNN
CN115578404B (en) Liver tumor image enhancement and segmentation method based on deep learning
CN111667459B (en) Medical sign detection method, system, terminal and storage medium based on 3D variable convolution and time sequence feature fusion
CN110751187A (en) Training method of abnormal area image generation network and related product
Gaweł et al. Automatic spine tissue segmentation from MRI data based on cascade of boosted classifiers and active appearance model
CN114387317A (en) CT image and MRI three-dimensional image registration method and device
CN113192031B (en) Vascular analysis method, vascular analysis device, vascular analysis computer device, and vascular analysis storage medium
CN116758087B (en) Lumbar vertebra CT bone window side recess gap detection method and device
CN113160142A (en) Brain tumor segmentation method fusing prior boundary
Su et al. Res-DUnet: A small-region attentioned model for cardiac MRI-based right ventricular segmentation
Arzhaeva et al. Automated estimation of progression of interstitial lung disease in CT images
CN115439652A (en) Focal region segmentation method and device based on normal tissue image information comparison
Kockelkorn et al. Interactive lung segmentation in abnormal human and animal chest CT scans

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant