CN110853742A - Image reconstruction method, system, device and storage medium

Info

Publication number
CN110853742A
Authority
CN
China
Prior art keywords
image
raw data
reconstruction
intermediate image
angle range
Prior art date
Legal status
Granted
Application number
CN201911102058.6A
Other languages
Chinese (zh)
Other versions
CN110853742B (en)
Inventor
曹文静 (Cao Wenjing)
Current Assignee
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd
Priority to CN201911102058.6A
Publication of CN110853742A
Priority to US17/096,936 (published as US11763498B2)
Application granted
Publication of CN110853742B
Legal status: Active

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G16H 30/20: ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS

Abstract

The present application relates to the field of medical image processing technologies, and in particular, to an image reconstruction method, system, apparatus, and storage medium. The image reconstruction method comprises the following steps: acquiring a first set of raw data and a second set of raw data, the first set of raw data being associated with a first scanning angle range and the second set of raw data being associated with a second scanning angle range; obtaining a first image by performing a first reconstruction procedure based on the first set of raw data; obtaining a second image by performing a second reconstruction procedure based on the second set of raw data; and determining a target image based on the first image and the second image. When the heart is reconstructed with this method, the resulting cardiac image has better temporal resolution, and coronary artery motion artifacts are significantly reduced.

Description

Image reconstruction method, system, device and storage medium
Technical Field
The present application relates to the field of medical image processing technologies, and in particular, to an image reconstruction method, system, apparatus, and storage medium.
Background
Image reconstruction techniques are widely used for disease diagnosis. For some reconstruction tasks, such as the reconstruction of cardiac images, a scan of the heart is typically completed on a rapidly rotating, large-cone-angle CT device (e.g., one with a rotation time under 0.3 s per turn), and the image is then reconstructed from the scan data. To improve the temporal resolution of the image, conventional filtered back-projection reconstruction generally uses a partial-scan mode (e.g., 240°). However, the projection data at the various angles of a partial scan are inconsistent (because the heart is constantly moving), which can cause motion artifacts. Therefore, it is desirable to provide an image reconstruction method, system, apparatus, and storage medium that offer better temporal resolution while reducing motion artifacts.
Disclosure of Invention
The application aims to provide an image reconstruction method, system, apparatus, and storage medium based on a forward- and back-projection model, achieving better temporal resolution and fewer artifacts than the conventional filtered back-projection reconstruction method.
One of the embodiments of the present application provides an image reconstruction method, including: acquiring a first set of raw data and a second set of raw data, the first set of raw data being associated with a first scanning angle range and the second set of raw data being associated with a second scanning angle range; obtaining a first image by performing a first reconstruction procedure based on the first set of raw data; obtaining a second image by performing a second reconstruction procedure based on the second set of raw data; based on the first image and the second image, a target image is determined.
One of the embodiments of the present application provides an image reconstruction system, including: a raw data acquisition module, a first image acquisition module, a second image acquisition module, and a target image determination module. The raw data acquisition module is configured to acquire a first set of raw data and a second set of raw data, the first set of raw data being associated with a first scanning angle range and the second set of raw data being associated with a second scanning angle range. The first image acquisition module is configured to obtain a first image by performing a first reconstruction process based on the first set of raw data. The second image acquisition module is configured to obtain a second image by performing a second reconstruction process based on the second set of raw data. The target image determination module is configured to determine a target image based on the first image and the second image.
One of the embodiments of the present application provides an image reconstruction apparatus, which includes a processor, where the processor is configured to execute the image reconstruction method.
One of the embodiments of the present application provides a computer-readable storage medium, where the storage medium stores computer instructions, and after the computer reads the computer instructions in the storage medium, the computer executes the image reconstruction method.
Drawings
The present application will be further explained by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of an exemplary image reconstruction system according to some embodiments of the present application;
FIG. 2 is an exemplary flow diagram of image reconstruction according to some embodiments of the present application;
FIG. 3 is an exemplary flow diagram illustrating the determination of a target image according to some embodiments of the present application;
FIG. 4 is an exemplary flow chart illustrating the acquisition of a second intermediate image according to some embodiments of the present application;
FIG. 5 is yet another exemplary flow chart illustrating acquisition of a second intermediate image according to some embodiments of the present application;
FIG. 6 is a block diagram of an exemplary image reconstruction system according to some embodiments of the present application;
FIG. 7 is a cross-sectional view of a cardiac vessel obtained in accordance with a conventional image reconstruction approach;
FIG. 8 is a cross-sectional view of a cardiac vessel obtained by an image reconstruction method according to some embodiments of the present application;
FIG. 9 is a volume rendering (VR) image of a cardiac vessel obtained by a conventional image reconstruction approach; and
FIG. 10 is a volume rendering (VR) image of a cardiac vessel obtained by an image reconstruction method according to some embodiments of the present application.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments are briefly introduced below. The drawings in the following description are merely examples or embodiments of the application, on the basis of which a person of ordinary skill in the art can apply the application to other similar scenarios without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "device", "unit", and/or "module" as used herein is a way of distinguishing different components, elements, parts, portions, or assemblies at different levels. These terms may be replaced by other expressions that accomplish the same purpose.
As used in this application and the appended claims, the singular forms "a," "an," and "the" also include the plural unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may include other steps or elements.
Flow charts are used herein to illustrate operations performed by systems according to embodiments of the present application. It should be understood that the operations are not necessarily performed in the exact order shown. Rather, the various steps may be processed in reverse order or simultaneously. Moreover, other operations may be added to the flows, or one or more steps may be removed from them.
The image reconstruction method disclosed in the present application can be applied to a variety of medical scanning imaging devices, including but not limited to one of, or any combination of, computed radiography (CR), digital radiography (DR), computed tomography (CT), a screen X-ray machine, a mobile X-ray device (such as a mobile C-arm machine), a digital subtraction angiography (DSA) scanner, a linear accelerator, emission computed tomography (ECT), and the like. For illustrative purposes only, the present application takes a CT system as an example to describe the disclosed technical solution in detail; this is not intended to limit the scope of the invention.
In one aspect, the present invention relates to a method and system for image reconstruction. With the continuous development of computer technology and the demand for low-dose imaging, iterative reconstruction (IR) algorithms have received more and more attention. Their basic principle is to compare the estimated forward-projection data with the original projection data and update the estimate based on the difference, repeating the process until the results of two adjacent iterations are sufficiently close. For certain scan objects, such as the heart, motion causes inconsistency in the projection data obtained at the various scan angles, so that reconstructed images obtained by iterative reconstruction suffer more severe motion artifacts than those obtained from the same projection data by conventional FBP reconstruction algorithms. Meanwhile, for cardiac CT reconstruction, temporal resolution is an important metric. To obtain a reconstructed image with high temporal resolution, projection data from a smaller scan angle range are used for reconstruction, at which point a problem of insufficient data may occur. Therefore, the application provides an image reconstruction method that uses projection data in different scan angle ranges, obtains reconstructed images with different characteristics using an iterative reconstruction algorithm, and, after a final fusion, yields a sharp reconstructed image with high temporal resolution.
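To make the iterative principle concrete, the following minimal Python sketch (an illustration only, not the patent's implementation; the system matrix, image size, and step size are hypothetical) forward-projects the current estimate, compares it with the measured projections, and updates the estimate until two adjacent iterates nearly coincide:

```python
import numpy as np

# Toy sketch of the iterative-reconstruction principle described above.
# A, x_true, y, and all sizes are hypothetical stand-ins.
rng = np.random.default_rng(0)
A = rng.random((64, 16))                 # system matrix: 64 measurements x 16 pixels
x_true = rng.random(16)                  # hypothetical ground-truth image (flattened)
y = A @ x_true                           # simulated raw projection data

step = 1.0 / np.linalg.norm(A, 2) ** 2   # Landweber step size (keeps the iteration stable)
x = np.zeros(16)                         # initial image estimate
for _ in range(500):
    x_prev = x.copy()
    x = x + step * A.T @ (y - A @ x)     # update from the projection-domain difference
    if np.linalg.norm(x - x_prev) < 1e-9:  # adjacent iterates agree: stop
        break
```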
FIG. 1 is a schematic diagram illustrating an application scenario of an exemplary image reconstruction system according to some embodiments of the present application. In some embodiments, the image reconstruction system may acquire raw projection data of a scanned object and acquire a reconstructed image of the scanned object based on the methods disclosed herein. As shown in fig. 1, the image reconstruction system 100 may include a scanning device 110, a network 120, one or more terminals 130, a processing device 140, and a storage device 150.
The scanning device 110 may include a gantry 111, a detector 112, a detection region 113, a scanning bed 114, and a radiation scanning source 115. The gantry 111 may support the detector 112 and the radiation scanning source 115 and may rotate, for example clockwise or counterclockwise, about a gantry rotation axis; the radiation scanning source 115 rotates with the gantry 111. The scan object may be placed on the scanning bed 114 to be scanned. The radiation scanning source 115 can emit a radiation beam toward the scanned object. The detector 112 can detect radiation beams (e.g., gamma photons) from the detection region 113; upon receiving the radiation beam that has passed through the scanned object, the detector 112 converts it into visible light and then, by photoelectric conversion, into electrical signals, which an analog/digital converter turns into digital information that is input to a computing device (e.g., a computer) for processing or transmitted to a storage device for storage. In some embodiments, the detector 112 may include one or more detector cells. The detector units may include scintillation detectors (e.g., cesium iodide detectors), other detectors, and the like. The detector unit may comprise a single row of detectors and/or a plurality of rows of detectors.
Processing device 140 may process data and/or information obtained from scanning device 110, terminal 130, and/or storage device 150. For example, the processing device 140 may determine scan data required for image reconstruction. In some embodiments, the processing device 140 may be used to process the read data, such as data pre-processing, image reconstruction, post-reconstruction processing, and the like. In some embodiments, the processing device 140 may be a single server or a group of servers. The server group may be centralized or distributed. In some embodiments, the processing device 140 may be local or remote. For example, the processing device 140 may access information and/or data from the scanning device 110, the terminal 130, and/or the storage device 150 via the network 120. As another example, the processing device 140 may be directly connected to the scanning device 110, the terminal 130, and/or the storage device 150 to access information and/or data. In some embodiments, the processing device 140 may be implemented on a cloud platform. For example, the cloud platform may include one or a combination of private cloud, public cloud, hybrid cloud, community cloud, distributed cloud, cross-cloud, multi-cloud, and the like.
The terminal 130 may include a mobile device 131, a tablet computer 132, a notebook computer 133, and the like, or any combination thereof. In some embodiments, the terminal 130 may interact with other components in the image reconstruction system 100 over a network. For example, the terminal 130 may send one or more control instructions to the scanning device 110 to control the scanning device 110 to scan according to the instructions. As another example, the terminal 130 may also receive results of processing by the processing device 140, e.g., a reconstructed image. In some embodiments, the mobile device 131 may include smart home devices, wearable devices, mobile devices, virtual reality devices, augmented reality devices, and the like, or any combination thereof. In some embodiments, the smart home devices may include smart lighting devices, smart appliance control devices, smart monitoring devices, smart televisions, smart cameras, interphones, and the like, or any combination thereof. In some embodiments, the wearable device may include a bracelet, footwear, glasses, a helmet, a watch, clothing, a backpack, a smart accessory, and the like, or any combination thereof. In some embodiments, the mobile device may comprise a mobile phone, a personal digital assistant (PDA), a gaming device, a navigation device, a POS device, a laptop, a tablet, a desktop, or the like, or any combination thereof. In some embodiments, the virtual reality device and/or augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, and the like, or any combination thereof. For example, the virtual reality device and/or augmented reality device may include Google Glass™, Oculus Rift™, HoloLens™, Gear VR™, and the like. In some embodiments, the terminal 130 may be part of the processing device 140.
Storage device 150 may store data (e.g., scan data of a target object), instructions, and/or any other information. In some embodiments, the storage device 150 may store data obtained from the scanning device 110, the terminal 130, and/or the processing device 140; for example, the storage device 150 may store scan data of a scan object obtained from the scanning device 110. In some embodiments, storage device 150 may store data and/or instructions for execution or use by processing device 140 to perform the example methods described herein. In some embodiments, the storage device 150 may include one or a combination of mass storage, removable storage, volatile read-write memory, read-only memory (ROM), and the like. Mass storage may include magnetic disks, optical disks, solid state drives, removable storage, and the like. Removable storage may include a flash drive, floppy disk, optical disk, memory card, ZIP disk, magnetic tape, or the like. Volatile read-write memory may include random access memory (RAM). The RAM may include dynamic random access memory (DRAM), double data rate synchronous dynamic random access memory (DDR-SDRAM), static random access memory (SRAM), thyristor random access memory (T-RAM), zero-capacitor random access memory (Z-RAM), and the like. The ROM may include mask read-only memory (MROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM), digital versatile discs, and the like. In some embodiments, the storage device 150 may be implemented by a cloud platform as described herein. For example, the cloud platform may include one or a combination of a private cloud, public cloud, hybrid cloud, community cloud, distributed cloud, cross-cloud, multi-cloud, and the like.
In some embodiments, the storage device 150 may be connected to the network 120 to enable communication with one or more components (e.g., the processing device 140, the terminal 130, etc.) in the image reconstruction system 100. One or more components in the image reconstruction system 100 may read data or instructions in the storage device 150 over the network 120. In some embodiments, the storage device 150 may be part of the processing device 140 or may be separate and directly or indirectly coupled to the processing device 140.
The network 120 may include any suitable network capable of facilitating information and/or data exchange for the image reconstruction system 100. In some embodiments, one or more components of the image reconstruction system 100 (e.g., the scanning device 110, the terminal 130, the processing device 140, the storage device 150, etc.) may exchange information and/or data with one or more components of the image reconstruction system 100 via the network 120. For example, the processing device 140 may obtain planning data from a data processing planning system via the network 120. The network 120 may include one or a combination of a public network (e.g., the Internet), a private network (e.g., a local area network (LAN) or a wide area network (WAN)), a wired network (e.g., Ethernet), a wireless network (e.g., an 802.11 network or a Wi-Fi network), a cellular network (e.g., a Long Term Evolution (LTE) network), a frame relay network, a virtual private network (VPN), a satellite network, a telephone network, routers, hubs, server computers, and the like. For example, the network 120 may include a wired network, a fiber-optic network, a telecommunications network, a local area network, a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth™ network, a ZigBee™ network, a near field communication (NFC) network, and the like. In some embodiments, the network 120 may include one or more network access points. For example, the network 120 may include wired and/or wireless network access points, such as base stations and/or internet exchange points, through which one or more components of the image reconstruction system 100 may connect to the network 120 to exchange data and/or information.
FIG. 2 illustrates an exemplary flow diagram for image reconstruction according to some embodiments of the present application. In some embodiments, flow 200 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (instructions run on a processing device to perform hardware simulation), etc., or any combination thereof. One or more operations of the flow 200 for image reconstruction illustrated in fig. 2 may be implemented by the processing device 140 illustrated in fig. 1. For example, the process 200 may be stored in the storage device 150 in the form of instructions and executed and/or invoked by the processing device 140.
As shown in fig. 2, the method of image reconstruction may include the following operations. Step 210, a first set of raw data and a second set of raw data are obtained. Step 210 may be performed by raw data acquisition module 610.
In some embodiments, the first set of raw data and/or the second set of raw data may be scan projection data acquired by a detection device, such as the detector 112 shown in fig. 1, which may be used for image reconstruction. For example, the first set of raw data and/or the second set of raw data may be CT-value data obtained by preprocessing (such as analog-to-digital conversion) the signals received by the detector 112. In some embodiments, the first set of raw data and/or the second set of raw data may be obtained directly from the detector 112 or from the storage device 150. For example, they may be scan projection data of a scan object obtained directly from the scanning device 110, or data stored in the storage device 150. In the embodiments of the present application, the first set of raw data may be associated with a first scan angle range, and the second set of raw data may be associated with a second scan angle range. With reference to fig. 1, it will be appreciated that during a scan the radiation scanning source 115 rotates around the scanned object; based on the scanning protocol, the radiation scanning source 115 emits a radiation beam at particular times while the detector 112 detects the beam passing through the scanned object, such as a patient, to acquire scan projection data. Each such particular time is referred to as a data acquisition time. Because the radiation scanning source 115 is moving, each data acquisition time corresponds to a data acquisition angle, which may be understood as the rotation angle of the radiation scanning source 115, or of the gantry 111, at that time, such as 10°, 15°, 90°, or 180°. Thus, each data acquisition angle corresponds to one frame of scan projection data. As will be appreciated from the above, the radiation scanning source 115 covers a rotation angle range during a scan, for example 0° to 360° or 0° to 720°, and the first scan angle range and/or the second scan angle range may each be a segment within this rotation angle range. The first set of raw data and/or the second set of raw data may be the set of scan projection data corresponding to all data acquisition angles within the first scan angle range and/or the second scan angle range, respectively. In some embodiments, the selection of the first set of raw data and/or the second set of raw data may be arbitrary: two segments of scan projection data may be picked at will from within the rotation angle range. In some embodiments, the selection may instead be determined by the particular scanned object. For example, if the scanned object is a moving part such as the heart, lung, rib, or abdominal cavity of a patient, the data acquisition time corresponding to a target phase of the scanned object (for example, when the heart is in a relatively quiescent state) may first be determined, the corresponding data acquisition angle may then be obtained from that time, and the scan projection data within a scan angle range centered on that angle may be selected as the first set of raw data and/or the second set of raw data.
In some embodiments, the first scan angle range and the second scan angle range are not identical. In the present application, the first scan angle range may be smaller than the second scan angle range. For example, the first scan angle range may be 90° to 240°, 60° to 240°, 30° to 270°, or the like, and the second scan angle range may be 0° to 360°, 60° to 720°, or 0° to 720°. In some embodiments, the first scan angle range may be a subset of the second scan angle range. The present application does not limit the specific values of the first and second scan angle ranges.
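As an illustration of how the two raw-data sets relate to the two angle ranges, the following sketch (all names, sizes, and the angle grid are hypothetical placeholders) selects the projections of a partial range as the first set and those of a full rotation as the second set:

```python
import numpy as np

# Hypothetical sinogram: one projection row per view angle.
n_views = 720
angles = np.linspace(0.0, 360.0, n_views, endpoint=False)  # one view per 0.5 degrees
projections = np.zeros((n_views, 256))                     # placeholder projection data

first_mask = (angles >= 90.0) & (angles < 240.0)   # first scan angle range (partial)
second_mask = (angles >= 0.0) & (angles < 360.0)   # second scan angle range (full turn)

first_set = projections[first_mask]    # raw data for the high-temporal-resolution image
second_set = projections[second_mask]  # raw data for the data-sufficient, low-noise image
assert first_mask.sum() < second_mask.sum()  # the first range uses fewer views than the second
```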
Step 220, a first image is acquired by performing a first reconstruction process based on the first set of raw data. Step 220 may be performed by the first image determination module 620. In some embodiments, the first reconstruction process may include using a first iterative reconstruction algorithm to acquire the first image based on the first set of raw data. Exemplary first iterative reconstruction algorithms may include the simultaneous algebraic reconstruction technique (SART), the simultaneous iterative reconstruction technique (SIRT), the ordered-subset convex technique (OSC), ordered-subset maximum likelihood (OS-ML), the ordered-subset expectation maximization (OSEM) method, the adaptive statistical iterative reconstruction (ASIR) method, the least-squares QR method, the expectation maximization (EM) method, the ordered-subset separable paraboloidal surrogate technique (OS-SPS), the algebraic reconstruction technique (ART), the Kaczmarz reconstruction technique, or any other iterative reconstruction technique or method that meets the requirements of a particular application. In the present application, the objective function of the first iterative reconstruction algorithm used may be represented as follows:
$$\hat{X} = \mathop{\arg\min}_{X} \frac{1}{2}(AX - Y)^{T} W (AX - Y) + \beta R(X) \tag{1}$$
In equation (1), X is the image-domain data, i.e., the image currently generated by the reconstruction; Y is the projection data obtained based on the first set of raw data acquired by the raw data acquisition module 610; A is the system matrix; W is a weight parameter of the projection data, with each piece of projection data corresponding to one weight parameter; R(X) is a regularization function, which serves to suppress the overall noise of the reconstructed image during the iteration; and β is a regularization coefficient for balancing the projection deviation against the image smoothness.
In some embodiments, the number of iterations of the first iterative reconstruction algorithm may be M, where M is an integer greater than or equal to 1. It will be appreciated that scan projection data from different scan angle ranges contain different amounts of information. Because the first scan angle range is smaller, the corresponding scan time range is also smaller. For the reconstruction of certain scanned objects, such as the heart, this permits a reconstructed image of high temporal resolution and low noise; in some embodiments the noise may be nearly zero (over a short time window the heart may be considered stationary). This is advantageous for cardiac reconstruction. At the same time, however, the small scan angle range gives rise to a data-insufficiency problem, and a completely noise-free result is not optimal for medical images. Additional operations are therefore required to address these problems.
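For illustration, the sketch below minimizes objective (1) by plain gradient descent; the quadratic regularizer R(X) = ½‖X‖², the step-size rule, and all names are assumptions made for this example, since the patent does not fix them:

```python
import numpy as np

def iterative_reconstruct(A, y, w, beta, n_iters):
    """Gradient-descent sketch of objective (1)/(2):
    minimize 0.5 * (A x - y)^T W (A x - y) + beta * R(x), with W = diag(w).
    R(x) = 0.5 * ||x||^2 is an assumed placeholder regularizer."""
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 * w.max() + beta)  # keeps the iteration stable
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):                       # n_iters plays the role of M (or N)
        grad = A.T @ (w * (A @ x - y)) + beta * x  # data-fidelity term + regularizer term
        x -= step * grad
    return x
```

Under this reading, the first image would come from calling such a routine with the first raw-data set and M iterations, and the second image (step 230 below) from the second set with N iterations.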
Step 230, a second reconstruction process is performed to obtain a second image based on the second set of raw data. Step 230 may be performed by the second image determination module 630. In some embodiments, the second reconstruction process may include using a second iterative reconstruction algorithm to obtain the second image based on the second set of raw data. Exemplary second iterative reconstruction algorithms may include the simultaneous algebraic reconstruction technique (SART), the simultaneous iterative reconstruction technique (SIRT), the ordered-subset convex technique (OSC), ordered-subset maximum likelihood (OS-ML), the ordered-subset expectation maximization (OSEM) method, the adaptive statistical iterative reconstruction (ASIR) method, the least-squares QR method, the expectation maximization (EM) method, the ordered-subset separable paraboloidal surrogate technique (OS-SPS), the algebraic reconstruction technique (ART), the Kaczmarz reconstruction technique, or any other iterative reconstruction technique or method that meets the requirements of a particular application. In some embodiments, the second iterative reconstruction algorithm may be the same as or different from the first iterative reconstruction algorithm. In the following description, the objective function of the second iterative reconstruction algorithm is taken to be the same as that of the first. For example, the objective function of the second iterative reconstruction algorithm may be represented as follows:
$$\hat{X} = \mathop{\arg\min}_{X} \frac{1}{2}(AX - Y)^{T} W (AX - Y) + \beta R(X) \tag{2}$$
In equation (2), X is the image-domain data, i.e., the image currently generated by the reconstruction; Y is the projection data obtained based on the second set of raw data acquired by the raw data acquisition module 610; A is the system matrix; W is a weight parameter of the projection data, with each piece of projection data corresponding to one weight parameter; R(X) is a regularization function, which serves to suppress the overall image noise during the iteration; and β is a regularization coefficient for balancing the projection deviation against the image smoothness. In some embodiments, the regularization coefficient β may be a constant selected within a set value range, for example by traversing candidate values.
In some embodiments, the second iterative reconstruction algorithm has a number of iterations N, where N is an integer greater than or equal to 1. In some embodiments, N may be less than M; N may also be equal to M. The second scan angle range is larger and the corresponding scan time is longer, so the corresponding scan projection data contain global features and are sufficient. Moreover, because the data are not confined to a partial angular range, the noise of the second image reconstructed based on the second set of raw data has no directionality. These properties compensate well for the shortcomings of the first image.
Step 240, determining a target image based on the first image and the second image. Step 240 may be performed by the target image determination module 640. To obtain a sharp target image with high temporal resolution, the target image determination module 640 may process the first image, which has nearly zero noise and high temporal resolution, together with the second image, whose noise is uniform and non-directional, to determine the target image. The temporal-resolution characteristic of the target image comes from the first image; the data sufficiency and noise characteristics come from the second image. In some embodiments, the target image determination module 640 may first perform an image operation on the first image and the second image, and apply a filtering, denoising, or smoothing operation to the result of the image operation to obtain a processed image. The first image and the processed image are then blended to obtain the final target image. For a detailed description of determining the target image, reference may be made to other parts of the present application, for example, figs. 3-5, which are not repeated here.
It should be noted that the above description related to the flow 200 is only for illustration and explanation, and does not limit the applicable scope of the present application. Various modifications and changes to flow 200 will be apparent to those skilled in the art in light of this disclosure. However, such modifications and variations are intended to be within the scope of the present application.
FIG. 3 illustrates an exemplary flow chart for determining a target image according to some embodiments of the present application. In some embodiments, flow 300 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (instructions run on a processing device to perform hardware simulation), etc., or any combination thereof. One or more of the operations in the flow 300 for determining a target image illustrated in fig. 3 may be implemented by the processing device 140 illustrated in fig. 1. For example, the process 300 may be stored in the storage device 150 in the form of instructions and executed and/or invoked by the processing device 140. In some embodiments, the process 300 may be performed by the target image determination module 640.
As shown in fig. 3, the method of determining a target image may include the following operations. Step 310, determining a first intermediate image based on the first image and the second image.
In some embodiments, the first intermediate image may be an image obtained by performing an image operation on the second image and the first image. The image operation may be an arithmetic operation between two images, including image addition, image subtraction, image multiplication, image division, and the like. In some embodiments, the image operation may be image subtraction: subtracting the first image from the second image yields the first intermediate image. Specifically, the data (e.g., gray value) of each pixel/voxel in the first image is subtracted from the data (e.g., gray value) of the corresponding pixel/voxel in the second image, where "corresponding" means that the two pixels/voxels occupy the same physical location in the two images. Since the first image contains little noise (in some embodiments almost none) and the second image contains natural, non-directional noise, the first intermediate image obtained by the subtraction mainly contains noise information.
In step 320, based on the first intermediate image, a noise distribution adjustment operation is performed to obtain a second intermediate image. In some embodiments, the noise distribution adjustment operation directly adjusts the noise information contained in the first intermediate image, or in other images generated based on the first intermediate image, so as to adjust and/or eliminate effects, such as artifacts, caused by the structure of the scanned object or by its motion. As an example, the noise distribution adjustment operation may be performed using a weight matrix or weight array of the same size as the first intermediate image (or the other image generated based on it). The values in the weight matrix or weight array represent the adjustment weights of the noise at the corresponding positions in that image. The noise-adjusted image, that is, the second intermediate image, may be obtained by multiplying the weight matrix or weight array element-wise with the first intermediate image or the other image generated based on it. In some embodiments, each element of the weight matrix or weight array may be a numerical value between 0 and 1. In some embodiments, the second intermediate image may be an image whose noise has been evened out: for example, the values in different regions of the weight matrix or weight array differ, so that the noise information contained in each pixel/voxel of the resulting second intermediate image is more balanced than before the adjustment. In some embodiments, the elements of the weight matrix or weight array may be kept small, for example selected from the range 0.4-0.6, so as to attenuate the noise information originating from the second image and avoid introducing too much of the first intermediate image's noise into the target image. For a detailed description of obtaining the second intermediate image, reference may be made to other parts of the present application, for example, figs. 4 and 5, which are not repeated here.
Step 330, blending the first image and the second intermediate image to determine the target image. As described above, the first image is an almost noise-free image of high temporal resolution, and the second intermediate image is a noise image that has been processed, for example, to eliminate the effects of the structure or motion artifacts of the scanned object and to bring its noise to an appropriate level. It can be understood that the noise information of the second intermediate image also carries part of the structural information of the scanned object, so blending the two yields a sharp image with high temporal resolution. In some embodiments, the blending may be an image operation performed on the first image and the second intermediate image to obtain the target image. The image operation may be an arithmetic operation between two images, including image addition, image subtraction, image multiplication, image division, and the like. In some embodiments, the image operation may be image addition: the blending consists of adding the first image and the second intermediate image. The target image determined by this operation has better temporal resolution and fewer artifacts. Taking cardiac CT as an example, a high-resolution target image with clearly visible vessels can be obtained, enabling a doctor to make an accurate diagnosis based on it.
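Steps 310-330 can be summarized in a few lines; in this sketch the weight matrix, the images, and the direct application of the weights to the first intermediate image are illustrative assumptions (flows 400 and 500 below insert an extra filtering step before the weighting):

```python
import numpy as np

def determine_target_image(first_image, second_image, weight_matrix):
    first_intermediate = second_image - first_image           # step 310: image subtraction
    second_intermediate = weight_matrix * first_intermediate  # step 320: noise adjustment
    return first_image + second_intermediate                  # step 330: blend by addition

# Hypothetical example: uniform weights from the 0.4-0.6 range mentioned above.
shape = (128, 128)
target = determine_target_image(np.zeros(shape), np.ones(shape), np.full(shape, 0.5))
```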
It should be noted that the above description related to the flow 300 is only for illustration and explanation, and does not limit the applicable scope of the present application. Various modifications and changes to flow 300 will be apparent to those skilled in the art in light of this disclosure. However, such modifications and variations are intended to be within the scope of the present application.
FIG. 4 illustrates an exemplary flow chart for acquiring a second intermediate image according to some embodiments of the present application. In some embodiments, flow 400 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (instructions run on a processing device to perform hardware simulation), etc., or any combination thereof. One or more of the operations in the flow 400 for determining a target image illustrated in fig. 4 may be implemented by the processing device 140 illustrated in fig. 1. For example, the flow 400 may be stored in the storage device 150 in the form of instructions and executed and/or invoked by the processing device 140. In some embodiments, the flow 400 may be performed by the target image determination module 640.
As shown in fig. 4, the method of acquiring the second intermediate image may include the following operations. In step 410, high-pass filtering is performed on the first intermediate image to obtain a third intermediate image. The first intermediate image contains not only noise information but also effects of the structure or motion artifacts of the scanned object, and these effects are mainly reflected in the low-frequency part of the image. Thus, in some embodiments, these effects may be eliminated or mitigated by high-pass filtering. The high-pass filter used may include an ideal high-pass filter (IHPF), a Butterworth high-pass filter (BHPF), a Gaussian high-pass filter (GHPF), or the like, or any combination thereof. The resulting third intermediate image may be the same size as the second intermediate image, with the above effects eliminated.
Step 420, performing a noise distribution adjustment operation on the third intermediate image to obtain the second intermediate image. For a description of the noise distribution adjustment operation, refer to step 320 of flow 300. Meanwhile, even after step 410 has been performed, the noise information contained in the resulting third intermediate image may affect the final image quality; for example, too much noise may be carried into the image blending operation (step 330). Therefore, a suitable weight matrix or weight array may be selected and a noise distribution adjustment performed on the third intermediate image (for the adjustment process, refer to the description of step 320), so as to uniformly adjust the noise contained in the third intermediate image and obtain the second intermediate image.
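A minimal sketch of flow 400, assuming a Gaussian high-pass filter realized as the image minus its Gaussian low-pass version (the kernel width sigma and all names are assumptions):

```python
from scipy.ndimage import gaussian_filter

def second_intermediate_via_highpass(first_intermediate, weight_matrix, sigma=3.0):
    low_pass = gaussian_filter(first_intermediate, sigma=sigma)
    third_intermediate = first_intermediate - low_pass  # step 410: drop the low-frequency
                                                        # structure/artifact content
    return weight_matrix * third_intermediate           # step 420: noise adjustment
```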
It should be noted that the above description related to the flow 400 is only for illustration and explanation, and does not limit the applicable scope of the present application. Various modifications and changes to flow 400 may occur to those skilled in the art in light of the teachings herein. However, such modifications and variations are intended to be within the scope of the present application.
FIG. 5 illustrates yet another exemplary flow chart for acquiring a second intermediate image according to some embodiments of the present application. In some embodiments, flow 500 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (instructions run on a processing device to perform hardware simulation), etc., or any combination thereof. One or more of the operations in the flow 500 for determining a target image illustrated in fig. 5 may be implemented by the processing device 140 illustrated in fig. 1. For example, the process 500 may be stored in the storage device 150 in the form of instructions and executed and/or invoked by the processing device 140. In some embodiments, the process 500 may be performed by the target image determination module 640.
Likewise, in order to eliminate the effects of the structure or motion artifacts of the scanned object, approaches other than high-pass filtering may be used, such as smoothing and related operations. As shown in fig. 5, the method of acquiring the second intermediate image may include the following operations.
Step 510, performing a smoothing operation on the first intermediate image to obtain a fourth intermediate image. In some embodiments, the smoothing operation may employ a total variation (TV) model, i.e., a model that smooths the first intermediate image to obtain a noise-reduced image. The total variation model strongly smooths and denoises regions of the first intermediate image where the image gradient is small (e.g., regions where the differences between adjacent pixels are small), while only weakly smoothing regions where the image gradient is large (e.g., image contours). Owing to these characteristics of the total variation model, the fourth intermediate image obtained after the processing is an image that has been markedly denoised while its boundaries are preserved.
Step 520, acquiring a fifth intermediate image based on the first intermediate image and the fourth intermediate image. In some embodiments, the fifth intermediate image is obtained by performing an image operation on the first intermediate image and the fourth intermediate image. The image operation may be an arithmetic operation between two images including image addition, image subtraction, image multiplication, image division, and the like. In some embodiments, the image operation may be an image subtraction, subtracting the fourth intermediate image from the first intermediate image to obtain the fifth intermediate image. The fifth intermediate image may be an image containing pure noise.
Step 530, performing a noise distribution adjustment operation on the fifth intermediate image to obtain the second intermediate image. The description of the noise distribution adjustment operation may refer to the description of step 320 in the process 300, and is not repeated here.
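A corresponding sketch of flow 500, using an off-the-shelf total-variation denoiser; the choice of scikit-image's denoise_tv_chambolle and its weight parameter are assumptions made for illustration:

```python
from skimage.restoration import denoise_tv_chambolle

def second_intermediate_via_tv(first_intermediate, weight_matrix, tv_weight=0.1):
    fourth = denoise_tv_chambolle(first_intermediate, weight=tv_weight)  # step 510: TV smoothing
    fifth = first_intermediate - fourth  # step 520: near-pure-noise residual
    return weight_matrix * fifth         # step 530: noise distribution adjustment
```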
It should be noted that the above description related to the flow 500 is only for illustration and explanation, and does not limit the applicable scope of the present application. Various modifications and changes to flow 500 may occur to those skilled in the art upon review of the present application. However, such modifications and variations are intended to be within the scope of the present application.
FIG. 6 is a block diagram of an exemplary image reconstruction system according to some embodiments of the present application. As shown in fig. 6, the image reconstruction system 100 may include a raw data acquisition module 610, a first image determination module 620, a second image determination module 630, and a target image determination module 640.
The raw data acquisition module 610 may be configured to acquire a first set of raw data and a second set of raw data. The first set of raw data and/or the second set of raw data may be scan projection data acquired by a detection device, which may be used for image reconstruction. In some embodiments, the first set of raw data is associated with a first scan angle range and the second set of raw data is associated with a second scan angle range. The first scanning angle range and/or the second scanning angle range may be a rotation angle range of the scanning device during scanning. The first set of raw data and/or the second set of raw data may be a set of scan projection data corresponding to all data acquisition angles within the first scan angle range and/or the second scan angle range.
The first image determination module 620 may be configured to obtain a first image by performing a first reconstruction procedure based on the first set of raw data. The first reconstruction process may include using a first iterative reconstruction algorithm to acquire the first image based on the first set of raw data. The objective function of the first iterative reconstruction algorithm used may be, for example,
$$\hat{X} = \mathop{\arg\min}_{X} \frac{1}{2}(AX - Y)^{T} W (AX - Y) + \beta R(X)$$
the iteration number can be M, and M can be an integer greater than or equal to 1.
The second image determination module 630 may be configured to obtain a second image by performing a second reconstruction procedure based on the second set of raw data. The second reconstruction process may include using a second iterative reconstruction algorithm to obtain the second image based on the second set of raw data. The objective function of the second iterative reconstruction algorithm used may be, for example,
$$\hat{X} = \mathop{\arg\min}_{X} \frac{1}{2}(AX - Y)^{T} W (AX - Y) + \beta R(X)$$
The iteration number may be N, where N is an integer greater than or equal to 1, and N may be less than M.
The target image determination module 640 may be configured to determine a target image based on the first image and the second image. In some embodiments, the target image determination module 640 may process the nearly noise-free, high-temporal-resolution first image and the second image, whose noise is uniform and non-directional, to determine the target image. The temporal-resolution characteristic of the target image comes from the first image; the data sufficiency and noise characteristics come from the second image. In some embodiments, the target image determination module 640 may first perform an image operation on the first image and the second image and apply a filtering, denoising, or smoothing operation to the result to obtain a processed image. The first image and the processed image are then blended to obtain the final target image.
With regard to the description of the above modules, reference may be made to the flow chart section of the present application.
It should be understood that the system and its modules shown in FIG. 6 may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. Wherein the hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory for execution by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a diskette, CD-or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules of the present application may be implemented not only by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., but also by software executed by various types of processors, for example, or by a combination of the above hardware circuits and software (e.g., firmware).
It should be noted that the above descriptions of the image reconstruction system and its modules are only for convenience of description and are not intended to limit the present application to the scope of the illustrated embodiments. It will be appreciated that, given the teachings of the present system, those skilled in the art may arbitrarily combine the modules or form subsystems connected to other modules without departing from these teachings. For example, the raw data acquisition module 610, the first image determination module 620, the second image determination module 630, and the target image determination module 640 disclosed in fig. 6 may be different modules in one system, or a single module may implement the functions of two or more of these modules. As another example, the first image determination module 620 and the second image determination module 630 may be two modules, or a single module may acquire both the first image and the second image. As a further example, the modules may share one storage module, or each module may have its own storage module. Such variations are within the scope of the present application.
Compare fig. 7 with fig. 8, and fig. 9 with fig. 10. Fig. 7 is a cross-sectional view of a cardiac blood vessel obtained by a conventional image reconstruction method, and fig. 8 is a cross-sectional view of a cardiac blood vessel obtained by an image reconstruction method according to some embodiments of the present application. Fig. 9 shows a volume rendering (VR) image of a cardiac blood vessel obtained by a conventional image reconstruction method, and fig. 10 shows a VR image of a cardiac blood vessel obtained by an image reconstruction method according to some embodiments of the present application.
As the comparisons of fig. 7 with fig. 8 and of fig. 9 with fig. 10 show (see the positions marked by the white arrows), reconstructing the heart with the image reconstruction method of the embodiments of the present application, rather than with the conventional method, yields a cardiac image with better temporal resolution and significantly reduced coronary vessel motion artifacts.
The beneficial effects that may be brought by the embodiments of the present application include, but are not limited to: after projection data from different scan angle ranges are iteratively reconstructed with different numbers of iterations to obtain reconstructed images with different characteristics, these reconstructed images can be blended through an image blending process to obtain a sharp target reconstructed image with high temporal resolution. It should be noted that different embodiments may produce different advantages; in different embodiments, any one or a combination of the above advantages, or any other advantage, may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be considered merely illustrative and not restrictive of the broad application. Various modifications, improvements and adaptations to the present application may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present application and thus fall within the spirit and scope of the exemplary embodiments of the present application.
Also, this application uses specific language to describe embodiments of the application. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the present application is included in at least one embodiment of the present application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the present application may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the present application may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereof. Accordingly, various aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present application may be embodied as a computer product, including computer-readable program code, on one or more computer-readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of the present application may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service such as software as a service (SaaS).
Additionally, the order in which elements and sequences of the processes described herein are processed, the use of alphanumeric characters, or the use of other designations, is not intended to limit the order of the processes and methods described herein, unless explicitly claimed. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, claimed subject matter may lie in less than all features of a single disclosed embodiment.
Some embodiments use numerals to describe quantities of components, attributes, and the like; it should be understood that such numerals used in the description of the embodiments are in some instances qualified by the modifier "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the stated number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, numerical parameters should take into account the specified significant digits and employ ordinary rounding. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments are approximations, in specific examples such numerical values are set forth as precisely as practicable.
The entire contents of each patent, patent application publication, and other material cited in this application, such as articles, books, specifications, publications, and documents, are hereby incorporated by reference into this application, except for application history documents that are inconsistent with or conflict with the contents of this application, and except for documents (currently or later appended to this application) that limit the broadest scope of the claims of this application. It is noted that if the descriptions, definitions, and/or use of terms in the material accompanying this application are inconsistent with or contrary to those stated in this application, the descriptions, definitions, and/or use of terms in this application shall control.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present application. Other variations are also possible within the scope of the present application. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the present application can be viewed as being consistent with the teachings of the present application. Accordingly, the embodiments of the present application are not limited to only those embodiments explicitly described and depicted herein.

Claims (10)

1. A method of image reconstruction, the method comprising:
acquiring a first set of raw data and a second set of raw data, the first set of raw data being associated with a first scanning angle range and the second set of raw data being associated with a second scanning angle range;
obtaining a first image by performing a first reconstruction procedure based on the first set of raw data;
obtaining a second image by performing a second reconstruction procedure based on the second set of raw data;
determining a target image based on the first image and the second image.
2. The method of claim 1, wherein the first reconstruction process comprises using a first iterative reconstruction algorithm to obtain the first image based on the first set of raw data, wherein the number of iterations of the first iterative reconstruction algorithm is M, M being an integer greater than or equal to 1;
the second reconstruction process comprises using a second iterative reconstruction algorithm to obtain the second image based on the second set of raw data, wherein the number of iterations of the second iterative reconstruction algorithm is N, N being an integer greater than or equal to 1 and less than or equal to M.
3. The method of claim 2, wherein the first scanning angle range is smaller than the second scanning angle range.
4. The method of claim 1, wherein determining a target image based on the first image and the second image comprises:
determining a first intermediate image based on the first image and the second image;
performing a noise distribution adjustment operation based on the first intermediate image to obtain a second intermediate image; and
blending the first image and the second intermediate image to determine the target image.
5. The method of claim 4, wherein performing a noise profile adjustment operation based on the first intermediate image to obtain a second intermediate image comprises:
performing high-pass filtering on the first intermediate image to obtain a third intermediate image; and
performing a noise distribution adjustment operation on the third intermediate image to obtain the second intermediate image.
6. The method of claim 4, wherein performing a noise profile adjustment operation based on the first intermediate image to obtain a second intermediate image comprises:
performing a smoothing operation on the first intermediate image to obtain a fourth intermediate image;
obtaining a fifth intermediate image based on the first intermediate image and the fourth intermediate image; and
performing a noise distribution adjustment operation on the fifth intermediate image to obtain the second intermediate image.
7. The method of claim 4, wherein said blending the first image and the second intermediate image to determine the target image comprises:
performing an image addition operation on the first image and the second intermediate image to obtain the target image.
8. An image reconstruction system, characterized in that the system comprises: the system comprises an original data acquisition module, a first image acquisition module, a second image acquisition module and a target image determination module;
the raw data acquisition module is used for acquiring a first group of raw data and a second group of raw data, wherein the first group of raw data is related to a first scanning angle range, and the second group of raw data is related to a second scanning angle range;
the first image acquisition module is used for acquiring a first image by executing a first reconstruction process based on the first set of raw data;
the second image acquisition module is used for acquiring a second image by executing a second reconstruction process based on the second set of raw data;
the target image determination module is configured to determine a target image based on the first image and the second image.
9. An image reconstruction apparatus comprising a processor, wherein the processor is configured to perform the image reconstruction method of any one of claims 1 to 7.
10. A computer-readable storage medium storing computer instructions, wherein when the computer instructions in the storage medium are read by a computer, the computer performs the image reconstruction method according to any one of claims 1 to 7.
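For orientation only, the blending procedure recited in claims 4 to 7 above can be illustrated with a short sketch. The claims leave key operations unspecified (in particular, how the first intermediate image is derived from the first and second images, and what the noise distribution adjustment operation does), so the difference image, Gaussian filtering, and global gain factor below are hypothetical stand-ins rather than the claimed operations.

```python
# Hypothetical sketch of the blending steps in claims 4-7. The difference
# image, Gaussian filter, and gain factor are assumptions for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter

def adjust_noise(img, gain=0.8):
    # Placeholder for the "noise distribution adjustment operation":
    # here, a simple global scaling of the input image.
    return gain * img

def blend_per_claim_5(first, second, sigma=2.0):
    inter1 = second - first                           # assumed first intermediate image (claim 4)
    inter3 = inter1 - gaussian_filter(inter1, sigma)  # high-pass filtering (claim 5)
    inter2 = adjust_noise(inter3)                     # second intermediate image
    return first + inter2                             # image addition (claim 7)

def blend_per_claim_6(first, second, sigma=2.0):
    inter1 = second - first                           # assumed first intermediate image (claim 4)
    inter4 = gaussian_filter(inter1, sigma)           # smoothing operation (claim 6)
    inter5 = inter1 - inter4                          # fifth intermediate image (assumed difference)
    inter2 = adjust_noise(inter5)                     # second intermediate image
    return first + inter2                             # image addition (claim 7)
```

Calling blend_per_claim_5(first, second) on two equally sized arrays follows the claim-5 branch of the method; blend_per_claim_6 follows the claim-6 branch.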
CN201911102058.6A 2019-11-12 2019-11-12 Image reconstruction method, system, device and storage medium Active CN110853742B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911102058.6A CN110853742B (en) 2019-11-12 2019-11-12 Image reconstruction method, system, device and storage medium
US17/096,936 US11763498B2 (en) 2019-11-12 2020-11-12 Systems and methods for image reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911102058.6A CN110853742B (en) 2019-11-12 2019-11-12 Image reconstruction method, system, device and storage medium

Publications (2)

Publication Number Publication Date
CN110853742A (en) 2020-02-28
CN110853742B CN110853742B (en) 2023-03-28

Family

ID=69601439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911102058.6A Active CN110853742B (en) 2019-11-12 2019-11-12 Image reconstruction method, system, device and storage medium

Country Status (1)

Country Link
CN (1) CN110853742B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011148277A2 (en) * 2010-05-27 2011-12-01 Koninklijke Philips Electronics N.V. Improved reconstruction for cone-beam computed tomography imaging with off-center flat panel detector
CN104240270A (en) * 2013-06-14 2014-12-24 同方威视技术股份有限公司 CT imaging method and system
CN105913398A (en) * 2015-06-11 2016-08-31 沈阳东软医疗系统有限公司 Method and device for processing CT reconstructed image
US20160364856A1 (en) * 2015-06-11 2016-12-15 Shenyang Neusoft Medical Systems Co. Ltd. Process for computed tomography image
KR20180003400A (en) * 2016-06-30 2018-01-09 삼성전자주식회사 Apparatus and method for processing medical image
CN108921913A (en) * 2018-06-29 2018-11-30 上海联影医疗科技有限公司 The system and method for image reconstruction
CN109272562A (en) * 2018-08-21 2019-01-25 上海联影医疗科技有限公司 A kind of system and method for iterative approximation
CN110136217A (en) * 2019-03-28 2019-08-16 青岛大学附属医院 CT image for liver enhances processing method and system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
GONG, Changcheng et al., "Image reconstruction model for limited-angle CT based on prior image induced relative total variation", Applied Mathematical Modelling *
TIAN, Xingfei et al., "An improved discrete algebraic reconstruction technique (DART)", Information Technology *
DENG, Kaifa et al., Laser Technology and Applications, National University of Defense Technology Press, 31 January 2002 *
CHEN, Zijia, "Research on CT image reconstruction algorithms under undersampling conditions", China Master's Theses Full-text Database, Medicine & Health Sciences *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112368738A (en) * 2020-05-18 2021-02-12 上海联影医疗科技股份有限公司 System and method for image optimization
WO2021232194A1 (en) * 2020-05-18 2021-11-25 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for image reconstruction
WO2021233316A1 (en) * 2020-05-18 2021-11-25 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for image reconstruction
US11847763B2 (en) 2020-05-18 2023-12-19 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for image reconstruction
CN112368738B (en) * 2020-05-18 2024-01-16 上海联影医疗科技股份有限公司 System and method for image optimization

Also Published As

Publication number Publication date
CN110853742B (en) 2023-03-28

Similar Documents

Publication Publication Date Title
CN109493951B (en) System and method for reducing radiation dose
CN109741284B (en) System and method for correcting respiratory motion-induced mismatches in PET imaging
CN110809782B (en) Attenuation correction system and method
CN110751702B (en) Image reconstruction method, system, device and storage medium
US20200327662A1 (en) Systems and methods for image generation
Hsieh et al. Recent advances in CT image reconstruction
CN110151210B (en) Medical image processing method, system, device and computer readable medium
US20230085203A1 (en) Systems and methods for image reconstruction
US20210142476A1 (en) Systems and methods for image optimization
US10657621B2 (en) Moving structure motion compensation in imaging
CN111462020A (en) Method, system, storage medium and device for correcting motion artifact of heart image
CN107909618B (en) Image reconstruction system and method
CN110866959B (en) Image reconstruction method, system, device and storage medium
CN110807821A (en) Image reconstruction method and system
US20230342939A1 (en) Systems and methods for image reconstruction and processing
CN109242923B (en) System and method for iterative reconstruction
CN110853742B (en) Image reconstruction method, system, device and storage medium
US20190029626A1 (en) Systems and methods for motion signal correction
CN111369636B (en) Image reconstruction method, system, device and storage medium
EP4330862A1 (en) System and method for medical imaging
CN110084866B (en) Computed tomography method and device
US11763498B2 (en) Systems and methods for image reconstruction
CN111784794A (en) CT image reconstruction method, device, computer equipment and readable storage medium
US20230190216A1 (en) Systems and methods for correcting motion artifacts in images
CN109363695B (en) Imaging method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 2258 Chengbei Road, Jiading District, Shanghai 201807

Applicant after: Shanghai Lianying Medical Technology Co.,Ltd.

Address before: No. 2258 Chengbei Road, Jiading District, Shanghai 201807

Applicant before: SHANGHAI UNITED IMAGING HEALTHCARE Co.,Ltd.

GR01 Patent grant