CN113936068A - Artifact correction method, artifact correction device and storage medium - Google Patents

Artifact correction method, artifact correction device and storage medium

Info

Publication number
CN113936068A
CN113936068A (application CN202010672813.0A)
Authority
CN
China
Prior art keywords
image
pixel
correction
corrected
neighborhood
Prior art date
Legal status
Pending
Application number
CN202010672813.0A
Other languages
Chinese (zh)
Inventor
李俊杰
李山奎
郭新路
黄灿鸿
高成龙
Current Assignee
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd filed Critical Shanghai United Imaging Healthcare Co Ltd
Priority to CN202010672813.0A
Publication of CN113936068A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/003: Reconstruction from projections, e.g. tomography
    • G06T11/008: Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The application relates to an artifact correction method, an artifact correction device, and a storage medium. In the method, a plurality of threads are called to execute in parallel. A preset number of weights are used to fuse a first correction image and an error image into a preset number of virtual fusion images; for each weight, the information entropy of the neighborhood around each pixel of the corresponding virtual fusion image is obtained; a target weight is then determined for each pixel of the error image according to the information entropies of its neighborhood under the different weights; the first correction image and the error image are fused according to the target weights to obtain a second correction image; and finally, the artifact-corrected version of the image to be corrected is obtained from the second correction image. The method parallelizes the data streams of many single threads during computation, which greatly improves computing performance, saves computation time, and thereby improves artifact correction efficiency.

Description

Artifact correction method, artifact correction device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an artifact correction method, an artifact correction apparatus, and a storage medium.
Background
An artifact is any of various structures that appear in an image but are not present in the scanned object. Artifacts come in many types, e.g., metal artifacts, motion artifacts, etc.
Because artifacts degrade image quality, they need to be corrected. Taking metal artifacts as an example, a Metal Artifact Correction (MAC) algorithm may be used to correct metal artifacts in an image. The MAC algorithm eliminates metal artifacts, can effectively reduce image noise introduced by metal implants, and restores image content to a certain extent. However, the algorithm involves a complex image processing pipeline whose computation logic is tightly coupled, so processing a single image takes too long to meet the demands of diagnostic film reading. The artifact correction process therefore suffers from low efficiency.
Disclosure of Invention
In view of the above, it is desirable to provide an artifact correction method, an artifact correction apparatus, and a storage medium capable of improving artifact correction efficiency.
In a first aspect, an embodiment of the present application provides an artifact correction method, where the method includes:
calling a plurality of threads to execute correction operation in parallel, wherein one thread correspondingly processes one or more pixels;
the correction operation comprises:
fusing the first correction image and the error image into a preset number of virtual fusion images by adopting a preset number of weights; the first correction image represents the image corresponding to the projection data of the image to be corrected after removing part or all of the artifact components caused by the interference source, and the error image represents the image corresponding to the result of subtracting the projection data of the first correction image from the projection data of the image to be corrected;
for each weight, acquiring the information entropy of the neighborhood where each pixel of each virtual fusion image is located, and determining a target weight for each pixel of the error image according to the information entropy of the neighborhood where each pixel is located under different weights; the target weight is one of a preset number of weights;
fusing the first corrected image and the error image according to the target weight to obtain a second corrected image; the second correction image is used for acquiring an image subjected to artifact correction of the image to be corrected.
In one embodiment, the fusing the first corrected image and the error image into the virtual fused image with the preset number of weights includes:
sequentially adopting one weight in a preset number of weights to fuse the first corrected image and the error image to obtain a preset number of virtual fused images;
and when the first correction image and the error image are fused each time, each thread in the plurality of threads correspondingly processes the pixels at the corresponding positions in the first correction image and the error image.
In one embodiment, the obtaining the information entropy of the neighborhood where each pixel of each virtual fusion image is located, and determining the target weight for each pixel of the error image according to the information entropy of the neighborhood where each pixel is located under different weights includes:
acquiring a probability distribution function corresponding to a neighborhood where each pixel of each virtual fusion image is located;
determining the information entropy of the neighborhood where each pixel of each virtual fusion image is located according to the probability distribution function of the neighborhood where each pixel of each virtual fusion image is located;
and determining the corresponding weight of the minimum information entropy in the information entropies of the neighborhoods where the pixels are located under different weights in a preset number of weights as the target weight.
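The probability-distribution and minimum-entropy steps above can be sketched as follows (a minimal illustration, assuming grayscale values in [0, 1], a square neighborhood, and a fixed histogram bin count; the patent does not fix these parameters, and all function names are hypothetical):

```python
import numpy as np

def neighborhood_entropy(image, y, x, radius=2, bins=32):
    """Shannon entropy of the (2*radius+1)^2 neighborhood around pixel (y, x).

    The probability distribution is estimated with a normalized histogram;
    the bin count and neighborhood radius are illustrative assumptions.
    """
    h, w = image.shape
    patch = image[max(0, y - radius):min(h, y + radius + 1),
                  max(0, x - radius):min(w, x + radius + 1)]
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()                # probability distribution function
    p = p[p > 0]                         # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

def select_target_weight(fused_stack, weights, y, x):
    """Pick, for pixel (y, x), the weight whose virtual fused image has the
    minimum neighborhood information entropy."""
    entropies = [neighborhood_entropy(img, y, x) for img in fused_stack]
    return weights[int(np.argmin(entropies))]
```

A perfectly uniform neighborhood has entropy zero, so its weight would be selected over any noisier alternative.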
In one embodiment, the fusing the first corrected image and the error image to obtain the second corrected image according to the target weight includes:
and acquiring the weighted sum of the pixels at the corresponding positions in the first corrected image and the error image according to the target weight of each pixel in the error image to obtain a second corrected image.
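The per-pixel weighted sum in this embodiment takes only a few lines (an illustration only; array names are hypothetical, and the convention that the target weight multiplies the error image follows the fusion formula given later in the description):

```python
import numpy as np

def fuse_with_target_weights(first_corrected, error_image, target_weights):
    """Second corrected image: per-pixel weighted sum of the pixels at
    corresponding positions, (1 - w) * first_corrected + w * error_image,
    where w is each pixel's target weight."""
    return (1.0 - target_weights) * first_corrected + target_weights * error_image
```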
In one embodiment, the number of the threads is equal to or greater than the number of pixels included in the error image, and the plurality of threads simultaneously determine the target weight for the corresponding pixels.
In one embodiment, the number of threads is less than the number of pixels included in the error image, and at least one thread determines the target weight for at least two pixels.
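The thread-to-pixel mapping described in these two embodiments might be sketched as follows (the contiguous near-equal chunking is an assumption; the patent only states that one thread may process one or several pixels):

```python
def assign_pixels_to_threads(num_pixels, num_threads):
    """Map pixel indices to threads: when there are at least as many threads
    as pixels, each thread gets one pixel; otherwise pixels are split into
    near-equal contiguous chunks, so at least one thread handles two or
    more pixels."""
    if num_threads >= num_pixels:
        return [[i] for i in range(num_pixels)]
    base, extra = divmod(num_pixels, num_threads)
    chunks, start = [], 0
    for t in range(num_threads):
        size = base + (1 if t < extra else 0)   # spread the remainder
        chunks.append(list(range(start, start + size)))
        start += size
    return chunks
```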
In one embodiment, the method further comprises:
segmenting a high-frequency partial image of the image to be corrected and a low-frequency partial image of the second correction image;
and fusing the high-frequency partial image and the low-frequency partial image to obtain a third corrected image, wherein the third corrected image is an image subjected to artifact correction of the image to be corrected.
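The frequency-split fusion could be sketched as below (heavily hedged: the patent does not specify how the high- and low-frequency parts are segmented, so a simple box-filter low-pass stands in here, and all names are illustrative):

```python
import numpy as np

def low_pass(image, k=3):
    """Box-filter low-pass; the kernel size is an illustrative choice."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

def third_corrected_image(image_to_correct, second_corrected, k=3):
    """High-frequency part of the image to be corrected plus the
    low-frequency part of the second corrected image."""
    high = image_to_correct - low_pass(image_to_correct, k)
    low = low_pass(second_corrected, k)
    return high + low
```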
In a second aspect, an embodiment of the present application provides an artifact correction apparatus, including:
the calling module is used for calling a plurality of thread modules to execute correction operation in parallel, and one thread module correspondingly processes one or more pixels;
a thread module for performing a corrective operation;
the thread module comprises: the fusion unit is used for fusing the first correction image and the error image into a preset number of virtual fusion images by adopting a preset number of weights; the first correction image represents the image corresponding to the projection data of the image to be corrected after removing part or all of the artifact components caused by the interference source, and the error image represents the image corresponding to the result of subtracting the projection data of the first correction image from the projection data of the image to be corrected;
the determining unit is used for acquiring the information entropy of the neighborhood where each pixel of each virtual fusion image is located for each weight, and determining the target weight for each pixel of the error image according to the information entropy of the neighborhood where each pixel is located under different weights; the target weight is one of a preset number of weights;
the correcting unit is used for fusing the first corrected image and the error image according to the target weight to obtain a second corrected image; the second correction image is used for acquiring an image subjected to artifact correction of the image to be corrected.
In a third aspect, an embodiment of the present application provides an artifact correction method, where the method includes:
calling a plurality of threads to execute correction operation in parallel, wherein one thread correspondingly processes one or more projection data;
the correction operation includes:
acquiring projection data of CT scanning;
generating first corrected projection data by removing at least a portion of the artifact components caused by the interference sources from the projection data of the CT scan; subtracting the first correction projection data from the projection data of the CT scanning to generate projection data of an error image;
applying a preset number of weights to each data point of the projection data of the error image and the corresponding data point of the first correction projection data to obtain a fusion data point;
for each fused data point, respectively calculating the information entropy of the neighborhood where the fused data point is located;
determining target weight for each data point of the projection data of the error image according to the information entropy of the neighborhood where each fused data point is located;
and fusing the first correction projection data and the projection data of the error image by adopting the target weight to obtain a plurality of synthetic data points, wherein the plurality of synthetic data points form the projection data of the second correction image.
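The third-aspect pipeline above, applied to a 1-D strip of projection data, might look like this (a sketch under stated assumptions: a 1-D neighborhood, a fixed histogram bin count, and a shared histogram range; none of these parameters are specified by the patent):

```python
import numpy as np

def correct_projection(first_corr, error_proj, weights, radius=2, bins=16):
    """For every data point: fuse with each candidate weight, score each
    fusion by the entropy of the local neighborhood, keep the
    minimum-entropy weight, then synthesize the second corrected
    projection from the chosen per-point weights."""
    n = first_corr.shape[0]
    fused = np.stack([(1 - w) * first_corr + w * error_proj for w in weights])
    lo, hi = fused.min(), fused.max() + 1e-12   # shared histogram range
    target = np.empty(n)
    for i in range(n):
        sl = slice(max(0, i - radius), min(n, i + radius + 1))
        ents = []
        for k in range(len(weights)):
            hist, _ = np.histogram(fused[k, sl], bins=bins, range=(lo, hi))
            p = hist / hist.sum()
            p = p[p > 0]
            ents.append(-(p * np.log2(p)).sum())
        target[i] = weights[int(np.argmin(ents))]
    return (1 - target) * first_corr + target * error_proj
```

With a constant first corrected projection and a noisy error projection, the zero weight always yields the flattest (lowest-entropy) neighborhood, so the output reduces to the first corrected projection.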
In a fourth aspect, an embodiment of the present application provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the steps of any one of the methods provided in the first aspect when executing the computer program.
In a fifth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any one of the methods provided in the first and third aspects.
According to the artifact correction method, artifact correction device, computer equipment, and storage medium, a plurality of threads are called to execute in parallel. A preset number of weights are used to fuse a first correction image and an error image into a preset number of virtual fusion images; the information entropy of the neighborhood around each pixel of each virtual fusion image is obtained for each weight; a target weight is determined for each pixel of the error image according to the information entropies of its neighborhood under the different weights; the first correction image and the error image are then fused according to the target weights to obtain a second correction image; and finally, the artifact-corrected image is obtained from the second correction image. Because each thread correspondingly processes one pixel or several adjacent pixels, the calculation process runs many single-threaded data streams in parallel, with the single pixel as the unit of multithreaded processing. This greatly improves computing performance, saves computation time, and improves artifact correction efficiency.
Drawings
FIG. 1 is a diagram of an exemplary embodiment of a method for artifact correction;
FIG. 1a is a schematic diagram of an exemplary CT scanning system;
FIG. 2 is a flow diagram illustrating an exemplary method for artifact correction;
FIG. 2a is a schematic diagram illustrating a pre-and post-artifact correction comparison in one embodiment;
FIG. 3 is a flowchart illustrating an exemplary method for artifact correction according to another embodiment;
FIG. 4 is a flowchart illustrating an exemplary method for artifact correction according to another embodiment;
FIG. 4a is a diagram illustrating single-pixel, single-thread processing in one embodiment;
FIG. 5 is a flowchart illustrating an exemplary method for artifact correction according to another embodiment;
FIG. 6 is a flowchart illustrating an exemplary method for artifact correction according to another embodiment;
FIG. 7 is a schematic illustration of a method of artifact correction in one embodiment;
FIG. 8 is a block diagram showing the structure of an artifact correction device according to an embodiment;
FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The artifact correction method provided by the application can be applied to the application environment shown in fig. 1. The artifact correction method is applied to the computer device shown in fig. 1, where the computer device may be a server, and its internal structure is shown in fig. 1. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing relevant data for artifact correction. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of artifact correction.
The embodiment of the application provides an artifact correction method, an artifact correction device, computer equipment and a storage medium, and can improve artifact correction efficiency. The following describes in detail the technical solutions of the present application and how the technical solutions of the present application solve the above technical problems by embodiments and with reference to the drawings. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. It should be noted that, in the artifact correction method provided in the present application, the execution subjects of fig. 2 to fig. 7 are computer devices. The execution subjects of fig. 2 to 7 may also be artifact correction means, which may be implemented as part of or all of a computer device by software, hardware, or a combination of software and hardware.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments.
The artifact correction method provided by the application can be applied to an image domain and can also be applied to a raw data domain (such as a projection data domain) corresponding to an image.
In an embodiment, taking an example that the artifact correction method is applied to the projection data field and the projection data is data of a CT scan, the artifact correction process includes the following steps:
first, projection data for a CT scan is acquired. Specifically, fig. 1a is a schematic view of a component structure of a CT scanning system, which mainly includes a CT bulb, a collimator, a high voltage generator, a detector, a gantry, a scanning bed, a central controller, and an image reconstruction system. Wherein the CT bulb, the collimator, the detector and the preamplifier are positioned in the frame. The data acquisition of the CT scanner is mainly completed by a CT bulb tube, a detector, a collimator, a filter, an analog-digital conversion module and an interface circuit; the image reconstruction stage is completed by a computer, and finally, an image obtained by scanning reconstruction is displayed on a display and is stored and recorded on a hard disk or other storage media; the whole working process of the CT scanner is controlled by a central controller. The projection data in the embodiments of the present application may be raw data that is not reconstructed, and the raw data may be directly obtained by data acquisition of a CT scanner.
Second, at least a part of the artifact components caused by interference sources is removed from the projection data of the CT scan to generate first corrected projection data, and the first corrected projection data is subtracted from the projection data of the CT scan to generate projection data of an error image. Each data point of the projection data of the error image corresponds one-to-one with a data point of the first corrected projection data.
Thirdly, applying a preset number of weights to each data point of the projection data of the error image and the corresponding data point of the first correction projection data to obtain a fusion data point.
Fourthly, for each fused data point, the information entropy of the neighborhood where the fused data point is located is calculated respectively.
Fifthly, determining target weight for each data point of the projection data of the error image according to the information entropy of the neighborhood where each fused data point is located; the target weight is one of a preset number of weights; alternatively, in one embodiment, the position of each data point of the projection data of the error image is uniquely determined, and the fused data points are obtained by applying different weights, which corresponds to a predetermined number of fused data points for each data point of the projection data of the error image. Then, for a data point at one position of the projection data of the error image, the target weight may be set to a weight corresponding to the minimum information entropy in the information entropies of the neighborhoods where the preset number of fused data points are located; the final determined target weight of each data point of the projection data of the error image may be the same or different.
And sixthly, fusing the first correction projection data and the projection data of the error image by adopting the target weight to obtain a plurality of synthetic data points, wherein the plurality of synthetic data points form the projection data of the second correction image. Specifically, when the first corrected projection data and the projection data of the error image are fused, the target weight of each data point of the projection data of the error image is adopted, and a data point corresponding to the target weight of the projection data of the error image and a data point at a corresponding position in the first corrected projection data are fused to obtain a plurality of synthesized data points.
It should be noted that, in the above embodiment, when the number of processor threads is large, obtaining the target weight of one projection data point can be completed by one thread, which significantly shortens the data processing time. Of course, to economize on processor threads, the target weights of several adjacent projection data points can instead be obtained by a single thread. The processor in this embodiment is a Graphics Processing Unit (GPU), which can run many threads simultaneously; this parallel multithreaded processing markedly increases data processing speed, reduces processing time, readily meets the real-time demands of diagnostic reading, and improves the user experience.
In an embodiment, as shown in fig. 2, an artifact correction method is provided, exemplified by applying the method to the computer device in fig. 1. This embodiment relates to the specific process in which the computer device, with each of a plurality of threads handling one pixel, fuses a first corrected image and an error image into a preset number of virtual fused images in parallel, determines a target weight according to the information entropy of the neighborhood around each pixel of each virtual fused image, and then fuses the first corrected image and the error image according to the target weights to obtain a second corrected image, from which the artifact-corrected image of the image to be corrected is obtained. The embodiment includes the following steps:
s101, calling a plurality of threads to execute correction operation in parallel, and processing one or more pixels correspondingly by one thread.
In one case, the number of threads equals the number of pixels in the image to be corrected, so that one thread is responsible for processing one pixel during the correction operation. That is, while the artifact in the image to be corrected is being corrected, all pixels undergo the same operation simultaneously, each pixel running on its own single thread. This realizes parallel operation over all pixels and improves artifact correction efficiency. Alternatively, when the number of threads is limited and smaller than the number of pixels in the error image, one thread corresponds to two or more pixels.
Optionally, the multiple threads may be implemented by the multithreaded concurrent execution technology of a GPU, of a Field-Programmable Gate Array (FPGA), or of a customized Application Specific Integrated Circuit (ASIC) parallel chip, which is not limited in this embodiment. Implementing multithreaded concurrent execution on a GPU, FPGA, or ASIC chip spares valuable CPU-side computing resources in the computer device and greatly improves computing performance over prior methods, thereby increasing the clinical value of the metal artifact correction algorithm. In this embodiment, a plurality of GPU threads is called to execute the correction operation in parallel. Taking a CT image as an example, the time to process a single CT image with the GPU falls to within one second; compared with the prior art, in which artifacts are removed with the whole image as the processing unit, computing performance can improve by a factor of 50, meeting the doctor's real-time requirements for film-reading diagnosis and improving the user experience.
S102, the correction operation comprises the following steps: s1021, fusing the first correction image and the error image into a preset number of virtual fusion images by adopting a preset number of weights; the first correction image represents the image corresponding to the projection data of the image to be corrected after removing part or all of the artifact components caused by the interference source, and the error image represents the image corresponding to the result of subtracting the projection data of the first correction image from the projection data of the image to be corrected.
The image to be corrected is the image that currently needs correction; for example, it is obtained by reconstructing the projection data of a CT scan. In practical applications, interference sources are often present when CT scans a target imaging region, and the interference sources may differ for different target imaging regions. Optionally, in one embodiment, the target imaging site is water, and calcified tissue, bone, or other material with high attenuation characteristics relative to water is the interference source. In another embodiment, the target imaging site is a human organ, and a substance with high attenuation characteristics relative to the human organ, such as a metal implant, is the interference source. It will be appreciated that the interference source attenuates the beam more strongly than the target imaging region does. Taking a human organ as the target imaging site and metal as the interference source, if there is a metal artifact in the projection data of the CT scan, the metal artifact needs to be corrected. Of course, the image to be corrected includes, but is not limited to, X-ray images, CT (computed tomography) images, PET (positron emission tomography) images, MRI (magnetic resonance imaging) images, ultrasound images, and the like.
The first correction image represents the image corresponding to the projection data of the image to be corrected after removing part or all of the artifact components caused by the interference source, and the error image represents the image corresponding to the result of subtracting the projection data of the first correction image from the projection data of the image to be corrected. For example, the image corresponding to the image to be corrected after the metal portion is removed is the first corrected image, and the image corresponding to the image to be corrected after the first corrected image is removed is the error image.
Specifically, taking an image of a CT scan as an example, projection data of a first correction image is generated by removing at least a part of an artifact component caused by a metal from captured projection data which is projection data obtained by the CT scan, and projection data of an error image is generated by subtracting the generated projection data of the first correction image from the captured projection data. The processing method of the first corrected image may adopt an iterative method, an interpolation method, or other types of artifact correction methods, which is not limited in this embodiment.
In one embodiment, the first corrected image may be determined as follows: generate initial image data from the captured projection data, the initial image data being the image to be corrected; generate tissue classification image data from the initial image data; forward-project the tissue classification image data to generate tissue classification projection data; and generate the projection data of the first corrected image by replacing the projection values of the metal transmission region of the captured projection data with the projection values of the metal transmission region of the tissue classification projection data. Illustratively, the tissue classification image data may be obtained by classifying each pixel of the initial image data into one of several predetermined tissues or organs and replacing it with the CT value predetermined for that tissue or organ.
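The pixel-classification step described here might be sketched as follows (the thresholds and per-class CT values are illustrative assumptions, not values from the patent):

```python
import numpy as np

def tissue_classification(image_hu, thresholds=(-200.0, 300.0),
                          class_values=(-1000.0, 0.0, 1000.0)):
    """Classify each pixel into a predetermined tissue class by its CT
    value and replace it with that class's predetermined CT value
    (air / soft tissue / bone in this sketch)."""
    out = np.full_like(image_hu, class_values[0], dtype=float)
    out[image_hu >= thresholds[0]] = class_values[1]
    out[image_hu >= thresholds[1]] = class_values[2]
    return out
```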
In another embodiment, the first corrected image may be determined as follows: reconstruct the captured projection data to generate initial image data; extract the metal region from the initial image data to obtain metal image data; forward-project the metal image data to generate metal projection data; and generate non-metal projection data by interpolating over the metal transmission region of the captured projection data. In this embodiment, the metal transmission region can be determined by setting a threshold.
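The interpolation over the metal transmission region might be sketched on a single projection row as follows (a 1-D linear-interpolation stand-in; the patent does not specify the interpolation scheme):

```python
import numpy as np

def interpolate_metal_trace(projection_row, metal_mask):
    """Replace projection values inside the metal transmission region
    (here: a boolean mask, e.g. from thresholding) with linear
    interpolation from the surrounding non-metal values."""
    x = np.arange(projection_row.size)
    out = projection_row.astype(float).copy()
    out[metal_mask] = np.interp(x[metal_mask], x[~metal_mask],
                                projection_row[~metal_mask])
    return out
```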
The preset number of weights refers to the number of distinct weight values. Optionally, 101 weights can be generated from 0 to 1 at intervals of 0.01, and these 101 weights are then used to fuse the first corrected image and the error image separately, generating 101 virtual fused images. Of course, in other embodiments the interval may be adjusted and the number of weight values changed accordingly; a finer step such as 0.001 or a coarser step such as 0.1 may also be used. This embodiment places no particular limit on the number of weight values.
Illustratively, let the weight f_weight take the values 0:0.01:1 (i.e., 101 weights generated from 0 to 1 at intervals of 0.01). Fusing the first corrected image I_cor1 and the error image I_err with any one of these 101 weights generates a virtual fusion image I_new by the formula: I_new = (1 - f_weight) * I_cor1 + f_weight * I_err. Therefore, when there are 101 weights f_weight, 101 virtual fusion images I_new are obtained; that is, the number of weights equals the number of generated virtual fusion images.
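Generating the stack of virtual fusion images with this formula can be sketched as (names are illustrative):

```python
import numpy as np

def virtual_fused_images(first_corrected, error_image, step=0.01):
    """One virtual fusion image per candidate weight f in
    0, step, 2*step, ..., 1, using I_new = (1 - f) * I_cor1 + f * I_err."""
    weights = np.arange(0.0, 1.0 + step / 2, step)
    stack = np.stack([(1 - f) * first_corrected + f * error_image
                      for f in weights])
    return weights, stack
```

With the default step of 0.01 this yields exactly 101 weights and 101 images, matching the example in the text.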
When multithreaded parallel execution is adopted, one thread processes one pixel pair, where a pixel pair refers to a pixel in the first corrected image together with the pixel at the corresponding position in the error image. Optionally, in an embodiment, fusing the first corrected image and the error image into the preset number of virtual fusion images with the preset number of weights includes: fusing the first corrected image and the error image with one of the preset number of weights at a time, in sequence, to obtain the preset number of virtual fusion images; and, at each fusion of the first corrected image and the error image, each of the multiple threads processing the pixels at corresponding positions in the first corrected image and the error image.
For example, the preset number of weights are the 101 weights above, one of which is denoted f_n (where the subscript n corresponds to the thread number, and n may take values such as 2, 5, 10, 20, 100, 200). If the pixel pair from the first corrected image and the error image is A and A', thread n fuses pixels A and A' with weight f_n to obtain (1 - f_n) * A + f_n * A'. Thus, for one weight f_n, each thread uses that weight to fuse its pixel pair in the first corrected image and the error image, and the fusion results of all pixel pairs form one virtual fusion image. The next weight is then switched in, each thread fuses its pixel pair with that weight, and the results form the next virtual fusion image; in this way the 101 weights are switched through in turn to obtain 101 virtual fusion images. The weights may be switched in ascending order from 0 to 1, although the order is not limited here.
S1022, for each weight, acquiring the information entropy of the neighborhood where each pixel of the corresponding virtual fusion image is located, and determining a target weight for each pixel of the error image according to the information entropies of the neighborhood where the pixel is located under the different weights; the target weight is one of the preset number of weights.
The information entropy is the average amount of information, with redundancy removed, carried by the neighborhood where each pixel of a virtual fusion image is located. After the information entropy of the neighborhood of each pixel of each virtual fusion image is obtained, the entropies of a pixel's neighborhood under the different weights are compared, and the minimum information entropy of each pixel can be determined. That is, when the entropy of the neighborhood of a pixel is smallest, the artifact (metal artifact) of the interference source in that neighborhood can be considered smallest or absent, so the weight used in the fusion step for the virtual fusion image containing that neighborhood is taken as the target weight. The target weight is therefore one of the preset number of weights; for example, with 101 weights, the target weight is one of those 101. It should be noted that in the embodiments of the present application all pixels are processed in parallel, one thread per pixel, so each thread acquires the information entropy of the neighborhood of its own pixel; likewise, determining the weight corresponding to the minimum neighborhood entropy of a pixel as the target weight is also done per single pixel by a single thread.
As a final result, a target weight is determined for each pixel position. Because the virtual fusion images are formed by fusing the first corrected image and the error image with different weights, the pixel positions of the first corrected image, the error image, and the virtual fusion images correspond one to one. Determining the target weight of each pixel of the virtual fusion images may therefore equivalently be regarded as determining the target weight of each pixel of the error image or of each pixel of the first corrected image, which is not limited in this embodiment.
S1023, fusing the first corrected image and the error image according to the target weights to obtain a second corrected image; the second corrected image is used for acquiring the artifact-corrected version of the image to be corrected.
Since the target weight is determined by the minimum of the neighborhood entropies of each pixel under the different weights, once the target weight of each pixel has been obtained, the image formed by fusing each pixel of the first corrected image with the corresponding pixel of the error image using that pixel's target weight can be considered to contain minimal or no artifact. The first corrected image and the error image are therefore fused with the target weights, and the resulting virtual fusion image is taken as the second corrected image, from which the artifact-corrected image can be obtained. In one embodiment, obtaining the artifact-corrected image from the second corrected image may include the following operation: performing frequency segmentation and fusion on the image to be corrected and the second corrected image to obtain a third corrected image. For example, the high-frequency part of the image to be corrected and the low-frequency part of the second corrected image may be extracted and fused to obtain the third corrected image. The third corrected image is an image in which the interference source is significantly reduced.
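The frequency segmentation and fusion step can be sketched as below, assuming a Gaussian low-pass filter to separate the frequency bands (the patent does not specify the filter; the sigma, radius, and helper names are hypothetical):

```python
import numpy as np

def gaussian_blur(img, sigma=2.0, radius=4):
    """Separable Gaussian low-pass; the blurred image is the low-frequency part."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode='edge')
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, tmp)

def frequency_fusion(i_orig, i_cor2, sigma=2.0):
    """Low frequencies from the second corrected image, high frequencies
    (detail) from the image to be corrected."""
    low_cor2 = gaussian_blur(i_cor2, sigma)
    high_orig = i_orig - gaussian_blur(i_orig, sigma)
    return low_cor2 + high_orig

i_orig = np.random.default_rng(0).normal(100.0, 5.0, (16, 16))
i_cor2 = np.full((16, 16), 100.0)
i_cor3 = frequency_fusion(i_orig, i_cor2)
```

The high-pass part is obtained as the residual of the low-pass, so the two bands sum back to the original image of each source.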
Similarly, in units of single pixels, when the computer device fuses the first corrected image and the error image according to the target weights, one thread processes one pixel. Optionally, in an embodiment, fusing the first corrected image and the error image according to the target weight of each pixel of the error image to obtain the second corrected image includes: acquiring, according to the target weight of each pixel, the weighted sum of the pixels at corresponding positions in the first corrected image and the error image, to obtain the second corrected image.
Both the first corrected image and the error image are derived from the image to be corrected and have the same size; that is, they contain the same number of pixels as the image to be corrected. Therefore, when one thread processes one pixel, that thread fuses the corresponding pixel of the first corrected image with the pixel at the corresponding position in the error image; combined with the target weight, this is the weighted sum of the two pixels. For example, if thread n is responsible for fusing pixel A of the first corrected image with pixel A' of the error image and the target weight is f_n, then (1 - f_n) * A + f_n * A' is the weighted sum obtained by thread n. The pixels handled by the other threads undergo the same operation, so the weighted sums of all pixels of the whole image are finally obtained, which constitute the second corrected image.
To make the artifact correction cleaner, optionally, the metal part of the image to be corrected is fused into the second corrected image to obtain the artifact-corrected image. That is, on the basis of the second corrected image, the metal part of the image to be corrected is fused (added) back into the second corrected image, and the resulting new image can be taken as the final artifact-corrected image. Re-fusing the metal part of the image to be corrected into the second corrected image enhances the display of the metal part and thus reduces the display of the artifact. Referring to fig. 2a, which shows an original CT image after metal artifact correction, it can be seen that both the image noise and the CT mean value are greatly reduced after metal artifact correction, and the metal artifact is effectively corrected.
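The re-fusion of the metal part might look like the following sketch, assuming the metal part is identified by a CT-value threshold and copied back pixel-wise (the threshold of 3000 is a hypothetical stand-in, not a value from the patent):

```python
import numpy as np

def reinsert_metal(i_cor2, i_orig, metal_threshold=3000.0):
    """Copy the metal pixels (values above a CT threshold) of the original
    image back into the second corrected image."""
    out = i_cor2.copy()
    metal = i_orig > metal_threshold
    out[metal] = i_orig[metal]
    return out

i_orig = np.array([[100.0, 3500.0], [120.0, 110.0]])   # one "metal" pixel
i_cor2 = np.array([[ 98.0,  105.0], [118.0, 109.0]])
final = reinsert_metal(i_cor2, i_orig)
```

Only the thresholded metal pixels are restored; all other pixels keep their corrected values.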
In the artifact correction method provided in this embodiment, multiple threads are invoked to execute in parallel: a preset number of weights are used to fuse the first corrected image and the error image into a preset number of virtual fusion images; the information entropy of the neighborhood of each pixel of each virtual fusion image is acquired for each weight; a target weight is determined for each pixel of the error image according to the neighborhood entropies under the different weights; the first corrected image and the error image are then fused according to the target weights to obtain the second corrected image; and finally the artifact-corrected image is obtained from the second corrected image. Throughout the process of obtaining the artifact-corrected image, one thread processes one pixel, so the computation runs as many parallel single-thread data streams, one per pixel. This greatly improves computational performance, saves computation time, and improves artifact correction efficiency.
The following description is made with reference to a specific embodiment of the process of acquiring the information entropy of the neighborhood where each pixel of each virtual fusion image is located in S1022, and determining the target weight of each pixel of the error image according to the information entropy of the neighborhood where each pixel is located under different weights. As shown in fig. 3, in one embodiment, the step S1022 includes the following steps:
S201, acquiring the probability distribution function of the neighborhood where each pixel in each virtual fusion image is located.
To obtain the information entropy of the neighborhood of each pixel of each virtual fusion image, the entropy must be determined from the probability distribution function of that neighborhood. The probability distribution function represents the probability distribution of the pixel values in the neighborhood, and it can be obtained by statistics over the neighborhood matrix of the pixel, for example from a pixel-value histogram or a similar distribution plot.
For example, let the information entropy be denoted H and the probability distribution function of the neighborhood where pixel x is located be denoted p(x). The information entropy is calculated from the probability distribution function as:

H(x) = -Σ_{i=1}^{m} p(x_i) · log p(x_i)

where m represents the number of pixels included in the neighborhood where each pixel is located; m is an integer greater than or equal to 3 and may be, for example, 3, 5, 8, or 10. In practical application, the information entropy of the neighborhood of each pixel can be determined from the probability distribution function p(x) of that neighborhood in the virtual fusion image.
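Estimating p(x) from a neighborhood histogram and evaluating the entropy formula can be sketched as follows (the bin count and value range are illustrative assumptions):

```python
import numpy as np

def neighborhood_entropy(patch, n_bins=16, value_range=(0.0, 1.0)):
    """Information entropy H = -sum_i p(x_i) * log p(x_i) of one pixel's
    neighborhood, with p(x) estimated from the patch's value histogram."""
    hist, _ = np.histogram(patch, bins=n_bins, range=value_range)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log 0 is taken as 0
    return float(-(p * np.log(p)).sum())

uniform = np.zeros((3, 3))                        # all values in one bin
mixed = np.linspace(0.0, 1.0, 9).reshape(3, 3)    # values spread over bins
```

A flat neighborhood carries no information (entropy 0), while a neighborhood whose values spread across many bins has high entropy.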
The process of processing a single pixel by a single thread may refer to the embodiment shown in fig. 4, and as shown in fig. 4, in an embodiment, the process of obtaining a probability distribution function of a neighborhood where each pixel of each virtual fusion image is located includes the following steps:
S301, for a single pixel of a single virtual fusion image, invoking the thread corresponding to the position of that pixel to acquire the probability distribution of the pixel values in its neighborhood, obtaining the probability distribution function of the pixel.
When a single pixel is taken as a unit, the computer device needs to acquire a probability distribution function of a neighborhood where the single pixel is located in a single virtual fusion image.
As shown in fig. 4a, the actual size of the virtual fusion image and the pixels it contains correspond to the range enclosed by the inner solid contour in fig. 4a. For pixels near the solid contour, some neighborhood pixels do not exist; therefore, for convenience of calculation, the image is expanded to the contour formed by the outer dotted line in fig. 4a, and all added pixels are filled with 0. This neither affects the calculation result nor increases the calculation amount, and it ensures that every pixel of the virtual fusion image has a full neighborhood, which keeps the calculation process uniform.
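The zero-padding trick can be sketched as follows; with a 1-pixel border, every pixel of the original image then has a complete 3 × 3 neighborhood (helper names are illustrative):

```python
import numpy as np

# 4x4 "virtual fusion image"; a 3x3 neighborhood needs a 1-pixel border
img = np.arange(16, dtype=float).reshape(4, 4)
radius = 1
padded = np.pad(img, radius, mode='constant', constant_values=0.0)

def neighborhood(padded_img, x, y, radius=1):
    """(2r+1)x(2r+1) neighborhood of pixel (x, y) of the original image,
    read from the zero-padded copy so border pixels need no special case."""
    return padded_img[x:x + 2 * radius + 1, y:y + 2 * radius + 1]
```

Indexing the padded copy at the original coordinates automatically centers the window on the intended pixel.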
Specifically, the computer device invokes a plurality of threads to compute (one thread executes one pixel) the probability distribution of each pixel in the virtual fusion image in the neighborhood where the pixel is located in parallel, that is, the probability distribution function of the neighborhood where the pixel is located can be obtained.
S302, obtaining the probability distribution function of each pixel in each virtual fusion image respectively to obtain the probability distribution function of the neighborhood where each pixel of each virtual fusion image is located.
The probability distribution functions may be acquired image by image, in the order in which the virtual fusion images were generated. For example, after the probability distribution function of the neighborhood of each pixel in one virtual fusion image has been determined, the computer device switches to the next virtual fusion image and determines the probability distribution functions of the neighborhoods of its pixels.
When the probability distribution function of the neighborhood of each pixel in each virtual fusion image is determined, one of the multiple threads handles one pixel in parallel. In this way, one virtual fusion image is computed and the next is switched in, until all virtual fusion images have been computed. For example, with 101 virtual fusion images of 512 × 512 pixels each, corresponding to 512 × 512 threads, the pixels at corresponding positions in the 101 virtual fusion images are executed by the same thread during parallel processing: the upper-left pixel of all 101 virtual fusion images is processed by one thread, and the lower-right pixel by another.
S202, determining the information entropy of the neighborhood where each pixel of each virtual fusion image is located according to the probability distribution function of the neighborhood where each pixel of each virtual fusion image is located.
After the probability distribution function of the neighborhood where each pixel of each virtual fusion image is located is determined, further, the information entropy of the neighborhood where each pixel of each virtual fusion image is located needs to be determined according to the probability distribution function of the neighborhood where each pixel of each virtual fusion image is located.
In the actual calculation process for determining the information entropy of the virtual fusion image according to the probability distribution function, one thread executes one pixel correspondingly, so that the process is described below in units of single pixels. In one embodiment, as shown in fig. 5, an implementation manner of determining the information entropy of the neighborhood where each pixel of each virtual fusion image is located according to the probability distribution function of the neighborhood where each pixel of each virtual fusion image is located includes the following steps:
S401, for a single pixel of a single virtual fusion image, invoking the thread corresponding to the position of that pixel, and acquiring the information entropy of the pixel according to its probability distribution function.
According to the probability distribution function of the neighborhood where the pixel is located, the information entropy of that neighborhood is calculated by the formula

H(x) = -Σ_{i=1}^{m} p(x_i) · log p(x_i)

where m represents the number of pixels in the neighborhood where the pixel is located.
S402, respectively obtaining the information entropy of each pixel in each virtual fusion image to obtain the information entropy of the neighborhood where each pixel is located in each virtual fusion image.
After the neighborhood entropies of all pixels in one virtual fusion image have been determined, the computer device switches to the next virtual fusion image; the entropies of all pixels are computed in parallel, one of the multiple threads per pixel. In this way, after the neighborhood entropy of every pixel in one virtual fusion image has been calculated, the next virtual fusion image is switched in, until all pixels of all virtual fusion images have been processed.
S203, determining, for each pixel, the weight among the preset number of weights that corresponds to the minimum of the neighborhood entropies obtained under the different weights, as the target weight.
After the neighborhood entropy of each pixel has been obtained for every virtual fusion image, the weight, among the preset number of weights, that corresponds to the minimum neighborhood entropy of the pixel under the different weights is determined as the target weight.
Each virtual fusion image was originally obtained with one of the preset number of weights. Therefore, once the virtual fusion image containing the minimum information entropy has been found, the weight used in its fusion is determined, and that weight is necessarily one of the preset number of weights.
In one embodiment, referring to the embodiment shown in fig. 6, in the step S203, the step of determining the corresponding weight of the minimum information entropy in the information entropies of the neighborhoods where each pixel is located under different weights in the preset number of weights as the target weight includes the following steps:
S501, while the information entropies of the pixels of the virtual fusion images are being acquired, comparing the entropy of each pixel in the previous virtual fusion image with the entropy of the pixel at the same position in the next virtual fusion image, and keeping the smaller entropy as the entropy of the pixel at that position, finally obtaining the minimum entropy of each pixel over the different weights.
Take the example of 101 virtual fusion images of 512 × 512 pixels each, corresponding to 512 × 512 threads, where the pixels at corresponding positions in the 101 virtual fusion images are all executed by the same thread. For example, the upper-left pixel of all 101 virtual fusion images is processed by the same thread n. Thread n calculates the neighborhood entropy of the upper-left pixel of the first virtual fusion image, then of the second, then of the third, and so on. It follows that thread n knows the neighborhood entropy of the upper-left pixel of every virtual fusion image; on this basis, each time thread n calculates such an entropy, it keeps only the smaller value by pairwise comparison, until the entropies of the upper-left pixel of all 101 virtual fusion images have been calculated and the minimum neighborhood entropy of that pixel is finally obtained.
For example, if the neighborhood entropy of the upper-left pixel of the first virtual fusion image is smaller than that of the second, the former is kept and compared against the next value; if it is then larger than the neighborhood entropy of the upper-left pixel of the third virtual fusion image, the latter is kept instead; and so on until all 101 virtual fusion images have been processed, at which point thread n holds the minimum neighborhood entropy of the upper-left pixel over the 101 images. By the same process, every other thread retains the minimum entropy of the pixel at its own position, finally yielding a 512 × 512 minimum-entropy matrix M, e.g. M(x, y) = min_w H_w(x, y), the minimum over the weights w of the neighborhood entropy at position (x, y).
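The pairwise keep-the-smaller comparison across weights can be sketched as a running minimum; the entropy maps here are random stand-ins, whereas a real implementation would compute them from the virtual fusion images:

```python
import numpy as np

rng = np.random.default_rng(1)
n_weights, h, w = 5, 4, 4
entropy_maps = rng.uniform(0.0, 3.0, (n_weights, h, w))  # H per weight, per pixel

# running pairwise comparison: keep the smaller entropy and its weight index
min_entropy = entropy_maps[0].copy()
best_idx = np.zeros((h, w), dtype=int)
for k in range(1, n_weights):
    smaller = entropy_maps[k] < min_entropy
    min_entropy[smaller] = entropy_maps[k][smaller]
    best_idx[smaller] = k
```

This streaming form needs only one entropy map in memory at a time, which matches processing the virtual fusion images one after another.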
S502, determining, for each pixel, the virtual fusion image in which its minimum information entropy occurs, and determining the weight, among the preset number, used when fusing that virtual fusion image as the target weight of the pixel at the corresponding position in the error image.
An entropy matrix M of the neighborhoods of the 512 × 512 pixels is finally obtained, where the entropy at each position is the minimum of the neighborhood entropies of the pixels at the corresponding positions of the 101 virtual fusion images. For a single pixel, the virtual fusion image in which its minimum entropy occurs is determined; the weight used when fusing that image is then looked up and used as the target weight of the pixel at the corresponding position in the error image. Once this has been done for every one of the 512 × 512 pixels, the target weight of each pixel of the error image is finally obtained.
After the target weight of each pixel of the error image has been obtained, the weighted sum of each pixel of the error image and the pixel at the corresponding position in the first corrected image can be computed according to that target weight; the weighted result is the value of the pixel at the same position in the second corrected image.
For example, let I_err denote the error image and I_cor1 the first corrected image, and assume the target weight of pixel (x, y) of the error image is f_minweight. Then, for pixel (x, y), the second corrected image I_cor2 can be expressed as: I_cor2(x, y) = f_minweight * I_err(x, y) + (1 - f_minweight) * I_cor1(x, y).
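The per-pixel weighted sum for the second corrected image can be sketched directly from the formula above; the weight map and image values are toy stand-ins:

```python
import numpy as np

def second_corrected(i_err, i_cor1, target_weight):
    """Per-pixel weighted sum
    I_cor2(x, y) = f(x, y) * I_err(x, y) + (1 - f(x, y)) * I_cor1(x, y)."""
    return target_weight * i_err + (1.0 - target_weight) * i_cor1

i_err = np.full((2, 2), 200.0)
i_cor1 = np.full((2, 2), 100.0)
f = np.array([[0.0, 1.0], [0.5, 0.25]])   # hypothetical per-pixel target weights
i_cor2 = second_corrected(i_err, i_cor1, f)
```

Because the target weight is a per-pixel map rather than a scalar, each pixel blends the two images in its own proportion.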
In this embodiment, after the target weight of each pixel of the error image is obtained, the weighted sum of each pixel of the error image and the pixel at the corresponding position in the first corrected image is computed with that target weight, yielding the second corrected image, which may be used to acquire the artifact-corrected image. Since each target weight corresponds to the minimum information entropy over the virtual fusions of the error image and the first corrected image, the second corrected image formed from these weighted sums contains the least artifact, making the artifact correction of the image to be corrected more accurate.
As shown in fig. 7, an embodiment of the present application further provides an artifact correction method, where the embodiment includes:
S1, in a mode where one of multiple threads processes one pixel, fusing the first corrected image and the error image with one of the preset number of weights at a time, in sequence, to obtain the preset number of virtual fusion images;
S2, for a single pixel of a single virtual fusion image, the thread corresponding to that pixel acquiring the probability distribution of the pixel values in its neighborhood, obtaining the probability distribution function of the neighborhood;
S3, respectively acquiring the probability distribution function of the neighborhood where each pixel in each virtual fusion image is located;
S4, acquiring the information entropy of the neighborhood where each pixel in each virtual fusion image is located according to its probability distribution function;
S5, while acquiring the neighborhood entropies, comparing the entropy of each pixel in the previous virtual fusion image with the entropy of the pixel at the same position in the next virtual fusion image, and keeping the smaller entropy as the entropy of the pixel at that position, finally obtaining the minimum entropy of each pixel;
S6, determining the virtual fusion image in which the minimum entropy of each pixel occurs, and determining the weight, among the preset number, used when fusing that image as the target weight of the pixel at the corresponding position in the error image;
S7, acquiring the weighted sum of the pixels at corresponding positions in the first corrected image and the error image according to the target weight of the pixel at the corresponding position in the error image, obtaining the second corrected image;
and S8, acquiring the artifact-corrected image of the image to be corrected according to the second corrected image.
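The steps S1-S7 above can be sketched end to end as a compact single-threaded version, with a small weight step and a naive entropy loop standing in for the per-pixel GPU threads (all parameters are illustrative assumptions):

```python
import numpy as np

def local_entropy_map(img, radius=1, n_bins=8):
    """S2-S4: entropy of every pixel's (2r+1)x(2r+1) neighborhood, with p(x)
    estimated from the neighborhood histogram; borders zero-padded."""
    lo, hi = float(img.min()), float(img.max()) + 1e-9
    pad = np.pad(img, radius, mode='constant')
    h, w = img.shape
    out = np.empty((h, w))
    for x in range(h):
        for y in range(w):
            patch = pad[x:x + 2 * radius + 1, y:y + 2 * radius + 1]
            counts, _ = np.histogram(patch, bins=n_bins, range=(lo, hi))
            p = counts / counts.sum()
            p = p[p > 0]
            out[x, y] = -(p * np.log(p)).sum()
    return out

def artifact_correct(i_cor1, i_err, step=0.25):
    """S1 and S5-S7: sweep the weights, keep per pixel the weight giving the
    minimum neighborhood entropy, then fuse with those target weights."""
    weights = np.arange(0.0, 1.0 + step / 2, step)
    best_h = np.full(i_cor1.shape, np.inf)
    best_w = np.zeros(i_cor1.shape)
    for f in weights:
        fused = (1.0 - f) * i_cor1 + f * i_err        # one virtual fusion image
        h_map = local_entropy_map(fused)
        smaller = h_map < best_h                      # pairwise comparison (S5)
        best_h[smaller] = h_map[smaller]
        best_w[smaller] = f                           # target weight (S6)
    return best_w * i_err + (1.0 - best_w) * i_cor1   # second corrected image (S7)

rng = np.random.default_rng(0)
i_cor1 = rng.uniform(90.0, 110.0, (6, 6))
i_err = rng.uniform(-10.0, 10.0, (6, 6))
i_cor2 = artifact_correct(i_cor1, i_err)
```

In the patent's GPU version, the two inner loops over (x, y) are what the one-thread-per-pixel mapping parallelizes.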
The implementation principle and technical effects of the steps of the artifact correction method provided in this embodiment are similar to those of the foregoing embodiments and are not repeated here. The implementation of each step in the embodiment of fig. 7 is only an example and is not limiting; the order of the steps may be adjusted in practice as long as the purpose of each step is achieved. In addition, in the embodiments of the present application the calculation of the local information entropy of the image is accelerated by executing the pixels in parallel, so in practical applications the same acceleration applies to artifact correction of an ultrasound image, an MR image, or an image shot by an ordinary camera. Furthermore, in some scenarios, on the basis of the artifact correction method provided herein, the maximum information entropy may be calculated on the GPU and the pixel gray values then adjusted to generate an image focusing effect, thereby accelerating automatic focusing and improving autofocus efficiency.
It should be understood that although the steps in the flowcharts of figs. 2-7 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in figs. 2-7 may comprise multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and not necessarily in sequence; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 8, there is provided an artifact correction apparatus, including:
the calling module 10 is used for calling a plurality of thread modules 11 to execute the correction operation in parallel, and one thread module 11 correspondingly processes one or more pixels;
a thread module 11 for performing a correction operation;
the thread module 11 includes: a fusion unit 111, configured to fuse the first corrected image and the error image into a preset number of virtual fusion images by using a preset number of weights; the first correction image represents an image corresponding to the projection data of the image to be corrected after removing part or all of artifact components caused by the interference source, and the error image represents an image corresponding to the projection data of the first correction image subtracted from the projection data of the image to be corrected for the first time;
a determining unit 112, configured to obtain, for each weight, an information entropy of a neighborhood where each pixel of each virtual fusion image is located, and determine a target weight for each pixel of the error image according to the information entropy of the neighborhood where each pixel is located under different weights; the target weight is one of a preset number of weights;
a correction unit 113 for fusing the first corrected image and the error image according to the target weight to obtain a second corrected image; the second correction image is used for acquiring an image subjected to artifact correction of the image to be corrected.
In an embodiment, the fusion unit 111 is specifically configured to sequentially use one of a preset number of weights to fuse the first corrected image and the error image, so as to obtain a preset number of virtual fusion images; and when the first correction image and the error image are fused each time, each thread in the plurality of threads correspondingly processes the pixels at the corresponding positions in the first correction image and the error image.
In one embodiment, the determining unit 112 includes:
the distribution function obtaining subunit is used for obtaining a probability distribution function corresponding to a neighborhood where each pixel of each virtual fusion image is located;
the information entropy determining subunit is used for determining the information entropy of the neighborhood where each pixel of each virtual fusion image is located according to the probability distribution function of the neighborhood where each pixel of each virtual fusion image is located;
and the weight determining subunit is used for determining, among the preset number of weights, the weight corresponding to the minimum information entropy among the information entropies of the neighborhood where each pixel is located under the different weights, as the target weight.
In an embodiment, the distribution function obtaining subunit is specifically configured to, for a single pixel of a single virtual fusion image, call a thread corresponding to a position of the pixel to obtain probability distribution of the pixel in a neighborhood where the pixel is located, and obtain a probability distribution function of the pixel; and respectively obtaining the probability distribution function of each pixel in each virtual fusion image to obtain the probability distribution function of the neighborhood where each pixel of each virtual fusion image is located.
In an embodiment, the information entropy determining subunit is specifically configured to, for a single pixel of a single virtual fusion image, call a thread corresponding to a position of the pixel, and obtain an information entropy of the pixel according to a probability distribution function of the pixel; and respectively obtaining the information entropy of each pixel in each virtual fusion image to obtain the information entropy of the neighborhood where each pixel in each virtual fusion image is located.
In an embodiment, the weight determining subunit is specifically configured to: each time the information entropy of each pixel in a virtual fusion image is obtained, compare the information entropy of each pixel in the previous virtual fusion image with the information entropy of the pixel at the same position in the next virtual fusion image, retain the smaller of the two as the information entropy of the pixel at that position, and finally obtain the minimum information entropy of each pixel under the different weights; and determine the virtual fusion image in which the minimum information entropy of each pixel is located, and determine the weight, among the preset number of weights, adopted by that virtual fusion image during fusion as the target weight of the pixel at the corresponding position in the error image.
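The pairwise retain-the-smaller comparison described above can be sketched as a streaming minimum over the per-weight entropy maps, so that only the running minimum is kept in memory; the function name and the plain-list representation are illustrative choices, not the patented implementation.

```python
def streaming_target_weights(entropy_maps, weights):
    """Streaming pairwise comparison: for each pixel, keep only the
    running minimum entropy and the weight that produced it.
    entropy_maps yields one per-pixel entropy map (list of lists) per
    candidate weight, in the same order as `weights`."""
    best = None
    target = None
    for ent, w in zip(entropy_maps, weights):
        if best is None:
            best = [row[:] for row in ent]           # first map starts the minimum
            target = [[w] * len(row) for row in ent]
            continue
        for i, row in enumerate(ent):
            for j, e in enumerate(row):
                if e < best[i][j]:                   # smaller entropy wins
                    best[i][j] = e
                    target[i][j] = w
    return target, best
```

Because only two maps are compared at a time, the entropy maps for earlier weights can be discarded as soon as the comparison is done.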
In an embodiment, the correction unit 113 is specifically configured to obtain a weighted sum of the first corrected image and a pixel at a corresponding position in the error image according to a target weight of each pixel in the error image, so as to obtain the second corrected image.
In one embodiment, the number of the threads is equal to or greater than the number of pixels included in the error image, and the plurality of threads simultaneously determine the target weight for the corresponding pixels.
In one embodiment, the number of threads is less than the number of pixels included in the error image, and at least one thread determines the target weight for at least two pixels.
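One way to realize the mapping in which a single thread determines the target weight for two or more pixels is a stride assignment, sketched below; the stride scheme and the thread-pool realization are assumptions made for illustration, since the embodiments leave the exact assignment open.

```python
from concurrent.futures import ThreadPoolExecutor

def assign_pixels(num_threads, num_pixels):
    """Stride assignment: thread t handles pixels t, t+T, t+2T, ...,
    so every pixel is covered even when threads < pixels."""
    return {t: list(range(t, num_pixels, num_threads))
            for t in range(num_threads)}

def parallel_apply(func, values, num_threads):
    """Apply a per-pixel function with a fixed-size thread pool; each
    worker processes its stride of indices, so at least one thread
    handles at least two pixels when threads < pixels."""
    out = [None] * len(values)

    def worker(t):
        for idx in range(t, len(values), num_threads):
            out[idx] = func(values[idx])  # each index written by one thread only

    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        list(pool.map(worker, range(num_threads)))
    return out
```

When the number of threads equals or exceeds the number of pixels, each stride degenerates to a single index, matching the one-thread-per-pixel embodiment.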
In one embodiment, the apparatus further comprises:
the segmentation module is used for segmenting a high-frequency partial image of the image to be corrected and a low-frequency partial image of the second correction image;
and the acquisition module is used for fusing the high-frequency partial image and the low-frequency partial image to obtain a third corrected image, wherein the third corrected image is an image subjected to artifact correction of the image to be corrected.
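The segmentation and acquisition modules can be illustrated with a simple frequency split; the separable Gaussian low-pass filter used here to separate the high- and low-frequency parts is an assumption, since the embodiment does not name a specific filter.

```python
import numpy as np

def gaussian_blur(img, sigma=2.0, radius=4):
    """Separable Gaussian low-pass filter (a stand-in for the
    unspecified frequency-segmentation filter)."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    padded = np.pad(img, radius, mode="edge")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def third_corrected_image(image_to_correct, second_corrected):
    """Combine the high-frequency detail of the image to be corrected
    with the low-frequency content of the second corrected image."""
    high = image_to_correct - gaussian_blur(image_to_correct)  # high-frequency part
    low = gaussian_blur(second_corrected)                      # low-frequency part
    return high + low
```

The intent, as described above, is that fine structural detail survives from the original image while the artifact-suppressed low-frequency content comes from the second corrected image.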
For the specific definition of the artifact correction device, reference may be made to the above definition of the artifact correction method, which is not repeated here. The modules in the artifact correction device can be implemented in whole or in part by software, hardware, or a combination thereof. The modules can be embedded in hardware form in, or be independent of, a processor in the computer device, or can be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication can be realized through Wi-Fi, an operator network, NFC (near field communication), or other technologies. The computer program is executed by the processor to implement an artifact correction method. The display screen of the computer device can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device can be a touch layer covering the display screen, keys, a trackball, or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply, as a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
calling a plurality of threads to execute correction operation in parallel, wherein one thread correspondingly processes one or more pixels;
the correction operation comprises: fusing the first correction image and the error image into a preset number of virtual fusion images by adopting a preset number of weights; the first correction image represents an image corresponding to the projection data of the image to be corrected after part or all of the artifact components caused by an interference source are removed, and the error image represents an image corresponding to the projection data obtained by subtracting the projection data of the first correction image from the projection data of the image to be corrected;
for each weight, acquiring the information entropy of the neighborhood where each pixel of each virtual fusion image is located, and determining a target weight for each pixel of the error image according to the information entropy of the neighborhood where each pixel is located under different weights; the target weight is one of a preset number of weights;
fusing the first corrected image and the error image according to the target weight to obtain a second corrected image; the second correction image is used for acquiring an image subjected to artifact correction of the image to be corrected.
The implementation principle and technical effect of the computer device provided by the above embodiment are similar to those of the above method embodiment, and are not described herein again.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
calling a plurality of threads to execute correction operation in parallel, wherein one thread correspondingly processes one or more pixels;
the correction operation comprises: fusing the first correction image and the error image into a preset number of virtual fusion images by adopting a preset number of weights; the first correction image represents an image corresponding to the projection data of the image to be corrected after part or all of the artifact components caused by an interference source are removed, and the error image represents an image corresponding to the projection data obtained by subtracting the projection data of the first correction image from the projection data of the image to be corrected;
for each weight, acquiring the information entropy of the neighborhood where each pixel of each virtual fusion image is located, and determining a target weight for each pixel of the error image according to the information entropy of the neighborhood where each pixel is located under different weights; the target weight is one of a preset number of weights;
fusing the first corrected image and the error image according to the target weight to obtain a second corrected image; the second correction image is used for acquiring an image subjected to artifact correction of the image to be corrected.
The implementation principle and technical effect of the computer-readable storage medium provided by the above embodiments are similar to those of the above method embodiments, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be combined arbitrarily. For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of artifact correction, the method comprising:
calling a plurality of threads to execute correction operation in parallel, wherein one thread correspondingly processes one or more pixels;
the correcting operation includes:
fusing the first correction image and the error image into a preset number of virtual fusion images by adopting a preset number of weights; the first correction image represents an image corresponding to projection data of an image to be corrected after part or all of the artifact components caused by an interference source are removed, and the error image represents an image corresponding to projection data obtained by subtracting the projection data of the first correction image from the projection data of the image to be corrected;
for each weight, acquiring the information entropy of the neighborhood where each pixel of each virtual fusion image is located, and determining a target weight for each pixel of an error image according to the information entropy of the neighborhood where each pixel is located under different weights; the target weight is one of the preset number of weights;
fusing the first correction image and the error image according to the target weight to obtain a second correction image; the second correction image is used for acquiring an image of the image to be corrected after artifact correction.
2. The method according to claim 1, wherein said fusing the first corrected image and the error image into the preset number of virtual fused images with the preset number of weights comprises:
sequentially adopting one weight in the preset number of weights to fuse the first corrected image and the error image to obtain the preset number of virtual fused images;
and when the first correction image and the error image are fused each time, each thread in the plurality of threads correspondingly processes the pixels at the corresponding position in the first correction image and the error image.
3. The method according to claim 2, wherein the obtaining information entropy of a neighborhood in which each pixel of each of the virtual fusion images is located, and determining a target weight for each pixel of the error image according to the information entropy of the neighborhood in which each pixel is located under different weights comprises:
acquiring a probability distribution function corresponding to a neighborhood where each pixel of each virtual fusion image is located;
determining the information entropy of the neighborhood where each pixel of each virtual fusion image is located according to the probability distribution function of the neighborhood where each pixel of each virtual fusion image is located;
and determining, among the preset number of weights, the weight corresponding to the minimum information entropy among the information entropies of the neighborhoods where each pixel is located under the different weights, as the target weight.
4. The method of claim 1, wherein said fusing the first corrected image and the error image to obtain a second corrected image according to the target weight comprises:
and acquiring the weighted sum of the pixels at the corresponding positions in the first corrected image and the error image according to the target weight of each pixel in the error image to obtain the second corrected image.
5. The method of claim 1, wherein the number of threads is equal to or greater than the number of pixels included in the error image, and wherein multiple threads simultaneously determine the target weight for the corresponding pixels.
6. The method of claim 1, wherein the number of threads is less than the number of pixels included in the error image, and wherein at least one thread determines a target weight for at least two pixels.
7. The method according to any one of claims 1 to 6, further comprising:
segmenting a high-frequency partial image of an image to be corrected and a low-frequency partial image of the second correction image;
and fusing the high-frequency partial image and the low-frequency partial image to obtain a third corrected image, wherein the third corrected image is an image subjected to artifact correction of the image to be corrected.
8. An artifact correction device, characterized in that said device comprises:
the calling module is used for calling a plurality of thread modules to execute correction operation in parallel, and one thread module correspondingly processes one or more pixels;
the thread module is used for executing the correction operation;
the thread module comprises:
the fusion unit is used for fusing the first correction image and the error image into a preset number of virtual fusion images by adopting a preset number of weights; the first correction image represents an image corresponding to projection data of an image to be corrected after part or all of the artifact components caused by an interference source are removed, and the error image represents an image corresponding to projection data obtained by subtracting the projection data of the first correction image from the projection data of the image to be corrected;
the determining unit is used for acquiring the information entropy of the neighborhood where each pixel of each virtual fusion image is located for each weight, and determining a target weight for each pixel of the error image according to the information entropy of the neighborhood where each pixel is located under different weights; the target weight is one of the preset number of weights;
a correction unit for fusing the first corrected image and the error image according to the target weight to obtain a second corrected image; the second correction image is used for acquiring an image of the image to be corrected after artifact correction.
9. A method of artifact correction, the method comprising:
calling a plurality of threads to execute correction operation in parallel, wherein one thread correspondingly processes one or more projection data;
the correcting operation includes:
acquiring projection data of CT scanning;
generating first corrected projection data by removing at least a portion of an artifact component caused by an interference source from projection data of the CT scan; subtracting the first correction projection data from the projection data of the CT scan to generate projection data of an error image;
applying a preset number of weights to each data point of the projection data of the error image and the corresponding data point of the first correction projection data to obtain a fusion data point;
for each fused data point, respectively calculating the information entropy of the neighborhood where the fused data point is located;
determining target weight for each data point of the projection data of the error image according to the information entropy of the neighborhood where each fused data point is located;
and fusing the first correction projection data and the projection data of the error image by adopting target weight to obtain a plurality of synthetic data points, wherein the plurality of synthetic data points form the projection data of the second correction image.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7, or 9.
CN202010672813.0A 2020-07-14 2020-07-14 Artifact correction method, artifact correction device and storage medium Pending CN113936068A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010672813.0A CN113936068A (en) 2020-07-14 2020-07-14 Artifact correction method, artifact correction device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010672813.0A CN113936068A (en) 2020-07-14 2020-07-14 Artifact correction method, artifact correction device and storage medium

Publications (1)

Publication Number Publication Date
CN113936068A true CN113936068A (en) 2022-01-14

Family

ID=79273799

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010672813.0A Pending CN113936068A (en) 2020-07-14 2020-07-14 Artifact correction method, artifact correction device and storage medium

Country Status (1)

Country Link
CN (1) CN113936068A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117808718A (en) * 2024-02-29 2024-04-02 江西科技学院 Method and system for improving medical image data quality based on Internet
CN117808718B (en) * 2024-02-29 2024-05-24 江西科技学院 Method and system for improving medical image data quality based on Internet

Similar Documents

Publication Publication Date Title
CN106683144B (en) Image iterative reconstruction method and device
US9384555B2 (en) Motion correction apparatus and method
CN106683143B (en) Image metal artifact correction method
US8355555B2 (en) System and method for multi-image based virtual non-contrast image enhancement for dual source CT
JP2007530086A (en) Correction of metal artifacts in CT
CN109461192B (en) Image iterative reconstruction method, device and equipment and storage medium
US10013778B2 (en) Tomography apparatus and method of reconstructing tomography image by using the tomography apparatus
US20170172534A1 (en) Thoracic imaging for cone beam computed tomography
CN110400359B (en) Method, device and equipment for eliminating image artifacts and storage medium
KR20220040872A (en) Image Processing Method and Image Processing Device using the same
Su et al. A deep learning method for eliminating head motion artifacts in computed tomography
KR101467380B1 (en) Method and Apparatus for improving quality of medical image
CN108898582B (en) Heart image reconstruction method and device and computer equipment
CN113936068A (en) Artifact correction method, artifact correction device and storage medium
CN112200780B (en) Bone tissue positioning method, device, computer equipment and storage medium
JP2005193008A (en) Method and apparatus for segmentation-base image operation
US20210027430A1 (en) Image processing apparatus, image processing method, and x-ray ct apparatus
CN111462273A (en) Image processing method and device, CT (computed tomography) equipment and CT system
JP7456928B2 (en) Abnormal display control method of chest X-ray image, abnormal display control program, abnormal display control device, and server device
US20240135502A1 (en) Generalizable Image-Based Training Framework for Artificial Intelligence-Based Noise and Artifact Reduction in Medical Images
US20220414832A1 (en) X-ray imaging restoration using deep learning algorithms
KR102480389B1 (en) Method and apparatus for bone suppression in X-ray Image
Zhang et al. Metal artifact reduction based on the combined prior image
KR102342954B1 (en) Apparatus and method for removing metal artifact of computer tomography image based on artificial intelligence
JP5196801B2 (en) Digital tomography imaging processor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination