WO2014132830A1 - Image processing apparatus, magnetic resonance imaging apparatus, and image processing method - Google Patents
Image processing apparatus, magnetic resonance imaging apparatus, and image processing method
- Publication number: WO2014132830A1 (application PCT/JP2014/053679)
- Authority: WIPO (PCT)
Classifications
- G06T5/70: Denoising; Smoothing (under G06T5/00, Image enhancement or restoration)
- A61B5/7246: Details of waveform analysis using correlation, e.g. template matching or determination of similarity
- G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
- G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- A61B5/055: Detecting, measuring or recording for diagnosis involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
- A61B8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
- G06T2207/10088: Magnetic resonance imaging [MRI]
- G06T2207/20024: Filtering details
- G06T2207/20182: Noise reduction or smoothing in the temporal domain; spatio-temporal filtering
- G06T2207/20221: Image fusion; image merging
- G06T2207/30004: Biomedical image processing
Definitions
- The present invention relates to a magnetic resonance imaging (hereinafter, MRI) technique that measures nuclear magnetic resonance signals (hereinafter, NMR signals) from hydrogen, phosphorus, and other nuclei in a subject and images the nuclear density distribution, relaxation time distribution, and so on, and to an image filtering technique for such images.
- An MRI apparatus measures the NMR signals generated by the nuclear spins of the subject, in particular the tissues of the human body, and visualizes the form and function of the head, abdomen, limbs, and so on in two or three dimensions.
- The NMR signal measured by the MRI apparatus contains thermal noise, which arises from the irregular motion of electrons in the electronic circuitry. This noise is superimposed on the obtained image and lowers its signal-to-noise (SN) ratio.
- Filtering is one way to address this problem. Because filtering is post-processing applied after the measurement, it has the advantage of not extending the measurement time. However, a linear filter removes not only noise but also necessary signal information, so blurring of the image, that is, a loss of resolution, becomes a problem.
- Filtering methods that prevent the loss of necessary information contained in an image have been proposed. For example, one method detects the directionality of the texture around the target pixel, or detects structures such as edges, and changes the shape of the filter's weight function based on the result so that the structure in the image is maintained (see, for example, Non-Patent Document 1). The directionality and structure are detected from the pixel values of the image.
- In the method of Non-Patent Document 1, any change in pixel value is judged to be texture or structure, so changes in pixel value caused by noise are preserved as well. In addition, because the structure is detected from the image itself, noise may be misjudged as structure, which can leave unnatural structures after filtering.
- The present invention has been made in view of the above circumstances, and aims to provide a technique that reduces the structure-judgment errors caused by noise and removes the noise superimposed on an image while retaining significant information, without producing unnaturalness in the resulting image.
- To this end, a reference image created from a plurality of original images is compared with each original image to calculate a similarity, which is used as an index for noise judgment. Using this index, each original image is smoothed, and the smoothed images are synthesized to obtain the final denoised image. The original images may also first be divided into a plurality of wavelength bands based on the maximum wavelength of the noise, with the reference image created and the smoothing performed for each band.
- In this way, the structure-judgment error caused by noise can be reduced, and the noise superimposed on the image can be removed without causing unnaturalness in the resulting image while significant information is retained.
- FIG. 1 is a block diagram showing the overall configuration of the MRI apparatus of this embodiment.
- The MRI apparatus 100 of the present embodiment obtains tomographic images of a subject using the NMR phenomenon. As shown in FIG. 1, it comprises a static magnetic field generation system 120, a gradient magnetic field generation system 130, a high-frequency magnetic field generation system (hereinafter, transmission system) 150, a high-frequency magnetic field detection system (hereinafter, reception system) 160, a control processing system 170, and a sequencer 140.
- The static magnetic field generation system 120 generates a uniform static magnetic field in the space around the subject 101: perpendicular to the body axis in the vertical magnetic field method, or along the body axis in the horizontal magnetic field method.
- It includes a permanent-magnet, normal-conducting, or superconducting static magnetic field source disposed around the subject 101.
- The gradient magnetic field generation system 130 comprises gradient coils 131 wound in the three axis directions X, Y, and Z of the coordinate system (device coordinate system) of the MRI apparatus 100, and a gradient magnetic field power source 132 that drives each gradient coil. By driving the power source 132 of each gradient coil 131 in accordance with commands from the sequencer 140 described later, gradient magnetic fields Gx, Gy, and Gz are applied in the three axis directions X, Y, and Z.
- The transmission system 150 irradiates the subject 101 with high-frequency magnetic field pulses (hereinafter, RF pulses) to excite nuclear magnetic resonance in the nuclear spins of the atoms constituting the biological tissue of the subject 101.
- the high frequency oscillator 152 generates an RF pulse and outputs it at a timing according to a command from the sequencer 140.
- The modulator 153 amplitude-modulates the output RF pulse; the high-frequency amplifier 154 amplifies the modulated pulse and supplies it to the transmission coil 151 disposed close to the subject 101.
- the transmission coil 151 irradiates the subject 101 with the supplied RF pulse.
- The reception system 160 detects the nuclear magnetic resonance signals (echo signals, NMR signals) emitted by nuclear magnetic resonance of the nuclear spins constituting the living tissue of the subject 101, and comprises a receiving-side high-frequency coil (receiving coil) 161, a signal amplifier 162, a quadrature phase detector 163, and an A/D converter 164.
- The reception coil 161 is disposed near the subject 101 and detects the NMR signal that the subject 101 emits in response to the electromagnetic waves irradiated from the transmission coil 151.
- The detected NMR signal is amplified by the signal amplifier 162, divided into two orthogonal signals by the quadrature phase detector 163 at a timing commanded by the sequencer 140, converted into digital values by the A/D converter 164, and sent to the control processing system 170.
- The sequencer 140 applies RF pulses and gradient magnetic field pulses in accordance with instructions from the control processing system 170. Specifically, it transmits the various commands necessary for collecting tomographic image data of the subject 101 to the transmission system 150, the gradient magnetic field generation system 130, and the reception system 160.
- the control processing system 170 controls the entire MRI apparatus 100, performs various data processing operations, displays and stores processing results, and includes a CPU 171, a storage device 172, a display device 173, and an input device 174.
- the storage device 172 includes an internal storage device such as a hard disk and an external storage device such as an external hard disk, an optical disk, and a magnetic disk.
- the display device 173 is a display device such as a CRT or a liquid crystal.
- the input device 174 is an interface for inputting various control information of the MRI apparatus 100 and control information of processing performed by the control processing system 170, and includes, for example, a trackball or a mouse and a keyboard.
- the input device 174 is disposed in the vicinity of the display device 173. The operator interactively inputs instructions and data necessary for various processes of the MRI apparatus 100 through the input device 174 while looking at the display device 173.
- The CPU 171 implements each process of the control processing system 170, such as controlling the operation of the MRI apparatus 100 and performing various data processing, by executing programs stored in the storage device 172 in accordance with instructions input by the operator. For example, when data from the reception system 160 is input to the control processing system 170, the CPU 171 executes signal processing, image reconstruction processing, and the like, displays the resulting tomographic image of the subject 101 on the display device 173, and stores it in the storage device 172.
- In the vertical magnetic field method, the transmission coil 151 and the gradient coils 131 are installed facing the subject 101; in the horizontal magnetic field method, they are installed surrounding the subject 101. The receiving coil 161 is likewise installed facing or surrounding the subject 101.
- The nuclide imaged by clinically widespread MRI apparatuses is the hydrogen nucleus (proton), the main constituent of the subject 101.
- By imaging information on the spatial distribution of proton density and the spatial distribution of relaxation times of the excited states, the MRI apparatus 100 images the form or function of the human head, abdomen, limbs, and so on, two- or three-dimensionally.
- The control processing system 170 of the present embodiment realizes the functions of an imaging unit 210, which operates a measurement system (the static magnetic field generation system 120, gradient magnetic field generation system 130, transmission system 150, and reception system 160) according to a pulse sequence and arranges the measured nuclear magnetic resonance signals in k-space to obtain raw data; an image reconstruction unit 220, which reconstructs an image from the raw data; and an image processing unit (image processing apparatus) 230, which applies image processing.
- the image processing unit 230 of the present embodiment performs a filtering process for reducing noise using a plurality of images obtained by imaging the same imaging range.
- The plurality of images used by the image processing unit 230 for the filtering process is referred to as the "plurality of original images" or the "original image group"; a single image in the original image group is simply called an "original image".
- the image after the filtering process is referred to as a “composite image”.
- The image processing unit 230 of the present embodiment creates a reference image by combining a plurality of original images, calculates a similarity for each original image by comparing it with the created reference image, smooths each original image based on the calculated similarity, and combines the smoothed original images to obtain a composite image.
- the original image group is obtained by the image reconstruction unit 220 reconstructing from the raw data group acquired by the imaging unit 210.
- The raw data group is obtained by the imaging unit 210, for example, by measurement using a plurality of receiving coils 161. Alternatively, it may be obtained by measuring several times under the same imaging conditions. A plurality of images of the same imaging range is used as the original images because images on which different random noise is superimposed improve the accuracy of noise extraction.
- As shown in the figure, the image processing unit 230 comprises: a band dividing unit 310 that divides each of the plurality of original images 410 (original image group) into predetermined wavelength bands to generate wavelength band original images 420 (wavelength band original image group); a reference image creation unit 320 that synthesizes the wavelength band original images 420 of each wavelength band to create a reference image 430 for each band; a similarity calculation unit 330 that compares each wavelength band original image 420 with the reference image 430 of the same wavelength band, calculates its similarity, and generates a similarity map 440; a denoising unit 340 that smooths each wavelength band original image 420 using its similarity to generate denoising images 450 (denoising image group); and a synthesizing unit 350 that synthesizes the denoising images into a composite image.
- the data and the image group generated during the processing executed by each unit are stored in, for example, the RAM of the storage device 172.
- FIG. 4 is a process flow for explaining the flow of the filtering process of the present embodiment.
- N and M are integers of 1 or more.
- the band dividing unit 310 performs band dividing processing on each of the N original images 410 (step S1001).
- N ⁇ M wavelength band original images 420 are obtained.
- the reference image creation unit 320 performs reference image creation processing to obtain a reference image 430 for each wavelength band (step S1002).
- One reference image 430 is created for each band. For this reason, M reference images 430 are obtained here.
- the similarity calculation unit 330 performs similarity calculation processing to calculate the similarity (step S1003).
- the similarity is calculated for each pixel in two directions, the x direction and the y direction, to obtain a similarity map 440. Accordingly, 2 ⁇ N ⁇ M similarity maps 440 are obtained.
- the denoising unit 340 performs denoising processing to generate a denoising image 450 (step S1004).
- denoising processing is performed for each of the N ⁇ M wavelength band original images 420 for each pixel in two directions, the x direction and the y direction. Therefore, N ⁇ M denoising images are obtained.
- composition unit 350 performs image composition processing to obtain a composite image 460 (step S1005).
- the N ⁇ M denoising images that have undergone the denoising process are synthesized to generate one synthesized image 460.
- the band dividing unit 310 performs band dividing processing for dividing the original image group 410 into predetermined wavelength bands.
- Here, each image is divided into two groups: components whose wavelength is longer than a predetermined cutoff wavelength and components whose wavelength is shorter than the cutoff wavelength.
- The band dividing unit of the present embodiment realizes this with a low-pass filter and a high-pass filter in which the same cutoff wavelength is set.
- The number of wavelength bands is not limited to two.
- FIG. 5 is a processing flow of the band division processing according to the present embodiment.
- Each original image 410 is assigned an original image number in advance.
- the original image of the original image number n is represented as I Org (n, x, y).
- the band dividing unit 310 sets the cutoff wavelength of the low pass filter and the high pass filter (step S1101).
- The following processing from step S1103 to step S1106 is performed for each of the plurality of original images (original image group I Org (n, x, y)), incrementing the original image number n one at a time (steps S1102 and S1107).
- The band dividing unit 310 reads the nth original image I Org (n, x, y) (step S1103). Low-pass filter processing is then applied to the original image (step S1104), followed by high-pass filter processing (step S1105). Either filter may be applied first.
- the band division unit 310 stores the obtained low-pass filter result and high-pass filter result in the storage device 172 as the wavelength band original image 420, respectively (step S1106).
- a band number is assigned to each band after wavelength band division.
- the wavelength band original image of band number m is represented as I Sep (m, n, x, y).
- band number 1 is given to the low-pass filter result, which is called the first wavelength band original image I Sep (1, n, x, y).
- the high pass filter result is assigned a band number 2 and is called a second wavelength band original image I Sep (2, n, x, y).
- The cutoff wavelength is determined according to the noise wavelength. That is, when the maximum noise wavelength λ Noise is J pixels, the cutoff wavelength is 1/J (1/pixels). For example, if the maximum noise wavelength λ Noise is 3 pixels, the cutoff wavelength is 1/3 (1/pixels). In general, a maximum noise wavelength λ Noise of 3 pixels is suitable, but when an interpolated image is used as the original image, the value in pixels is increased in proportion to the interpolation magnification.
- a Gaussian filter may be used as the low-pass filter. It is preferable to use a filter having a gentle cutoff characteristic such as Gaussian rather than a filter having a steep cutoff characteristic because ringing hardly occurs in an image.
- the cutoff wavelength ⁇ cutoff is set as follows.
- the amplitude transfer characteristic H ( ⁇ ) of the Gaussian filter is expressed by the following equation (1).
- ⁇ is represented by the following formula (2).
- ⁇ is a wavelength (pixels). From Expressions (1) and (2), when the attenuation rate (%) of the signal component below the maximum noise wavelength ⁇ Noise is ⁇ , the cutoff wavelength ⁇ cutoff can be calculated by the following Expression (3).
- the cutoff wavelength ⁇ cutoff is about 7.7 (pixels).
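As an illustrative numerical sketch (not a transcription of the patent's equation (3), whose exact form is not reproduced in this text), the closed form below is inferred from the worked example: a Gaussian low-pass whose half-amplitude wavelength is chosen so that components at the maximum noise wavelength are attenuated to about α percent reproduces the quoted ≈7.7-pixel cutoff for λ Noise = 3 pixels at roughly 1 % residual amplitude.

```python
import math

def cutoff_wavelength(lambda_noise, alpha_pct):
    """Hypothetical form of equation (3): pick the Gaussian cutoff
    (half-amplitude wavelength) so that components at the maximum
    noise wavelength lambda_noise are attenuated to alpha_pct percent.
    The formula is an inference from the text's worked example."""
    return lambda_noise * math.sqrt(math.log(100.0 / alpha_pct) / math.log(2.0))

# lambda_Noise = 3 px with ~1 % residual gives roughly the 7.7 px
# cutoff quoted in the text.
lc = cutoff_wavelength(3, 1)
print(round(lc, 1))
```

This is only a plausibility check; the patent's own equations (1) through (3) should be consulted for the authoritative definition.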
- the low-pass filter processing in step S1104 follows the following equation (4).
- h ( ⁇ ) is a weight function of the Gaussian filter.
- the high-pass filter processing in step 1105 follows the following equation (5).
- The maximum noise wavelength λ Noise is set by the user via a GUI described later, or it may be stored in the storage device 172 in advance.
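The two-band split of steps S1104 and S1105 can be sketched as follows. This is a minimal illustration, assuming (since equations (4) and (5) are not reproduced here) that the low band is a separable Gaussian low-pass and the high band is the residual of it, so that the two bands sum back to the original image, which is consistent with the later recombination of the bands.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # 1-D Gaussian weights, normalised to sum to 1
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def band_split(image, sigma):
    """Split an image into a low-frequency and a high-frequency band.

    The low band is a separable Gaussian low-pass (applied along rows,
    then columns); the high band is taken as the residual (original
    minus low band), so the two bands reconstruct the original exactly.
    """
    k = gaussian_kernel(sigma)
    low = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    low = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, low)
    high = image - low
    return low, high

# The two bands sum back to the original image
img = np.random.default_rng(0).normal(size=(32, 32))
low, high = band_split(img, sigma=1.5)
```

A gentle Gaussian cutoff is used here for the reason the text gives: it is less prone to ringing than a steep cutoff.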
- FIG. 6 is a process flow of the reference image creation process of the present embodiment.
- The reference image creation unit 320 repeats the following processing from step S1202 to step S1206 for each of the M divided wavelength bands (M is an integer of 2 or more) (steps S1201 and S1207).
- In the present embodiment, one cutoff wavelength is used and the band is divided into two by the low-pass filter and the high-pass filter, so the processing is repeated twice, for m = 1 and m = 2.
- the reference image creation unit 320 reads all the wavelength band original images 420 having the band number m generated from the N original images 410 (steps S1202, S1203, and S1204).
- the reference image creation unit 320 synthesizes the read wavelength band original image 420 to obtain one reference image 430 for each wavelength band (step S1205).
- an average value of the wavelength band original images 420 of the same wavelength band obtained from all the original images 410 is calculated according to the following equation (6).
- I Ref (m, x, y) is the reference image 430 of band number m
- N is the total number of original images 410, that is, the number of wavelength band original images of band number m obtained from the N original images 410.
- the reference image creation unit 320 stores the obtained reference image I Ref (m, x, y) 430 in the storage device 172 (step S1206).
- The reference image I Ref (1, x, y) obtained from the first wavelength band original images I Sep (1, n, x, y) is referred to as the first reference image, and the reference image I Ref (2, x, y) obtained from the second wavelength band original images I Sep (2, n, x, y) is referred to as the second reference image.
- the reference image for each wavelength band is obtained by calculating the average value, but the calculation method is not limited to this.
- the reference image may be obtained by taking the square root of the square sum of each wavelength band original image.
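The reference-image step can be sketched directly: equation (6) is an average over the N same-band images, and the text also allows a root-sum-of-squares alternative. The function and parameter names below are illustrative, not from the patent.

```python
import numpy as np

def reference_image(band_stack, method="mean"):
    """Combine N same-band original images into one reference image.

    band_stack: array of shape (N, H, W) holding I_Sep(m, n, x, y)
    for a fixed band number m. "mean" follows equation (6); "rss" is
    the root-sum-of-squares alternative mentioned in the text.
    """
    if method == "mean":
        return band_stack.mean(axis=0)
    if method == "rss":
        return np.sqrt((band_stack ** 2).sum(axis=0))
    raise ValueError(f"unknown method: {method}")

# Averaging N noisy copies suppresses zero-mean random noise while
# keeping the common structure, which is why the reference image is a
# better basis for structure judgment than any single original.
stack = np.ones((4, 8, 8))
ref_mean = reference_image(stack)          # all ones
ref_rss = reference_image(stack, "rss")    # all sqrt(4) = 2
```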
- FIG. 7 is a processing flow of similarity calculation processing according to the present embodiment.
- The similarity calculation unit 330 repeats the processing from step S1302 to step S1308 for each of the divided wavelength bands (steps S1301 and S1309). In this embodiment, the processing is repeated twice, as in the reference image creation processing.
- The similarity calculation unit 330 reads the mth reference image 430 (step S1302). When m = 1, the first reference image I Ref (1, x, y) is used.
- the similarity calculation unit 330 performs the processing from step S1304 to step S1307 on the wavelength band original image 420 of band number m obtained from each of the N original images 410 (steps S1303 and S1308).
- the similarity calculation unit 330 first reads the wavelength band original image 420 of band number m, which is the wavelength band original image I Sep (m, n, x, y) generated from the nth original image 410 (step S1304).
- the similarity calculation unit 330 calculates the local similarity between the read wavelength band original image I Sep (m, n, x, y) and the reference image I Ref (m, x, y).
- The similarity in the x direction is calculated for each pixel of the wavelength band original image I Sep (m, n, x, y), and an x-direction similarity map is created (step S1305).
- the similarity in the x direction of each pixel is calculated, for example, according to the following equation (7).
- L is a correlation coefficient calculation range.
- the correlation coefficient calculation range L is set to the same value as the maximum wavelength of noise defined in advance, for example.
- the calculation of the correlation coefficient in the above equation is intended to express similarity.
- the maximum value of the similarity is 1 and the minimum value is -1.
- Similarly, the similarity calculation unit 330 calculates the similarity in the y direction for each pixel of the wavelength band original image I Sep (m, n, x, y) and creates a y-direction similarity map Similarity Y (m, n, x, y) (step S1306).
- the y-direction similarity map Similarity Y (m, n, x, y) is calculated, for example, according to the following equation (8).
- The similarity calculation unit 330 stores the per-pixel x-direction similarity map Similarity X (m, n, x, y) and y-direction similarity map Similarity Y (m, n, x, y) in the storage device 172 (step S1307). When there is no need to distinguish the x and y directions, they are simply referred to as the similarity map Similarity (m, n, x, y).
- The similarity map may additionally be processed with a filter such as a median filter.
- This filtering is performed under the assumption that the similarity varies spatially in a continuous manner.
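The x-direction similarity of equation (7) can be sketched as a windowed correlation coefficient. Since the equation itself is not reproduced in this text, the code below uses a plain per-pixel Pearson correlation between the band original and the reference over a 1-D window of width 2L+1, which matches the stated properties (maximum 1, minimum -1); the window-handling details are assumptions.

```python
import numpy as np

def local_similarity_x(img, ref, L):
    """Per-pixel x-direction similarity between a wavelength band
    original image and the same-band reference image.

    For each pixel, the Pearson correlation coefficient is computed
    over a horizontal window of half-width L (clipped at the image
    borders). Output values lie in [-1, 1].
    """
    H, W = img.shape
    sim = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            lo, hi = max(0, x - L), min(W, x + L + 1)
            a = img[y, lo:hi] - img[y, lo:hi].mean()
            b = ref[y, lo:hi] - ref[y, lo:hi].mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            sim[y, x] = (a * b).sum() / denom if denom > 0 else 0.0
    return sim

# Identical structure gives similarity 1; inverted structure gives -1.
ramp = np.tile(np.arange(8.0), (4, 1))
sim_same = local_similarity_x(ramp, ramp.copy(), L=2)
sim_opposite = local_similarity_x(ramp, -ramp, L=2)
```

The y-direction map of equation (8) would be the same computation along columns.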
- denoising processing by the denoising unit 340 will be described.
- the noise of each pixel of the original image 410 divided for each wavelength band, that is, each wavelength band original image 420 is removed.
- Noise removal is performed by creating a weight function for each pixel according to the similarity of that pixel.
- The weight function is given a steeper shape for pixels with higher similarity, so that pixels with higher similarity are smoothed less and pixels with lower similarity are smoothed more.
- FIG. 8 is a processing flow of denoising processing according to this embodiment.
- a weighting function is created for each pixel of each wavelength band original image 420 in each of the x direction and the y direction, and filtering processing is performed.
- a pixel number p is assigned to each pixel (x, y).
- the total number of pixels is P (P is an integer of 1 or more).
- the denoising unit 340 repeats the processing from steps S1402 to S1414 for each divided wavelength band (steps S1401 and S1415). In this embodiment, the process is repeated twice as in the reference image creation process.
- the denoising unit 340 repeats the processing from step S1403 to step S1413 for each wavelength band original image group 420 of the same wavelength band (steps S1402 and S1414). Here, repeat N times.
- the denoising unit 340 performs the processing from step S1404 to step S1406 in order for all pixels (steps S1403 and S1407). Here, repeat P times.
- the weight function is created according to the following equation (9) using, for example, the similarity map Similarity (m, n, x, y) and the cutoff wavelength ⁇ cutoff .
- FIG. 9 shows examples of the weight function hDenoise(λ) created according to Equation (9).
- the solid line 501 is the weight function when the similarity is 0 or less, the broken line 502 is the weight function when the similarity is 0.5, and the dotted line 503 is the weight function when the similarity is 0.8.
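Equation (9) itself is not reproduced in this text, so the sketch below assumes a hypothetical Gaussian-shaped weight whose effective width shrinks as the similarity rises: steeper for high-similarity pixels (smoothed less), broader for low-similarity pixels, matching the qualitative behavior of the curves 501 to 503.

```python
# Hypothetical sketch of a similarity-dependent weight function. The Gaussian
# form and the mapping from similarity to width are assumptions, chosen only
# to reproduce the qualitative behavior described in the text.
import math

def h_denoise(lam, similarity, lam_cutoff=3.0):
    """Weight at wavelength lam [pixels]; higher similarity -> steeper falloff."""
    # Effective width narrows as similarity approaches 1 (assumed mapping).
    sigma = lam_cutoff * (1.0 - similarity)
    if sigma <= 0.0:
        return 0.0 if lam > 0 else 1.0
    return math.exp(-(lam / sigma) ** 2)

# Higher similarity gives a smaller weight at the same wavelength,
# i.e. less smoothing of that pixel.
for s in (0.0, 0.5, 0.8):
    print(round(h_denoise(2.0, s), 4))
```
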
- the denoising unit 340 performs filtering processing (smoothing) in the x direction using the calculated weight function (step S1406).
- the denoising unit 340 performs the same processing in the y direction. That is, for each pixel p, the following steps S1409 to S1411 are repeated (steps S1408 and S1412).
- a weight function is created using the read similarity map SimilarityY(m, n, x, y) (step S1410).
- the weighting function is created according to the above formula (9), for example.
- filter processing (smoothing) is performed in the y direction using the created weight function (step S1411).
- the denoising unit 340 stores the obtained image in the storage device 172 as a denoising image deN (m, n) 450 (step S1413).
- FIG. 10 is a processing flow of the image composition processing of the present embodiment.
- the denoising images deN(m, n) 450 generated from the same original image 410 are synthesized to obtain an all-wavelength-band denoising image (synthesized denoising image) deNall(n) for each of the N original images.
- all the synthesized denoising images deNall(n) are then synthesized to finally obtain one composite image 460.
- the synthesis unit 350 executes the processing from steps S1502 to S1505 for each original image 410 that is the source of the denoising image deN (m, n) 450 (steps S1501 and S1506).
- the synthesis unit 350 reads all the denoising images deN(m, n) 450 of the wavelength band original images 420 generated from the same original image 410 (steps S1502, S1503, S1504) and combines them to generate the all-wavelength-band denoising image deNall(n) (step S1505).
- when the wavelength band is divided into two, deN(1, n) and deN(2, n) are read and combined to generate the synthesized denoising image deNall(n).
- the synthesis unit 350 synthesizes all the obtained synthesized denoising images to obtain the composite image 460 (step S1507).
- the synthesis executed by the synthesis unit 350 may be simple addition. If the image group was acquired by a plurality of reception coils 161, for example, the calculation may instead take the square root of the sum of squares.
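The two combination options mentioned above can be sketched as follows; the pixel values are illustrative only.

```python
# A minimal sketch of the two combination options: simple addition, and
# root-sum-of-squares (RSS) combination for images acquired with multiple
# reception coils.
import math

def combine_simple(images):
    """Pixel-wise simple addition of a list of equally sized 2D images."""
    h, w = len(images[0]), len(images[0][0])
    return [[sum(img[y][x] for img in images) for x in range(w)] for y in range(h)]

def combine_rss(images):
    """Pixel-wise square root of the sum of squares (multi-coil case)."""
    h, w = len(images[0]), len(images[0][0])
    return [[math.sqrt(sum(img[y][x] ** 2 for img in images)) for x in range(w)]
            for y in range(h)]

coil1 = [[3.0]]
coil2 = [[4.0]]
print(combine_simple([coil1, coil2])[0][0])  # 7.0
print(combine_rss([coil1, coil2])[0][0])     # 5.0 (sqrt(9 + 16))
```
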
- FIG. 11A shows a profile 601 of an image obtained by simple addition and a profile 602 of an image (composite image 460) obtained by the filtering processing according to the present embodiment.
- the thin line is the profile 601
- the dark line is the profile 602.
- FIG. 11 (b) shows an enlarged view of the dashed line 603 in FIG. 11 (a).
- compared with the profile 601 of the image obtained by simply adding the original images, the profile 602 of the image obtained by the filter processing of this embodiment (composite image 460) keeps the structure while reducing the noise.
- the only variable the operator needs to specify to execute the filter processing of this embodiment is the maximum noise wavelength λNoise, which determines the cutoff wavelength used by the band dividing unit 310.
- the GUI 700 of this embodiment includes a box 701 that accepts the maximum noise wavelength λNoise and a button 702 that accepts the operator's confirmation, and functions as a setting reception unit that accepts the setting of the maximum noise wavelength λNoise.
- the band dividing unit 310 receives the maximum noise wavelength ⁇ Noise via the GUI 700.
- the GUI 700 is displayed on the display device 173 and operated through the input device 174.
- the operator inputs the maximum noise wavelength in the box 701 and presses the button 702 to complete the setting.
- when the band dividing unit 310 detects pressing of the button 702, it accepts the value entered in the box 701 at that time as the maximum noise wavelength λNoise.
- the maximum noise wavelength λNoise to be set is normally 3 [pixels], as described above; for an interpolated image, the value is increased in proportion to the interpolation magnification. The maximum noise wavelength λNoise also serves as a variable for adjusting the degree of smoothing of the filter processing: a larger value is entered to increase the degree of smoothing, and a smaller value to decrease it. Since signal components with wavelengths at or below the entered maximum noise wavelength are removed, the operator may set the value according to the resolution required of the processed image.
- the GUI 700 is intended to let the operator set the maximum noise wavelength, so the configuration is not limited to the above. Other input methods, such as a slider bar, may be used instead of a text box.
- as described above, the image processing unit 230 of this embodiment includes: the band dividing unit 310 that divides each of the plurality of original images 410 for each predetermined wavelength band to generate the plurality of wavelength band original images 420; the reference image creation unit 320 that synthesizes the plurality of wavelength band original images 420 for each wavelength band to create a reference image 430 for each wavelength band; the similarity calculation unit 330 that compares each of the plurality of wavelength band original images 420 with the reference image 430 of the same wavelength band to calculate a similarity map 440 for each wavelength band original image 420; the denoising unit 340 that generates a denoising image 450 from each wavelength band original image 420 by smoothing it using its similarity map 440; and the synthesis unit 350 that synthesizes the denoising images 450 to generate the composite image 460.
- in this embodiment, the reference image 430 is created by synthesizing a plurality of images (the wavelength band original image group 420) on which different random noise is superimposed. The similarity (similarity map 440) calculated by comparing the obtained reference image 430 with each image of the wavelength band original image group 420 is then used as an index of structure. This reduces errors in which random noise is mistaken for structure.
- the original image group 410 is divided by a wavelength band determined from the maximum noise wavelength. For this reason, it is difficult for the calculated similarity to be affected by data in a wavelength band different from the noise wavelength band. Thereby, the noise detection capability can be improved.
- denoising or filtering is performed based on the similarity with the reference image 430. This eliminates the need for a priori assumption (assuming that the structure can be preserved if smoothing is performed using data of the same pixel value level), so that it is difficult to produce an unnatural processing result.
- since the denoising process is performed based on the calculated similarity, there is no need to assume texture directionality or edges. Therefore, a good denoising result can be obtained for images of any shape.
- the filter processing of this embodiment does not require complicated variable settings. Since the operator only has to set a single variable (the maximum noise wavelength), the processing result is easy to anticipate and the filter is easy to use.
- as described above, according to this embodiment, structure determination errors caused by noise are reduced, and the noise superimposed on an image can be removed without losing significant information or producing an unnatural result.
- the band division process may not be performed.
- the band dividing unit 310 may not be provided.
- in this case, the similarity is calculated by comparing the reference image 430 created from the original image group 410 with each image of the original image group 410, and the calculated similarity is used as an index for noise determination.
- even so, determination errors in which noise superimposed on the image is mistaken for a significant signal are reduced.
- since the denoising process is performed using the calculated similarity, it is not necessary to assume texture directionality or edges; a good denoising result can therefore be obtained for images of any shape.
- in the above embodiment, the case where the CPU 171 of the MRI apparatus 100 implements the image processing unit 230 has been described as an example, but the present invention is not limited to this.
- the function of the image processing unit 230 may be constructed on another information processing apparatus capable of transmitting and receiving data to and from the MRI apparatus 100.
- the case where the MRI apparatus 100 acquires the original images to be processed by the image processing unit 230 has been described as an example, but the apparatus that acquires the images is not limited to this.
- other medical image acquisition apparatuses, such as a CT (Computed Tomography) apparatus, may be used.
- the original image may not be a medical image acquired by the medical image acquisition device.
- Next, a second embodiment to which the present invention is applied will be described.
- a flexible and sufficient noise removal effect is obtained by repeating the filtering process of the first embodiment.
- the configuration of the MRI apparatus 100 of this embodiment is basically the same as that of the first embodiment. However, since the above-described repetitive processing is performed, the function of the image processing unit 230 of the present embodiment is different. Hereinafter, the present embodiment will be described focusing on the configuration different from the first embodiment.
- each time the composite image 460 is generated, the image processing unit 230 of this embodiment determines whether it has converged; for this purpose it further includes a convergence determination unit 360 that, when determining that convergence has not occurred, replaces the original images 410 with the synthesized denoising images generated immediately before.
- the convergence determination unit 360 determines whether or not the composite image 460 has converged based on the amount of change from the composite image 460 obtained in the previous iteration.
- when the amount of change becomes smaller than a predetermined value, it is determined that convergence has occurred.
- specifically, the composite image IComp(RepeatNum, x, y) obtained at the RepeatNum-th iteration is compared with the composite image IComp(RepeatNum-1, x, y) obtained at the (RepeatNum-1)-th iteration, and convergence is determined when the following condition (10) is satisfied.
- X is the number of image points in the x direction
- Y is the number of image points in the y direction
- Abs is an absolute value function
- RepeatNum is the number of repetitions.
- Expression (10) represents that the amount of change caused by the iterative processing is 1% or less of the sum of the pixel values of the entire image. Note that the convergence criterion is not limited to this; the maximum amount of change regarded as convergence may be made settable by the user.
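A minimal sketch of this convergence test, assuming the summed absolute change is normalized by the summed pixel values of the current composite image (the exact normalization of Expression (10) is not reproduced in this text):

```python
# Sketch of the convergence test of Expression (10): the iteration is judged
# converged when the summed absolute change between successive composite
# images is at most 1% of the summed pixel values of the whole image.
def has_converged(img_now, img_prev, tol=0.01):
    """img_now, img_prev: 2D images of identical size (lists of lists)."""
    change = sum(abs(a - b) for ra, rb in zip(img_now, img_prev)
                 for a, b in zip(ra, rb))
    total = sum(abs(a) for ra in img_now for a in ra)
    return change <= tol * total

prev = [[100.0, 100.0], [100.0, 100.0]]
curr = [[100.5, 100.0], [100.0, 100.0]]  # total change 0.5 vs total 400.5
print(has_converged(curr, prev))  # True: 0.5 <= 0.01 * 400.5
```
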
- since the above Expression (10) cannot be calculated for the first iteration, the convergence determination unit 360 always determines that the composite image IComp(1, x, y) obtained the first time has not converged.
- FIG. 14 is a process flow for explaining the flow of the filtering process of the present embodiment.
- the band dividing unit 310 performs the band dividing process (step S2101)
- the reference image creating unit 320 performs the reference image creating process (step S2102)
- the similarity calculation unit 330 performs the similarity calculation process (step S2103)
- the denoising unit 340 performs denoising processing (step S2104)
- the synthesizing unit 350 performs synthesizing processing (step S2105).
- the synthesis unit 350 also stores the synthesized denoising image 470 in the storage device 172.
- the convergence determination unit 360 calculates, for example, the above Expression (10), and determines whether or not the obtained composite image 460 has converged (step S2106). If it determines that convergence has occurred, the processing ends. If it determines that it has not, the convergence determination unit 360 replaces the original image 410 group with the synthesized denoising image 470 group calculated immediately before (step S2107), and the processing returns to step S2101. Note that the image replacement may be performed at any point after the synthesized denoising images are generated, and may be performed before the convergence determination.
- as described above, in this embodiment as well, a plurality of original images 410 are synthesized to create the reference image 430, each of the plurality of original images 410 is compared with it to calculate a similarity, and the denoising process is performed using the calculated similarity as an index for noise determination.
- reference image creation, similarity calculation, and denoising processing may be performed for each predetermined wavelength band. Accordingly, as in the first embodiment, the determination error for recognizing noise superimposed on the image as a significant signal is reduced.
- since the denoising process is performed using the calculated similarity, it is not necessary to assume texture directionality or edges; a good denoising result can therefore be obtained for images of any shape.
- the present invention can be applied to various images without adjusting the processing parameters.
- the apparatus that acquires the original image 410 to be processed by the image processing unit 230 is not limited to the MRI apparatus.
- Other medical image acquisition apparatuses may be used.
- any device capable of acquiring an image may be used instead of a medical image acquisition device.
- the configuration of the MRI apparatus 100 of this embodiment is basically the same as that of the first embodiment. However, since the above-described repetitive processing is performed, the function of the image processing unit 230 of the present embodiment is different. Hereinafter, the present embodiment will be described focusing on the configuration different from the first embodiment.
- the image processing unit 230 of the present embodiment includes a wavelength control unit 370 as shown in FIG. 15 in addition to the configuration of the first embodiment.
- the band dividing unit 310 of this embodiment divides the original image 410 into two wavelength band original images 420 using one cutoff wavelength, and the denoising process is performed only on the wavelength band original image 420 of the shorter-wavelength band.
- the wavelength control unit 370 of this embodiment repeatedly updates the cutoff wavelength, initially set to the minimum value of a predetermined wavelength band, by increasing it in predetermined increments within that band, and replaces the original images 410.
- since the thermal noise included in the NMR signal has a substantially uniform amplitude spectrum over the entire wavelength band, the noise wavelength band extends from 0 to infinity.
- however, the noise that is particularly desirable to remove in MRI is short-wavelength noise that obscures fine structure. It is therefore practical to set the wavelength band over which the cutoff wavelength is varied to about 3 to 12 [pixels]. When the image has been interpolated at a high magnification, it is desirable to set a wider wavelength band.
- the setting of the wavelength band to be changed may be performed by an operator, or may be determined in advance and held in the storage device 172. The same applies to the increment when changing.
- FIG. 16 is a process flow for explaining the flow of the filtering process of the present embodiment.
- the wavelength controller 370 sets a noise wavelength band as a range in which the cutoff wavelength is changed (step S3101).
- the setting range is as described above.
- the increment is also set, and the loop counter k and the maximum value K of k are calculated.
- let the minimum value of the noise wavelength band be λmin, the maximum value be λmax, and the increment be Δλ.
- Int (x) is a function that returns the integer part of x.
- the cutoff wavelength when the counter is k is expressed as ⁇ cutoff (k).
- for example, with the wavelength band of about 3 to 12 [pixels] described above and an increment of 1, K becomes 10.
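The counter setup can be sketched as follows. The exact expression for K is assumed here to be the integer part of the band span divided by the increment, plus one, which reproduces the value K = 10 quoted above for the 3 to 12 [pixel] band with increment 1.

```python
# Sketch of the loop-counter setup for the cutoff-wavelength sweep (the exact
# form of the expression for K is an assumption consistent with the text).
def sweep_cutoffs(lam_min, lam_max, d_lam):
    K = int((lam_max - lam_min) / d_lam) + 1
    # cutoff wavelength lambda_cutoff(k) for counter k = 1 .. K
    return [lam_min + (k - 1) * d_lam for k in range(1, K + 1)]

cutoffs = sweep_cutoffs(3.0, 12.0, 1.0)
print(len(cutoffs))             # 10, matching "K becomes 10" in the text
print(cutoffs[0], cutoffs[-1])  # 3.0 12.0
```
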
- the wavelength controller 370 changes the loop counter k from 1 to K one by one, and repeats the processing from step S3103 to S3108 (steps S3102 and S3109).
- Band division section 310 performs band division processing on each original image 410 using cut-off wavelength ⁇ cutoff (k) (step S3103).
- each original image 410 is divided into two wavelength bands using the cutoff wavelength ⁇ cutoff (k).
- the wavelength band original image 420 of the wavelength band at or below the cutoff wavelength λcutoff(k) is taken as the first wavelength band original image ISep(1, n, x, y), and the other as the second wavelength band original image ISep(2, n, x, y).
- the reference image creation unit 320 performs reference image creation processing (step S3104).
- in this embodiment, the denoising process is performed only on the first wavelength band original image ISep(1, n, x, y), that is, the wavelength band original image 420 in the band at or below the cutoff wavelength λcutoff(k).
- accordingly, the reference image 430 is also created only from the first wavelength band original images ISep(1, n, x, y). That is, the reference image creation unit 320 of this embodiment reads and synthesizes all the first wavelength band original images ISep(1, n, x, y) to create the first reference image IRef(1, x, y).
- the similarity calculation unit 330 performs similarity calculation processing (step S3105).
- here, the first reference image IRef(1, x, y) is used, and similarity maps of the first wavelength band original images ISep(1, n, x, y) are created for the x direction and the y direction.
- the denoising unit 340 performs denoising processing on the first wavelength band original images ISep(1, n, x, y) (step S3106). Here too, the denoising process is performed only on each pixel of the first wavelength band original images ISep(1, n, x, y), generating a denoising image of each first wavelength band original image ISep(1, n, x, y) (hereinafter referred to as a first denoising image).
- the composition unit 350 performs image composition processing (step S3107).
- here, the first denoising image and the second wavelength band original image ISep(2, n, x, y) created from the same original image 410 are synthesized to obtain a synthesized denoising image 470 for each original image.
- all the synthesized denoising images are synthesized to obtain a synthesized image 460.
- the synthesis unit 350 also stores the synthesized denoising image in the storage device 172.
- the wavelength control unit 370 replaces the original image 410 group with the obtained synthesized denoising image 470 group (step S3108).
- in step S3107, only the synthesized denoising images 470 may be calculated, and after all the loop processing is completed, the synthesized denoising images 470 may be synthesized to obtain the composite image 460.
- in this embodiment as well, the similarity is calculated by comparing the reference image 430 created from a plurality of original images 410 with each original image 410, and the denoising process is performed using the calculated similarity as an index for noise determination. For this reason, the same effects as in the first embodiment can be obtained.
- the apparatus that acquires the original image 410 to be processed by the image processing unit 230 is not limited to the MRI apparatus.
- Other medical image acquisition apparatuses may be used.
- any device capable of acquiring an image may be used instead of a medical image acquisition device.
- the loop processing of this embodiment may be combined with the convergence determination processing of the second embodiment. In this case, a convergence determination unit 360 is further provided that determines whether the image has converged each time the composite image 460 is generated and, when determining that it has not, replaces the original images 410 with the synthesized denoising images 470. The processing flow in this case is shown in FIG.
- in this case, the processing from step S3103 to step S3108 of this embodiment is performed for each cutoff wavelength, and then the convergence determination processing (step S2106) of the second embodiment is performed. If the convergence determination is negative, the processing from step S3103 is repeated. After convergence is determined, the cutoff wavelength is updated (steps S3102 and S3109).
- Next, a fourth embodiment to which the present invention is applied will be described.
- in the first and second embodiments, a plurality of original images are acquired in advance and the image processing described above is performed.
- in this embodiment, by contrast, data from which one image can be generated is acquired using the MRI apparatus, and a plurality of original images are created from that data.
- the MRI apparatus 100 of this embodiment is basically the same as the MRI apparatus 100 of the first embodiment. However, in this embodiment, a plurality of original images are created from the data acquired for one image. For this reason, the image reconstruction unit 220 of this embodiment generates a plurality of original images from the data for one image.
- the present embodiment will be described focusing on the configuration different from the first embodiment.
- the image reconstruction unit 220 of this embodiment includes a missing data generation unit 221 that generates, from one set of raw data acquired by the imaging unit 210, a plurality of missing raw data with different missing regions; an estimation data generation unit 222 that generates estimation data from each missing raw data; and an original image generation unit 223 that reconstructs an original image from each estimation data.
- the raw data 801 is obtained by arranging NMR signals for one image acquired by the imaging unit 210 in the k space.
- the missing data generation unit 221 generates a plurality of missing raw data 802 from the acquired raw data 801.
- the missing raw data 802 is obtained by zeroing the data in a part of the k-space in which the raw data 801 is arranged. A plurality of different missing raw data 802 are generated by changing the regions to be zeroed.
- the region to be zeroed should be less than half of all the raw data. It is also desirable not to zero the vicinity of the center of the k-space.
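As an illustration of these constraints, the sketch below generates two missing raw-data copies from one 1D k-space line, zeroing a different region in each copy while leaving the central samples intact. The 1D layout and the specific region choices are assumptions made for illustration.

```python
# Illustrative sketch: multiple "missing" raw-data sets from one k-space
# acquisition. Each copy zeroes a different region covering less than half of
# the samples, and the vicinity of the k-space center is left untouched.
def make_missing_raw(raw, regions):
    """raw: 1D list of k-space samples; regions: list of (start, stop) index
    ranges, each zeroed in its own copy. Each range should cover less than
    half the data and avoid the k-space center."""
    copies = []
    for start, stop in regions:
        out = list(raw)
        for i in range(start, stop):
            out[i] = 0.0  # zero this region in this copy only
        copies.append(out)
    return copies

raw = [float(i + 1) for i in range(16)]
# Two different missing regions, both away from the central samples.
copies = make_missing_raw(raw, regions=[(0, 4), (12, 16)])
print(copies[0][:4])   # [0.0, 0.0, 0.0, 0.0]
print(copies[1][-4:])  # [0.0, 0.0, 0.0, 0.0]
```
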
- the estimated data generation unit 222 uses a known k-space estimation technique to fill the zeroed regions of the missing raw data 802 groups with data to generate the estimated raw data 803 groups.
- as the k-space estimation technique, for example, the technique described in Non-Patent Document 2, which fills the data of a missing region using the data of the non-missing regions, is used.
- the original image generation unit 223 generates an original image group 804 by Fourier transforming each estimated raw data 803 group and reconstructing the image.
- each image of the original image group 804 of this embodiment is created from the raw data 801 for one image with a different region missing and estimated, and therefore carries different noise. Since each original image 804 has different noise, noise and significant signal can be distinguished by comparison with a reference image created from the original image group 804.
- the subsequent processing performed using the created original image 804 is the same as that in any of the first embodiment, the second embodiment, and the third embodiment.
- the same effects as those of the above embodiments can be obtained in the filtering process by the image processing unit 230. Furthermore, in this embodiment, the original image group 410 can be obtained with a single receiving coil 161 or with a single measurement.
- the Gaussian filter described in Expression (4) and Expression (5) is used for dividing the wavelength band, but the present invention is not limited to this.
- a spline filter or a filter having an arbitrary amplitude transfer characteristic can be used.
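A sketch of a two-band split in this spirit: a 1D smoothing low-pass and its complement as the high-pass band, so the two bands sum back to the original. A simple Gaussian kernel is assumed here; the document's Expressions (4) and (5) are not reproduced in this text.

```python
# Illustrative two-band split: Gaussian low-pass plus complementary high-pass.
# Kernel radius and replicate-border handling are assumptions.
import math

def gaussian_lowpass(signal, sigma=1.0):
    radius = int(3 * sigma)
    kernel = [math.exp(-(i / sigma) ** 2 / 2) for i in range(-radius, radius + 1)]
    norm = sum(kernel)
    n = len(signal)
    out = []
    for x in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            xx = min(max(x + j - radius, 0), n - 1)  # replicate border samples
            acc += w * signal[xx]
        out.append(acc / norm)
    return out

def split_bands(signal, sigma=1.0):
    low = gaussian_lowpass(signal, sigma)
    high = [s - l for s, l in zip(signal, low)]  # complementary band
    return low, high

sig = [0.0, 0.0, 1.0, 0.0, 0.0]
low, high = split_bands(sig)
# The two bands reconstruct the original exactly by construction.
print([round(l + h, 6) for l, h in zip(low, high)])
```
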
- the similarity is calculated using Expression (7) and Expression (8), but is not limited thereto.
- the sum of squares of the deviation between the reference image and the wavelength band divided image may be used.
- the weight function hDenoise(λ) used for filtering is calculated according to Equation (9), but is not limited to this.
- a median filter, or outlier processing based on comparison among the image group, may be used instead.
- the original image to be processed in each of the above embodiments is not limited to a two-dimensional image, and may be a three-dimensional image.
- the degree of smoothing may be changed for each dimension.
- the degree of smoothing can be changed by changing the setting of the maximum wavelength of noise.
- the definition of the maximum wavelength of noise may not be used for specifying the degree of smoothing.
- the degree of smoothing may be adjusted by multiplying the similarities Similarity X and Similarity Y calculated in the similarity calculation processing by a coefficient.
- 100 MRI apparatus, 101 subject, 120 static magnetic field generation system, 130 gradient magnetic field generation system, 131 gradient magnetic field coil, 132 gradient magnetic field power supply, 140 sequencer, 150 high frequency magnetic field generation system (transmission system), 151 transmission coil, 152 high frequency oscillator, 153 modulator, 154 high frequency amplifier, 160 high frequency magnetic field detection system (reception system), 161 reception coil, 162 signal amplifier, 163 quadrature phase detector, 164 A/D converter, 170 control processing system, 171 CPU, 172 storage device, 173 display device, 174 input device, 210 imaging unit, 220 image reconstruction unit, 221 missing data generation unit, 222 estimation data generation unit, 223 original image generation unit, 230 image processing unit, 310 band dividing unit, 320 reference image creation unit, 330 similarity calculation unit, 340 denoising unit, 350 synthesis unit, 360 convergence determination unit, 370 wavelength control unit, 410 original image, 420 wavelength band original image, 430 reference image, 440 similarity map, 450 denoising image, 460 composite image
Description
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. In all the drawings for explaining the embodiments of the invention, elements having the same function are denoted by the same reference numerals unless otherwise specified, and repeated description thereof is omitted.
First, an overview of an example of the MRI apparatus of this embodiment will be described with reference to FIG. 1. FIG. 1 is a block diagram showing the overall configuration of one embodiment of the MRI apparatus of this embodiment.
N and M are integers of 1 or more.
As described above, the band dividing unit 310 performs band division processing that divides the original image group 410 for each predetermined wavelength band. In this embodiment, the case where a filter divides the images into two groups, one having the band above a predetermined cutoff wavelength and one having the band at or below the cutoff wavelength, is described as an example. The band dividing unit of this embodiment realizes this with a low-pass filter and a high-pass filter set to the same cutoff wavelength. The number of wavelength bands to divide into is not limited to two.
Next, the reference image creation processing by the reference image creation unit 320 will be described in detail. In the reference image creation processing, a reference image that is provisionally regarded as correct is created for each wavelength band. In this embodiment, the reference image 430 is created using the wavelength band original images 420 of each wavelength band created from each original image 410. FIG. 6 is a processing flow of the reference image creation processing of this embodiment.
Next, the similarity calculation processing by the similarity calculation unit 330 will be described. In the similarity calculation processing, the similarity to the reference image 430 is calculated for each wavelength band original image 420. The similarity is calculated for each pixel in the x direction and the y direction. FIG. 7 is a processing flow of the similarity calculation processing of this embodiment.
Next, a third embodiment to which the present invention is applied will be described. In this embodiment, the filter processing is repeated while changing the cutoff wavelength used for the band division processing.
Claims (15)
- 複数の原画像を合成して参照画像を作成し、前記複数の原画像それぞれについて、作成した前記参照画像と比較して類似度を算出し、算出した前記類似度に基づいて前記複数の原画像を平滑化し、平滑化後の前記複数の原画像を合成して合成画像を得る画像処理部を備えること
を特徴とする画像処理装置。 - 前記画像処理部は、
前記複数の原画像それぞれを、予め定めた波長帯域毎に分割し、複数の波長帯域原画像を生成する帯域分割部と、
前記複数の波長帯域原画像を、前記波長帯域毎に合成して前記波長帯域毎の前記参照画像を作成する参照画像作成部と、
前記複数の波長帯域原画像それぞれを、同一の波長帯域の前記参照画像と比較して、前記波長帯域原画像毎の前記類似度を算出する類似度算出部と、
前記複数の波長帯域原画像それぞれを、当該波長帯域原画像の前記類似度を用いて平滑化することにより、各前記波長帯域原画像からそれぞれデノイズ画像を生成するデノイズ部と、
前記デノイズ画像を合成して前記合成画像を生成する合成部と、を備えること
を特徴とする請求項1記載の画像処理装置。 - 前記帯域分割部は、予め定めたノイズの最大波長からカットオフ波長を算出し、当該カットオフ波長により、前記複数の原画像それぞれから、前記波長帯域原画像を生成すること を特徴とする請求項2記載の画像処理装置。
- 前記画像処理部は、前記合成画像が生成される毎に収束しているか否かを判別し、否と判別する場合、前記合成画像を前記原画像に置き変える収束判定部をさらに備え、
前記帯域分割部は、前記合成画像が前記原画像に置き換えられる毎に、置き換え後の原画像を分割して新たな前記波長帯域原画像を生成し、
前記参照画像作成部は、前記波長帯域原画像が生成される毎に前記参照画像を作成し、 前記類似度算出部は、前記参照画像が作成される毎に前記類似度を算出し、
前記デノイズ部は、前記類似度が算出される毎に前記デノイズ画像を生成し、
前記合成部は、前記デノイズ画像が生成される毎に、同一の前記原画像から生成されたデノイズ画像を合成して合成デノイズ画像を生成し、生成した全ての合成デノイズ画像を合成して前記合成画像を生成し、
前記収束判定部は、否と判別する場合、前記原画像を直前に生成された前記合成デノイズ画像に置き換えること
を特徴とする請求項2記載の画像処理装置。 - 前記画像処理部は、予め定めた波長帯域内で、前記波長帯域内の最小値に設定された前記カットオフ波長を予め定めた増分ずつ増加させて更新することを繰り返すとともに、前記原画像を置き換える波長制御部をさらに備え、
前記帯域分割部は、前記カットオフ波長が設定または更新される毎に、当該カットオフ波長を用いて前記置き換え後の原画像を2つの波長帯域に分割し、前記カットオフ波長以下の波長帯域の第一の波長帯域原画像と当該カットオフ波長より大きい波長帯域の第二の波長帯域原画像とを生成し、
前記参照画像作成部は、前記第一の波長帯域原画像が生成される毎に当該第一の波長帯域原画像から前記参照画像を作成し、
前記類似度算出部は、前記参照画像が作成される毎に前記第一の波長帯域原画像の前記類似度を算出し、
前記デノイズ部は、前記類似度が算出される毎に前記第一の波長帯域原画像の前記デノイズ画像を生成し、
前記合成部は、前記デノイズ画像が生成される毎に、同一の前記原画像から生成された前記デノイズ画像と前記第二の波長帯域画像とを合成して合成デノイズ画像を生成し、生成した全ての合成デノイズ画像を合成して前記合成画像を生成し、
前記波長制御部は、前記カットオフ波長を更新する際、前記原画像を、直前に生成された前記合成デノイズ画像に置き換えること
を特徴とする請求項2記載の画像処理装置。 - 前記画像処理部は、前記合成画像が生成される毎に収束しているか否かを判別し、否と判別する場合、前記原画像を前記合成デノイズ画像に置き変える収束判定部をさらに備えること
を特徴とする請求項5記載の画像処理装置。 - 前記画像処理部は、前記ノイズの最大波長の設定を受け付ける設定受付部をさらに備えること
を特徴とする請求項3記載の画像処理装置。 - 前記類似度は、画素毎の、前記参照画像との局所的な類似度であって、当該類似度から算出される前記平滑化に用いる重み関数を、当該類似度が高いほど急峻な形状とするものであること
を特徴とする請求項2記載の画像処理装置。 - パルスシーケンスに従って、静磁場発生系、傾斜磁場発生系、高周波磁場発生系および高周波磁場検出系を備える計測系の動作を制御して計測した核磁気共鳴信号をk空間に配置し、ローデータを得る撮像部と、
1の前記ローデータから複数の原画像を再構成する画像再構成部と、
請求項1記載の画像処理装置と、を備えること
を特徴とする磁気共鳴イメージング装置。 - 前記画像再構成部は、
前記1のローデータから欠損領域がそれぞれ異なる複数の欠損ローデータを生成する欠損データ生成部と、
前記各欠損ローデータから推定データを生成する推定データ生成部と、
各推定データから前記原画像を再構成する原画像生成部と、を備えること
を特徴とする請求項9記載の磁気共鳴イメージング装置。 - パルスシーケンスに従って、静磁場発生系、傾斜磁場発生系、高周波磁場発生系および高周波磁場検出系を備える計測系の動作を制御して計測した核磁気共鳴信号をk空間に配置し、ローデータを得る撮像部と、
1の前記ローデータから1の原画像を再構成する画像再構成部と、
請求項1記載の画像処理装置と、を備え、
前記高周波磁場検出系は、複数の受信コイルを備え、
前記撮像部は、前記計測した核磁気共鳴信号を前記受信コイル毎に前記k空間に配置することにより、複数の前記ローデータを得、
前記画像再構成部は、各前記ローデータからそれぞれ原画像を再構成することにより、前記複数の原画像を得ること
を特徴とする磁気共鳴イメージング装置。 - パルスシーケンスに従って、静磁場発生系、傾斜磁場発生系、高周波磁場発生系および高周波磁場検出系を備える計測系の動作を制御して計測を行い、得られた核磁気共鳴信号をk空間に配置し、ローデータを得る撮像部と、
1の前記ローデータから1の原画像を再構成する画像再構成部と、
請求項1記載の画像処理装置と、を備え、
前記撮像部は、同じ撮像条件で前記計測を繰り返すことにより複数の前記ローデータを得、
前記画像再構成部は、各前記ローデータからそれぞれ原画像を再構成することにより、前記複数の原画像を得ること
を特徴とする磁気共鳴イメージング装置。 - 複数の原画像を、それぞれ、波長帯域毎に分割し、複数の波長帯域原画像を生成する帯域分割処理と、
a reference image creation process that synthesizes the plurality of wavelength-band original images for each wavelength band to create a reference image for each wavelength band;
a similarity calculation process that compares each of the plurality of wavelength-band original images with the reference image of the same wavelength band to calculate a similarity for each wavelength-band original image;
a denoising process that generates a denoised image from each wavelength-band original image by smoothing each of the plurality of wavelength-band original images based on the similarity of that wavelength-band original image; and
a synthesis process that synthesizes the denoised images to generate a composite image;
an image processing method characterized by executing the above. - A convergence determination process is further executed that determines, each time the composite image is generated, whether convergence has been reached, replaces the original images if not, and causes the band division process, the reference image creation process, the similarity calculation process, the denoising process, and the synthesis process to be executed until convergence is determined,
in the synthesis process, each time the denoised images are generated, the denoised images generated from the same original image are synthesized to generate a composite denoised image, and all of the generated composite denoised images are synthesized to generate the composite image, and
in the convergence determination process, the original image is replaced with the composite denoised image;
the image processing method according to claim 13, characterized by the above. - A process of setting the minimum value within a predetermined wavelength band as the cutoff wavelength;
a band division process that divides each of a plurality of original images into two wavelength bands using the cutoff wavelength, generating a first wavelength-band original image in the wavelength band at or below the cutoff wavelength and a second wavelength-band original image in the wavelength band above the cutoff wavelength;
a reference image creation process that synthesizes the first wavelength-band original images to create a reference image;
a similarity calculation process that compares each first wavelength-band original image with the reference image to calculate a similarity;
a denoising process that generates a denoised image from each first wavelength-band original image by smoothing it based on the similarity of that first wavelength-band original image;
a synthesis process that synthesizes the denoised image generated from the same original image with the second wavelength-band original image to generate a composite denoised image for each original image, and synthesizes all of the generated composite denoised images to generate a composite image; and
a process of replacing the original images with the composite denoised images, increasing the cutoff wavelength by a predetermined increment, and repeating the band division process, the reference image creation process, the similarity calculation process, the denoising process, and the synthesis process until the cutoff wavelength exceeds the accepted range;
an image processing method characterized by executing the above.
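The cutoff-wavelength sweep described in the final method claim can be illustrated with a short sketch. This is a hypothetical reading, not the patented implementation: a separable box filter stands in for the band division, a pixel-wise mean of the detail bands stands in for the reference-image synthesis, and a Gaussian-of-difference weight stands in for the similarity-driven smoothing. The names and parameters (`box_blur`, `denoise_sweep`, `sigma`, `cutoffs`) are illustrative assumptions.

```python
import numpy as np

def box_blur(img, size):
    """Separable box filter; a crude stand-in for the band-splitting filter."""
    k = np.ones(size) / size
    img = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, img)

def band_split(img, cutoff):
    """Split an image at a cutoff wavelength: detail band (<= cutoff) and smooth band (> cutoff)."""
    low = box_blur(img, cutoff)   # second band: wavelengths above the cutoff
    return img - low, low         # first band (detail, holds the noise), second band

def denoise_sweep(images, cutoffs, sigma=0.1):
    """Iterate over increasing cutoff wavelengths, denoising only the detail band."""
    images = [np.asarray(im, dtype=float) for im in images]
    for cutoff in cutoffs:
        firsts, seconds = zip(*(band_split(im, cutoff) for im in images))
        ref = np.mean(firsts, axis=0)  # reference image: synthesis of first bands
        composites = []
        for first, second in zip(firsts, seconds):
            # Per-pixel similarity to the reference; higher similarity gives a
            # steeper weight, so the pixel keeps more of its own unsmoothed value.
            sim = np.exp(-((first - ref) ** 2) / (2 * sigma ** 2))
            denoised = sim * first + (1.0 - sim) * box_blur(first, cutoff)
            composites.append(denoised + second)  # recombine with the second band
        images = composites  # replace the originals and repeat with the next cutoff
    return np.mean(images, axis=0)  # final composite image
```

With several noisy acquisitions of the same scene, each pass denoises only the band at or below the current cutoff wavelength, so structure at longer wavelengths passes through untouched; this mirrors the claims' use of an upper bound on the noise wavelength to limit the sweep.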
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/770,980 US9830687B2 (en) | 2013-02-28 | 2014-02-18 | Image processing device, magnetic resonance imaging apparatus and image processing method |
JP2015502869A JP6300782B2 (ja) | 2013-02-28 | 2014-02-18 | Image processing device, magnetic resonance imaging apparatus, and image processing method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013038135 | 2013-02-28 | ||
JP2013-038135 | 2013-02-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014132830A1 true WO2014132830A1 (ja) | 2014-09-04 |
Family
ID=51428102
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/053679 WO2014132830A1 (ja) | 2014-02-18 | Image processing device, magnetic resonance imaging apparatus, and image processing method |
Country Status (3)
Country | Link |
---|---|
US (1) | US9830687B2 (ja) |
JP (1) | JP6300782B2 (ja) |
WO (1) | WO2014132830A1 (ja) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9360543B2 (en) * | 2012-04-19 | 2016-06-07 | Regents Of The University Of Minnesota | System and method for parallel magnetic resonance image reconstruction using digital beamforming |
WO2017217395A1 (ja) * | 2016-06-14 | 2017-12-21 | 日本電気株式会社 | Image processing device, image processing method, and program recording medium |
CN107918929B (zh) * | 2016-10-08 | 2019-06-21 | 杭州海康威视数字技术股份有限公司 | Image fusion method, apparatus, and system |
CN111278363B (zh) * | 2017-10-16 | 2022-07-22 | 北京深迈瑞医疗电子技术研究院有限公司 | Ultrasound imaging device and system, and image enhancement method for contrast-enhanced ultrasound imaging |
CN107886475A (zh) * | 2017-12-11 | 2018-04-06 | 奕响(大连)科技有限公司 | Single-channel image similarity determination method |
CN108805840B (zh) * | 2018-06-11 | 2021-03-26 | Oppo(重庆)智能科技有限公司 | Image denoising method, apparatus, terminal, and computer-readable storage medium |
JP7302988B2 (ja) * | 2019-03-07 | 2023-07-04 | 富士フイルムヘルスケア株式会社 | Medical imaging apparatus, medical image processing apparatus, and medical image processing program |
DE102019215460A1 (de) * | 2019-10-09 | 2021-04-15 | Siemens Healthcare Gmbh | Method and apparatus for noise reduction of image recordings |
JP2023160661A (ja) * | 2022-04-22 | 2023-11-02 | キヤノン株式会社 | Image processing apparatus, image processing method, and computer program |
CN114943639B (zh) * | 2022-05-24 | 2023-03-28 | 北京瑞莱智慧科技有限公司 | Image acquisition method, related apparatus, and storage medium |
CN116645661B (zh) * | 2023-07-27 | 2023-11-14 | 深圳市青虹激光科技有限公司 | Anti-duplicate-code detection method and system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2002086821A1 (fr) * | 2001-04-19 | 2002-10-31 | Kabushiki Kaisha Toshiba | Image processing method and device |
JP2003061964A (ja) * | 2001-08-24 | 2003-03-04 | Toshiba Corp | Ultrasonic diagnostic apparatus |
JP2003190148A (ja) * | 2001-10-16 | 2003-07-08 | Toshiba Corp | Method and apparatus for calculating an index relating to local blood flow dynamics |
JP2007301118A (ja) * | 2006-05-11 | 2007-11-22 | Hitachi Ltd | Magnetic resonance measurement apparatus |
JP2010181951A (ja) * | 2009-02-03 | 2010-08-19 | Mitsubishi Electric Corp | Image processing device and image processing program |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3847512B2 (ja) * | 2000-02-07 | 2006-11-22 | 株式会社日立メディコ | Magnetic resonance imaging apparatus |
JP2010161521A (ja) * | 2009-01-07 | 2010-07-22 | Nec Corp | Image processing device, imaging device, image blur correction method, and program |
JP5917054B2 (ja) * | 2011-09-12 | 2016-05-11 | キヤノン株式会社 | Imaging apparatus, image data processing method, and program |
- 2014-02-18 JP JP2015502869A patent/JP6300782B2/ja active Active
- 2014-02-18 WO PCT/JP2014/053679 patent/WO2014132830A1/ja active Application Filing
- 2014-02-18 US US14/770,980 patent/US9830687B2/en active Active
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019143748A1 (en) * | 2018-01-19 | 2019-07-25 | Bae Systems Information And Electronic Systems Integration Inc. | Methods for image denoising and deblurring |
US10643313B2 (en) | 2018-01-19 | 2020-05-05 | Bae Systems Information And Electronic Systems Integration Inc. | Methods for image denoising and deblurring |
JP2020099627A (ja) * | 2018-12-25 | 2020-07-02 | キヤノンメディカルシステムズ株式会社 | Medical image processing apparatus, medical image processing method, and program |
JP7186604B2 (ja) | 2018-12-25 | 2022-12-09 | キヤノンメディカルシステムズ株式会社 | Medical image processing apparatus, medical image processing method, and program |
CN111049989A (zh) * | 2019-12-26 | 2020-04-21 | 维沃移动通信有限公司 | Image display method and electronic device |
Also Published As
Publication number | Publication date |
---|---|
US20160012569A1 (en) | 2016-01-14 |
US9830687B2 (en) | 2017-11-28 |
JPWO2014132830A1 (ja) | 2017-02-02 |
JP6300782B2 (ja) | 2018-03-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6300782B2 (ja) | Image processing device, magnetic resonance imaging apparatus, and image processing method | |
JP6289664B2 (ja) | Magnetic resonance imaging apparatus, quantitative susceptibility mapping method, computer, susceptibility distribution calculation method, and susceptibility distribution calculation program | |
JP5902317B2 (ja) | Magnetic resonance imaging apparatus and quantitative susceptibility mapping method | |
JP5843876B2 (ja) | Magnetic resonance imaging apparatus and susceptibility-weighted image generation method | |
JP6227710B2 (ja) | Magnetic resonance imaging apparatus and flip angle determination method | |
JP5815722B2 (ja) | Magnetic resonance imaging apparatus and magnetic resonance imaging method | |
JP5394374B2 (ja) | Magnetic resonance imaging apparatus and blood vessel image acquisition method | |
JP6744764B2 (ja) | Diagnostic imaging apparatus and image acquisition method | |
WO2017221654A1 (ja) | Magnetic resonance imaging apparatus, image processing apparatus, and diffusion-weighted image calculation method | |
JP2018198682A (ja) | Magnetic resonance imaging apparatus and magnetic resonance image processing method | |
US10132902B2 (en) | Intrinsic navigation from velocity-encoding gradients in phase-contrast MRI | |
JP6608764B2 (ja) | Magnetic resonance imaging apparatus, magnetic resonance imaging method, and susceptibility calculation program | |
JPWO2016021603A1 (ja) | Magnetic resonance imaging apparatus and magnetic resonance imaging method | |
WO2016017385A1 (ja) | Magnetic resonance imaging apparatus and image reconstruction method | |
JP6783619B2 (ja) | Magnetic resonance imaging apparatus and image analysis method | |
WO2015170394A1 (ja) | Imaging apparatus, image processing apparatus, and image processing method | |
US8299790B2 (en) | Magnetic resonance method control device and system for imaging a volume segment of a subject | |
WO2016125572A1 (ja) | Magnetic resonance imaging apparatus and magnetic resonance imaging method | |
JP2018029656A (ja) | Magnetic resonance imaging apparatus and image reconstruction method | |
JP2018068954A (ja) | Magnetic resonance imaging apparatus and image processing method | |
JP2011031058A (ja) | Nuclear magnetic resonance imaging apparatus | |
JP2018079147A (ja) | Magnetic resonance imaging apparatus and image processing method | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14757749; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 2015502869; Country of ref document: JP; Kind code of ref document: A |
WWE | Wipo information: entry into national phase | Ref document number: 14770980; Country of ref document: US |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 14757749; Country of ref document: EP; Kind code of ref document: A1 |