WO2022091869A1 - Medical image processing apparatus, medical image processing method, and program - Google Patents
Medical image processing apparatus, medical image processing method, and program
- Publication number
- WO2022091869A1 (PCT/JP2021/038600, JP2021038600W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- resolution
- medical image
- learning
- noise
- amount
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5258—Devices using data or image processing specially adapted for radiation diagnosis involving detection or reduction of artifacts or noise
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4046—Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/60—Image enhancement or restoration using machine learning, e.g. neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10116—X-ray image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20216—Image averaging
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Definitions
- the present invention relates to a medical image processing apparatus, a medical image processing method and a program.
- Diagnosis and treatment based on X-ray radiography are actively performed in the medical field, and digital image diagnosis using X-ray images captured with a flat panel sensor (hereinafter referred to as a sensor) is widely used worldwide. Since the sensor can immediately convert its output into an image, it can capture not only still images but also moving images. Furthermore, the resolution of sensors has been increased, making it possible to acquire more detailed information.
- The resolution may be reduced to obtain a radiographic image.
- The X-ray dose per pixel is increased by treating the output data of a plurality of sensor pixels as one pixel.
- As a result, the total X-ray irradiation dose can be suppressed, and the exposure dose to the subject can be reduced.
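- As an illustration of this pixel-binning idea, the following sketch (hypothetical NumPy code, not taken from the patent) sums 2×2 blocks of sensor output into one pixel, trading resolution for signal per pixel.

```python
import numpy as np

def bin_2x2(sensor_output: np.ndarray) -> np.ndarray:
    """Treat each 2x2 block of sensor pixels as one output pixel by summing.

    Summing four detector outputs roughly quadruples the X-ray signal per
    output pixel, so a comparable per-pixel S/N can be reached with a lower
    total dose. Height and width are assumed to be even.
    """
    h, w = sensor_output.shape
    return sensor_output.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# Example: a 2048x2048 readout becomes a 1024x1024 low-resolution image.
raw = np.random.poisson(lam=50.0, size=(2048, 2048)).astype(np.float64)
binned = bin_2x2(raw)
```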
- Super-resolution processing is known as a process for restoring the detailed information of a low-resolution image (increasing its resolution).
- Super-resolution processing has long been known as a method of increasing resolution from a plurality of low-resolution images, or of associating the features of a low-resolution image with the features of a high-resolution image and increasing the resolution based on that association.
- machine learning has come to be used as a method of associating features.
- a configuration in which supervised learning is performed using a convolutional neural network (hereinafter referred to as CNN) is rapidly becoming widespread due to its high performance (Patent Document 2).
- Super-resolution processing using CNN restores the detailed information of the input low-resolution image using the learning parameters created by supervised learning.
- Super-resolution processing is also applied to medical images.
- Japanese Patent No. 4529804; Japanese Patent No. 6276901; Japanese Unexamined Patent Publication No. 2020-141908
- Super-resolution processing using a CNN takes a low-resolution image as input, performs inference, and outputs a super-resolution image as the inference result.
- the high-resolution image is the correct image during learning. Therefore, a plurality of high-resolution images and low-resolution images are prepared as a set as learning data.
- The technique can be applied to medical images by preparing medical images as the training images.
- However, in a radiographic image the signal-to-noise ratio (S/N) is low in low-dose regions, and noise is dominant. Therefore, when a CNN is trained on such radiographic images, it learns to restore not only the structure of the subject, which is the signal component, but also the noise component.
- the present invention has been made in view of the above problems, and an object of the present invention is to construct a learning model capable of outputting a medical image having improved resolution while reducing noise.
- The objects of the present invention are not limited to the above-mentioned object; achieving actions and effects that are derived from each configuration shown in the embodiments for carrying out the invention described later, and that cannot be obtained by conventional techniques, can also be positioned as another object of the present invention.
- The medical image processing apparatus is characterized by comprising an acquisition means that performs resolution conversion on a medical image of a first resolution that has been subjected to noise reduction processing and thereby acquires a medical image of a second resolution lower than the first resolution, and a learning means that trains a learning model using training data including the noise-reduced medical image of the first resolution and the medical image of the second resolution.
- a radiographic image is used as an example of a medical image
- A radiographic image obtained by plain X-ray radiography is used as an example of a radiographic image.
- the medical image applicable to this embodiment is not limited to this, and other medical images can be suitably applied.
- It may be a medical image captured by a CT apparatus, an MRI apparatus, a three-dimensional ultrasonic imaging apparatus, a photoacoustic tomography apparatus, a PET/SPECT apparatus, an OCT apparatus, a digital radiography apparatus, or the like.
- A learning model is constructed by supervised learning using a convolutional neural network, with teacher data in which a low-resolution medical image is the input data and a high-resolution medical image is the correct (ground-truth) data.
- In the following, the learning model is described as a CNN. However, it is not always necessary to use a CNN for learning; any machine learning method that can construct a learning model capable of outputting a medical image with improved resolution while reducing noise may be used.
- The medical image processing apparatus performs learning using a high-resolution radiographic image subjected to noise reduction processing and a low-resolution radiographic image generated from that noise-reduced high-resolution radiographic image. It is thereby characterized by constructing a learning model that generates a high-resolution image from an input radiographic image.
- FIG. 1 shows a configuration diagram of the medical image processing apparatus 100 according to the present invention.
- The medical image processing device 100 includes an input unit 101, a noise reduction unit 102, a resolution conversion unit 103, and a machine learning unit 104.
- the input unit 101 acquires a radiographic image from an external device and outputs it as a high-resolution radiographic image.
- the noise reduction unit 102 takes a high-resolution radiographic image as an input and outputs a high-resolution radiographic image with noise reduced.
- the resolution conversion unit 103 inputs a high-resolution radiographic image with reduced noise and performs reduction processing to output a low-resolution radiographic image with reduced noise.
- the machine learning unit 104 inputs a high-resolution radiographic image with reduced noise and a low-resolution radiographic image with reduced noise, learns the super-resolution processing CNN, and updates the parameters of the CNN.
- FIGS. 2A and 2B are diagrams showing an example of a hardware configuration when the functional configuration of the medical image processing apparatus 100 of FIG. 1 is realized by using hardware.
- the control PC 201 and the X-ray sensor 202 are connected by a Gigabit Ethernet 204.
- The signal line may be a CAN (Controller Area Network) bus or an optical fiber instead of Gigabit Ethernet.
- An X-ray generator 203, a display unit 205, a storage unit 206, a network interface unit 207, an ion chamber 210, and an X-ray control unit 211 are connected to the Gigabit Ethernet 204.
- the control PC 201 has a configuration in which a CPU (central processing unit) 2012, a RAM (Random Access Memory) 2013, a ROM (Read Only Memory) 2014, and a storage unit 2015 are connected to the bus 2011. Then, the input unit 208 is connected to the control PC 201 by USB or PS / 2, and the display unit 209 is connected by DisplayPort or DVI. A command is sent to the X-ray sensor 202, the display unit 205, and the like via the control PC 201. In the control PC 201, the processing contents for each shooting mode are stored in the storage unit 2015 as a software module, read into the RAM 2013 by an instruction means (not shown), and executed. The processed image is sent to the storage unit 2015 in the control PC or the storage unit 206 outside the control PC and stored.
- the learning PC 221 has a configuration in which a CPU (central processing unit) 2212, a RAM (Random Access Memory) 2213, a ROM (Read Only Memory) 2214, and a storage unit 2215 are connected to the bus 2211.
- the input unit 222 is connected to the learning PC 221 by USB or PS/2
- the display unit 223 is connected by DisplayPort or DVI
- the storage unit 224 is connected by USB.
- The units 101, 102, 103, and 104 shown in FIG. 1 are stored in the storage unit 2215 as software modules.
- Alternatively, the units 101, 102, 103, and 104 shown in FIG. 1 may be implemented as a dedicated image processing board. The optimum implementation may be chosen according to the purpose.
- the learning image acquired in FIG. 2A may be input from the storage unit 224, or may be input from a storage unit (not shown) via the network I / F.
- S302 Data expansion
- the input unit 101 expands the data of the acquired radiographic image.
- Data expansion treats one image as images with different characteristics by rotating or inverting it. If the high-resolution image size input to the CNN is less than or equal to the radiographic image size acquired by the input unit 101, cropping is also included. Data expansion is performed by a combination of cropping, rotation, and inversion, as sketched below.
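- A minimal sketch of this data expansion, assuming NumPy arrays; the crop size and the restriction to 90-degree rotations are illustrative assumptions, not specified by the patent.

```python
import numpy as np

def expand_data(image: np.ndarray, crop_size: int, rng: np.random.Generator) -> np.ndarray:
    """Produce one augmented high-resolution patch by random crop, rotation, and inversion."""
    h, w = image.shape
    top = int(rng.integers(0, h - crop_size + 1))
    left = int(rng.integers(0, w - crop_size + 1))
    patch = image[top:top + crop_size, left:left + crop_size]
    patch = np.rot90(patch, k=int(rng.integers(0, 4)))  # rotate by 0/90/180/270 degrees
    if rng.integers(0, 2):                               # horizontal inversion with probability 1/2
        patch = patch[:, ::-1]
    return patch.copy()

rng = np.random.default_rng(0)
high_res_patch = expand_data(np.zeros((512, 512)), crop_size=256, rng=rng)
```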
- the input unit 101 outputs the data-expanded image as a high-resolution image to the noise reduction unit 102.
- the high-resolution image corresponds to a correct image for the super-resolution image output by the CNN.
- The noise reduction unit 102 performs noise reduction processing on the high-resolution image and outputs the noise-reduced high-resolution image.
- ε filtering is a method of reducing only noise while preserving the edge structure, and is represented by Equation 1.
- H is the high-resolution image,
- HN is the high-resolution image after noise reduction, and
- ε is a threshold value for separating edges from noise.
- the threshold value for separating the edge and the noise may be arbitrarily determined by the user, or may be calculated by using the radiation noise model described later in the second embodiment.
- The ε filter is one example of a non-linear spatial filter; an NL-Means filter, a median filter, or the like may be used instead.
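- Equation 1 is not reproduced in this text; the following is one common form of the ε filter, given only as a hypothetical sketch consistent with the description (differences below the threshold ε are smoothed as noise, larger differences are preserved as edges). The window size is an assumption.

```python
import numpy as np

def epsilon_filter(H: np.ndarray, eps: float, k: int = 3) -> np.ndarray:
    """One common form of the epsilon filter (a sketch, not Equation 1 itself).

    Within a k x k window, neighbours whose difference from the centre pixel
    exceeds eps are replaced by the centre value, and the window is then
    averaged. Differences larger than eps (edges) are left untouched, while
    small differences (noise) are smoothed out.
    """
    pad = k // 2
    padded = np.pad(H, pad, mode="edge")
    HN = np.zeros_like(H, dtype=np.float64)
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            shifted = padded[pad + dy:pad + dy + H.shape[0],
                             pad + dx:pad + dx + H.shape[1]]
            diff = shifted - H
            clipped = np.where(np.abs(diff) <= eps, diff, 0.0)  # keep only small differences
            HN += H + clipped
    return HN / (k * k)
```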
- noise may be reduced by performing addition averaging as in Equation 2.
- H(x, y) is a high-resolution image, and n is the number of images to be added. That is, the noise reduction processing is performed by adding and averaging the radiographic image of the first resolution with a plurality of radiographic images taken under the same imaging conditions as that image.
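- Equation 2 is not reproduced in this text. Assuming it denotes simple frame averaging of n radiographic images H_1, ..., H_n taken under the same conditions, a plausible form is:

```latex
HN(x, y) = \frac{1}{n} \sum_{i=1}^{n} H_i(x, y)
```

- Averaging n frames leaves the signal unchanged while reducing the standard deviation of uncorrelated noise by a factor of 1/√n.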
- CNN may be used as a noise reduction method (Patent Document 3).
- the resolution conversion unit 103 inputs a high-resolution image with noise reduction, and outputs a low-resolution image with noise reduction by reduction processing.
- The method of reduction processing depends on the relationship between the low-resolution image and the high-resolution image to which the super-resolution processing is applied. For example, if a low-resolution image in the target system is obtained by averaging the high-resolution image, the reduction processing is performed by averaging. That is, a reduction method suited to what the CNN is intended to learn is selected. The same applies to the reduction ratio: if a CNN that doubles the resolution is to be trained, the image is reduced to half its size.
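- A sketch of this reduction processing, assuming the target system's low-resolution images correspond to 2×2 block averaging of the high-resolution image (i.e., a ×2 super-resolution CNN is being trained); the factor is an assumption for illustration.

```python
import numpy as np

def reduce_by_averaging(noise_reduced_hr: np.ndarray, factor: int = 2) -> np.ndarray:
    """Generate the noise-reduced low-resolution image by block averaging.

    The reduction method should match how low-resolution images arise in the
    target system; block averaging with factor=2 is used here because the
    surrounding text gives averaging and a x2 super-resolution CNN as examples.
    """
    h = (noise_reduced_hr.shape[0] // factor) * factor
    w = (noise_reduced_hr.shape[1] // factor) * factor
    blocks = noise_reduced_hr[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))
```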
- the machine learning unit 104 performs preprocessing on the noise-reduced high-resolution image and the noise-reduced low-resolution image, and preprocesses the preprocessed high-resolution image and the preprocessed low-resolution image. Is output.
- The preprocessing is processing for improving the robustness of the CNN. For example, normalization to the range defined by the maximum and minimum values of the image is performed, or standardization is performed so that the mean of the image becomes 0 and the standard deviation becomes 1.
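- A sketch of the two preprocessing options mentioned (min–max normalization and standardization to zero mean, unit standard deviation); whether the statistics are computed per image or over the whole training set is an assumption here.

```python
import numpy as np

def normalize_min_max(img: np.ndarray) -> np.ndarray:
    """Scale the image into [0, 1] using its own minimum and maximum values."""
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img, dtype=np.float64)

def standardize(img: np.ndarray) -> np.ndarray:
    """Shift and scale the image so that its mean is 0 and its standard deviation is 1."""
    std = float(img.std())
    return (img - img.mean()) / std if std > 0 else img - img.mean()
```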
- The preprocessed low-resolution image and the preprocessed high-resolution image used in S307 may be generated by the medical image processing apparatus 100 through steps S301 to S306, or images generated by a different medical image processing apparatus may be acquired. Further, when the preprocessed low-resolution image and the preprocessed high-resolution image are stored in advance in the storage unit 2015 of the medical image processing device 100, they may be read out from the storage unit 2015 and acquired.
- the machine learning unit 104 constructs a learning model by performing supervised learning using a set of input data and output data as supervised data.
- the teacher data is a set of a low-resolution image 411 as input data and a high-resolution image 415 as corresponding correct answer data.
- the machine learning unit 104 performs inference processing on the low-resolution image 411 by the parameters of the CNN 412 during learning, and outputs the super-resolution image 414 as the inference result (S401).
- the CNN 412 has a structure in which a large number of processing units 413 are arbitrarily connected.
- Examples of the processing unit 413 include a convolution operation, normalization processing, and processing by an activation function such as ReLU or Sigmoid; each has a parameter group describing the content of its processing.
- Sets that perform processing in order, such as convolution operation → normalization → activation function, are connected in about 3 to several hundred layers, and various structures can be taken.
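- As one possible concrete form of such stacked convolution → normalization → activation units, the following PyTorch sketch builds a small super-resolution CNN; the layer count, channel width, and use of a final pixel-shuffle upsampling layer are illustrative assumptions, not the patent's specific network.

```python
import torch
import torch.nn as nn

class SimpleSRCNN(nn.Module):
    """Illustrative super-resolution CNN: stacked conv -> norm -> ReLU units,
    followed by a x2 upsampling layer. Not the patent's specific network."""

    def __init__(self, channels: int = 64, num_layers: int = 8, scale: int = 2):
        super().__init__()
        layers = [nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_layers):
            layers += [
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            ]
        # Upsample by rearranging channels into a (scale x scale) spatial grid.
        layers += [
            nn.Conv2d(channels, scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        ]
        self.body = nn.Sequential(*layers)

    def forward(self, low_res: torch.Tensor) -> torch.Tensor:
        return self.body(low_res)

model = SimpleSRCNN()
sr = model(torch.zeros(1, 1, 128, 128))   # output shape: (1, 1, 256, 256)
```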
- the machine learning unit 104 calculates the loss function from the super-resolution image 414 and the high-resolution image 415, which are the inference results.
- Any function, such as a squared error or a cross-entropy error, can be used as the loss function.
- S404 Judgment of end of learning
- The machine learning unit 104 determines whether to end learning; if learning is to be continued, the process returns to S401.
- By repeating the parameter update of the CNN 412 so that the loss function decreases, the accuracy of the machine learning unit 104 can be improved.
- If learning is not to be continued, the process is completed.
- The end of learning is determined based on criteria set according to the problem, for example that the accuracy of the inference result reaches a certain value without overfitting, or that the loss function falls below a certain value.
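- Continuing the PyTorch sketch above (it reuses the SimpleSRCNN model), one iteration of S401–S404 might look as follows; the Adam optimizer, dummy batch, and stopping threshold are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

model = SimpleSRCNN()  # the illustrative network from the previous sketch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(low_res: torch.Tensor, high_res: torch.Tensor) -> float:
    """One iteration: inference (S401), loss (S402), back-propagation (S403)."""
    optimizer.zero_grad()
    sr = model(low_res)                 # S401: inference on the low-resolution image
    loss = F.mse_loss(sr, high_res)     # S402: squared-error loss against the correct image
    loss.backward()                     # S403: error back-propagation
    optimizer.step()                    # update the CNN parameter group
    return float(loss.item())

# S404: a simple end-of-learning check (dummy data and threshold for illustration only).
for step in range(10):
    loss_value = training_step(torch.rand(4, 1, 128, 128), torch.rand(4, 1, 256, 256))
    if loss_value < 1e-4:
        break
```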
- In this way, the parameters of the CNN are updated based on the comparison between the super-resolution image that the CNN outputs from the preprocessed low-resolution image and the preprocessed high-resolution image.
- In both the preprocessed low-resolution image and the preprocessed high-resolution image, the noise component has been reduced and the signal component is dominant. Therefore, the CNN learns to increase the resolution of the signal component only.
- The super-resolution image output using the CNN parameters learned in the present embodiment is therefore a super-resolution image in which enhancement of the noise component is suppressed, and a high-quality image can be obtained. It is desirable that noise reduction be performed on both the low-resolution images and the high-resolution images. If either the low-resolution image or the high-resolution image contains a noise component, a process of adjusting the noise amount is learned in addition to the resolution enhancement. As a result, the complexity of what must be learned increases, the image quality deteriorates, the network scale grows (the resources used increase), and so on.
- Since the medical image processing apparatus constructs a learning model using a low-resolution image and a high-resolution image whose noise components have been reduced as training data, it is possible to build a learning model that can output a medical image with improved resolution while reducing noise.
- the network scale can be reduced.
- the input unit 501, the resolution conversion unit 503, and the machine learning unit 504 included in the medical image processing device 500 in FIG. 5 are the same as the input unit 101, the resolution conversion unit 103, and the machine learning unit 104 in FIG.
- the noise estimation unit 502 inputs a high-resolution image and outputs the noise amount of the high-resolution image, and is stored in the storage unit 2015 as a software module in FIGS. 2A and 2B.
- Since S601 to S603 are the same as S301 to S303 of the first embodiment, their description is omitted. Further, since S605 to S607 are the same as S305 to S307 of the first embodiment, their description is omitted.
- the noise estimation unit 502 takes a high-resolution image as an input and estimates the amount of noise included in the high-resolution image.
- As a method of calculating the amount of noise, for example, the standard deviation within a predetermined region is used; the standard deviation is calculated in a plurality of regions and averaged.
- it may be calculated from the physical characteristics of X-rays.
- X-ray noise can be broadly divided into random quantum noise, which depends on the X-ray dose, and random system noise, which does not. If the conversion coefficient Kq used to calculate the random quantum noise amount from the pixel value of the radiographic image and the amount Sn of the random system noise are obtained in advance, the noise amount n for a pixel value x can be expressed as Equation 3.
- The calculated noise amount may be used as it is; however, since the X-ray noise amount depends on the X-ray dose, its effect on the actual image is difficult to interpret.
- Therefore, the S/N ratio obtained by dividing the signal value by the noise amount may be used as the estimation result instead.
- The noise estimation unit 502 compares the estimated noise amount with a preset threshold value; if the image is determined to have a small amount of noise, the resolution conversion unit 503 generates a low-resolution image (S605). On the other hand, if the amount of noise is determined to be large, the process is terminated without using the high-resolution image for learning (a sketch of this selection is given below). That is, the resolution conversion unit 503 corresponds to an example of an acquisition means that performs resolution conversion on the medical image of the first resolution selected based on the amount of noise estimated by the noise estimation unit 502 and acquires a medical image of a second resolution lower than the first resolution.
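- A sketch of this noise estimation and selection, under the assumption that Equation 3 has the common radiographic form n(x) = sqrt(Kq·x + Sn²) (dose-dependent quantum noise plus dose-independent system noise); Kq, Sn, the S/N threshold, and the use of the image mean as the signal value are illustrative assumptions.

```python
import numpy as np

KQ = 0.01             # conversion coefficient for quantum noise (assumed value)
SN = 2.0              # dose-independent random system noise (assumed value)
SNR_THRESHOLD = 50.0  # preset threshold for selecting learning images (assumed value)

def estimate_noise(pixel_value: float) -> float:
    """Noise amount for a pixel value x, assuming Equation 3 has the common form
    n(x) = sqrt(Kq * x + Sn^2): quantum noise growing with dose plus system noise."""
    return float(np.sqrt(KQ * pixel_value + SN ** 2))

def usable_for_learning(high_res: np.ndarray) -> bool:
    """Estimate the S/N of the image and keep it for learning only if noise is small.

    The signal value is taken here as the image mean; the region-wise
    standard-deviation estimate mentioned in the text could be used instead.
    """
    signal = float(high_res.mean())
    return signal / estimate_noise(signal) >= SNR_THRESHOLD
```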
- If an image with a large amount of noise were used, the CNN would learn to increase the resolution of not only the signal component but also the noise component, which degrades the image quality of the super-resolution image. Therefore, when an image with a large amount of noise is input, that image is not used for learning.
- In the above, the amount of noise was measured from the high-resolution image; however, noise reduction may first be applied to the high-resolution image and the amount of noise measured from the noise-reduced image. The amount of noise may also be estimated from the low-resolution image instead of the high-resolution image.
- Since S701 to S703 are the same as S601 to S603 of the second embodiment, their description is omitted.
- The noise estimation unit 502 takes a high-resolution image as input, estimates the amount of noise contained in the high-resolution image, and outputs the noise amount (S704). Since the estimation method is the same as that of the second embodiment, its description is omitted. Since S705 to S706 are the same as S605 to S606 of the second embodiment, their description is omitted.
- The machine learning unit 504 trains the CNN using the preprocessed low-resolution image and the preprocessed high-resolution image (S307).
- the learning flow is shown in FIGS. 4A and 4B.
- the machine learning unit 104 performs inference processing on the low-resolution image 411 using the parameters of the CNN 412 during learning, and outputs the super-resolution image 414 as the inference result (S401).
- the loss function is calculated from the super-resolution image 414 and the high-resolution image 415, which are the inference results (S402).
- the error back propagation is performed starting from the loss function calculated in S402, and the parameter group of CNN412 is updated (S403).
- The machine learning unit 104 changes the learning rate used for the parameter update of CNN 412 according to the amount of noise calculated in S704, in order to suppress the influence of noise.
- When the amount of noise is large, the magnitude of the learning rate is reduced when updating the parameters of CNN 412, so that the influence of noise is suppressed.
- The relationship between the amount of noise and the learning rate may be changed continuously or discretely.
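- A sketch of adjusting the learning rate according to the estimated noise amount; the two-level (discrete) mapping and the specific rates and threshold are assumptions, and a continuous mapping could be used instead.

```python
def learning_rate_for(noise_amount: float,
                      base_lr: float = 1e-4,
                      noise_threshold: float = 10.0) -> float:
    """Return a smaller learning rate for noisy training pairs so that they pull
    the CNN parameters less strongly (a discrete two-level mapping)."""
    return base_lr * 0.1 if noise_amount > noise_threshold else base_lr

# Applied to the optimizer from the earlier training sketch:
# for group in optimizer.param_groups:
#     group["lr"] = learning_rate_for(noise_amount)
```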
- the loss function may be changed according to the amount of noise calculated in S704.
- L2 loss: L2 = (SR − HR)², where SR is the super-resolution image and HR is the high-resolution image; the L1 loss is the corresponding absolute error |SR − HR|.
- Compared to the L2 loss, the L1 loss takes a smaller value when the difference between images is large, so large errors are reflected less strongly in the parameters. Therefore, if the L1 loss is used when the amount of noise is large, the influence of noise-related outliers can be reduced. That is, for example, for data with a large amount of noise, a loss function may be used whose output for large errors is smaller than that of the loss function used for data with a small amount of noise.
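- A sketch of switching the loss function by noise amount: L2 (squared error) for low-noise pairs and L1 (absolute error) for high-noise pairs, assuming PyTorch; the threshold value is illustrative.

```python
import torch
import torch.nn.functional as F

def noise_aware_loss(sr: torch.Tensor, hr: torch.Tensor,
                     noise_amount: float, noise_threshold: float = 10.0) -> torch.Tensor:
    """Use the L1 loss for noisy pairs so that large, noise-related errors are
    weighted less heavily; use the L2 (squared-error) loss otherwise."""
    if noise_amount > noise_threshold:
        return F.l1_loss(sr, hr)   # |SR - HR|: less sensitive to outliers
    return F.mse_loss(sr, hr)      # (SR - HR)^2
```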
- The machine learning unit 104 determines whether to continue learning based on the magnitude of the error or the like (S404). If learning continues, it is performed again with the updated parameters of CNN 412. If it does not continue, the updated parameters of CNN 412 become the final determined parameters.
- The present invention can also be realized by processing in which a program that realizes one or more functions of the above-described embodiments is supplied to a system or device via a network or a storage medium, and one or more processors in the computer of the system or device read and execute the program. It can also be realized by a circuit that realizes one or more functions.
- The processor or circuit may include a central processing unit (CPU), a micro processing unit (MPU), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), and a field-programmable gate array (FPGA). The processor or circuit may also include a digital signal processor (DSP), a data flow processor (DFP), or a neural processing unit (NPU).
- The medical image processing device in each of the above-described embodiments may be realized as a single device, or may take a form in which a plurality of devices are combined so as to be able to communicate with each other and execute the above-described processing; both forms are included in the embodiments of the present invention.
- the above processing may be executed by a common server device or a group of servers.
- the plurality of devices constituting the medical image processing device need not be present in the same facility or in the same country as long as they can communicate at a predetermined communication rate.
- A form in which a software program that realizes the functions of the above-described embodiments is supplied to a system or device, and the computer of the system or device reads and executes the code of the supplied program, is also included.
- The program code itself installed on the computer is also one of the embodiments of the present invention. Further, based on instructions included in the program read by the computer, the OS or the like running on the computer may perform part or all of the actual processing, and the functions of the above-described embodiments may be realized by that processing.
- The present invention is not limited to the above-described embodiments, and various modifications (including organic combinations of the embodiments) are possible based on the gist of the present invention; these are not excluded from the scope of the present invention. That is, all configurations obtained by combining the above-described embodiments are also included in the embodiments of the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Quality & Reliability (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Surgery (AREA)
- Optics & Photonics (AREA)
- Pathology (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Molecular Biology (AREA)
- High Energy & Nuclear Physics (AREA)
- Animal Behavior & Ethology (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Processing (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/295,147 US20230245274A1 (en) | 2020-10-26 | 2023-04-03 | Medical-image processing apparatus, medical-image processing method, and program for the same |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020179041A JP7562369B2 (ja) | 2020-10-26 | 2020-10-26 | 医用画像処理装置、医用画像処理方法及びプログラム |
JP2020-179041 | 2020-10-26 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/295,147 Continuation US20230245274A1 (en) | 2020-10-26 | 2023-04-03 | Medical-image processing apparatus, medical-image processing method, and program for the same |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022091869A1 true WO2022091869A1 (ja) | 2022-05-05 |
Family
ID=81383751
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/038600 WO2022091869A1 (ja) | 2020-10-26 | 2021-10-19 | 医用画像処理装置、医用画像処理方法及びプログラム |
Country Status (3)
Country | Link |
---|---|
US (1) | US20230245274A1
JP (1) | JP7562369B2
WO (1) | WO2022091869A1
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12154252B2 (en) * | 2021-09-30 | 2024-11-26 | Saudi Arabian Oil Company | Method for denoising wellbore image logs utilizing neural networks |
KR102730947B1 (ko) * | 2023-03-08 | 2024-11-14 | 건양대학교 산학협력단 | 지도학습된 행동 모델을 적용한 멀티 에이전트 강화학습 기반 멀티 테스킹 전산화단층영상 화질복원 시스템 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7302988B2 (ja) * | 2019-03-07 | 2023-07-04 | 富士フイルムヘルスケア株式会社 | 医用撮像装置、医用画像処理装置、及び、医用画像処理プログラム |
- 2020-10-26: JP JP2020179041A → JP7562369B2 (active)
- 2021-10-19: WO PCT/JP2021/038600 → WO2022091869A1 (application filing)
- 2023-04-03: US US18/295,147 → US20230245274A1 (pending)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019212050A (ja) * | 2018-06-05 | 2019-12-12 | 株式会社島津製作所 | 画像処理方法、画像処理装置および学習モデル作成方法 |
WO2020175446A1 (ja) * | 2019-02-28 | 2020-09-03 | 富士フイルム株式会社 | 学習方法、学習システム、学習済みモデル、プログラム及び超解像画像生成装置 |
Non-Patent Citations (1)
Title |
---|
FARSIU, SINA ET AL.: "Fast and Robust Multiframe Super Resolution", IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 13, no. 10, 2004, pages 1327 - 1344, XP011118230, DOI: 10.1109/TIP.2004.834669 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024145104A1 (en) * | 2022-12-28 | 2024-07-04 | Neuro42 Inc. | Deep learning super-resolution training for ultra low-field magnetic resonance imaging |
Also Published As
Publication number | Publication date |
---|---|
US20230245274A1 (en) | 2023-08-03 |
JP7562369B2 (ja) | 2024-10-07 |
JP2022070035A (ja) | 2022-05-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11120582B2 (en) | Unified dual-domain network for medical image formation, recovery, and analysis | |
US11026642B2 (en) | Apparatuses and a method for artifact reduction in medical images using a neural network | |
CN107610193B (zh) | 使用深度生成式机器学习模型的图像校正 | |
WO2022091869A1 (ja) | 医用画像処理装置、医用画像処理方法及びプログラム | |
CN112102213B (zh) | 低剂量ct图像处理方法、扫描系统及计算机存储介质 | |
CN109949235A (zh) | 一种基于深度卷积神经网络的胸部x光片去噪方法 | |
JP2021013736A (ja) | X線診断システム、画像処理装置及びプログラム | |
JP6044046B2 (ja) | 動き追従x線ct画像処理方法および動き追従x線ct画像処理装置 | |
JP7475979B2 (ja) | X線システム及び撮像プログラム | |
US12182970B2 (en) | X-ray imaging restoration using deep learning algorithms | |
US20230206404A1 (en) | Image processing apparatus, image processing method, and computer-readable medium | |
CN117203671A (zh) | 基于机器学习的迭代图像重建改进 | |
US12125198B2 (en) | Image correction using an invertable network | |
WO2021039211A1 (ja) | 機械学習装置、機械学習方法及びプログラム | |
JP7566696B2 (ja) | 画像処理装置、画像処理方法、学習装置、学習方法、及びプログラム | |
CN118840300A (zh) | 一种基于图像处理的ct图像质量优化系统 | |
WO2022091875A1 (ja) | 医用画像処理装置、医用画像処理方法及びプログラム | |
CN117541481A (zh) | 一种低剂量ct图像修复方法、系统及存储介质 | |
Vlasov et al. | An a priori information based algorithm for artifact preventive reconstruction in few-view computed tomography | |
CN113192155A (zh) | 螺旋ct锥束扫描图像重建方法、扫描系统及存储介质 | |
CN113706653B (zh) | 基于AwTV的CT迭代重建替代函数优化方法 | |
JP2023088665A (ja) | 学習装置、医用情報処理装置、学習データの生成方法、学習方法、およびプログラム | |
JP7532332B2 (ja) | 放射線画像処理装置、放射線画像処理方法、学習装置、学習データの生成方法、及びプログラム | |
CN119722511B (zh) | 一种ct图像去噪方法、系统、电子设备及存储介质 | |
JP2025528361A (ja) | シミュレーションx線におけるct画像形成の最適化 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21885988 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21885988 Country of ref document: EP Kind code of ref document: A1 |