US11823441B2 - Machine learning apparatus, machine learning method, and non-transitory computer-readable storage medium - Google Patents

Machine learning apparatus, machine learning method, and non-transitory computer-readable storage medium

Info

Publication number
US11823441B2
US11823441B2 · Application US17/675,071 (US202217675071A)
Authority
US
United States
Prior art keywords
image
input image
machine learning
region
augmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US17/675,071
Other versions
US20220172461A1 (en)
Inventor
Tsuyoshi Kobayashi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Assigned to CANON KABUSHIKI KAISHA reassignment CANON KABUSHIKI KAISHA ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KOBAYASHI, TSUYOSHI
Publication of US20220172461A1 publication Critical patent/US20220172461A1/en
Application granted granted Critical
Publication of US11823441B2 publication Critical patent/US11823441B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/7747Organisation of the process, e.g. bagging or boosting
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06T3/0006
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/02Affine transformations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10116X-ray image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Definitions

  • The data augmentation unit 120 can set the extractable region 317 in accordance with the rotation angle θ of the input image 301 in the affine transform. Also, the data augmentation unit 120 can set the parameters (magnification factors α and β) of the input image 301 in accordance with the rotation angle θ. Here, the data augmentation unit 120 sets the rotation angle θ and the magnification factors α and β such that no part of the input image 301 is made defective by the affine transform.
  • The extracted image 307 is limited such that it is included in the extractable region 317 surrounded by the vertices 309, 310, 311, 312, 313, 314, 315, and 316.
  • The coordinates (x, y) of these vertices are given by the following equations: the coordinates of the vertex 309 by equation (2), the vertex 310 by equation (3), the vertex 311 by equation (4), the vertex 312 by equation (5), the vertex 313 by equation (6), the vertex 314 by equation (7), the vertex 315 by equation (8), and the vertex 316 by equation (9).
  • The transform parameters can be set at random within the range in which all the vertices 309 to 316 are included in the transformed image 306.
  • For example, the data augmentation unit 120 can set the magnification factors α and β to about 0.8 to 1.2 and set the transform parameters such that the image widths W_trim and H_trim of the extracted image 307 relate to W_in and H_in at a ratio of, for example, about 1:2.
  • If the magnification factors α and β are set large, the extracted range becomes wide.
  • The magnification factors α and β of the transform parameters may be changed in synchronism with the size of the defect region 305 generated by the rotation angle θ.
  • In step S303, the data augmentation unit 120 performs signal amount adjustment processing for the extracted image 307 and outputs an adjusted image. For example, the data augmentation unit 120 performs, for the extracted image 307, multiplication using an arbitrary coefficient and addition using an arbitrary coefficient. Letting I_trim be the extracted image 307 and I_out be the adjusted image, the adjusted image is obtained by multiplying I_trim by a coefficient and adding an offset (equation (10)). As the multiplicative coefficient, an arbitrary value of about 0.1 to 10 may be set, and the extracted image I_trim multiplied by it to uniformly increase or decrease the signal. Alternatively, a two-dimensional filter such as a Gaussian filter may be set and applied to the extracted image I_trim. As the additive term, a uniform value may be added or subtracted, or arbitrary random noise may be added for each pixel. When adding noise, noise according to the physical characteristics of the radiation detection apparatus 104 can also be added. (A short sketch of this adjustment appears after this list.)
  • FIG. 3A shows an example in which steps S301 to S303 are processed sequentially. However, the steps need not be performed in this order; only some of the processes may be performed, or their order may be changed arbitrarily.
  • Another arbitrary data augmentation method may be used as long as a region where image information is defective is not newly generated by the data augmentation.
  • Alternatively, the data augmentation unit 120 can set the extractable region 317 (FIG. 3C) in the input image itself to limit the range from which the extracted image 307 is acquired. In this case, the data augmentation unit 120 performs data augmentation by generating the extracted image 307 as a part of the input image 301 constituting the training data, and limits the range to acquire the extracted image 307 such that the region where image information is defective (the defect region 305) is not included in the extracted image 307.
  • FIG. 4 is a view schematically showing the concept of inference by the inference unit 110. The inference unit 110 has been trained by the learning unit 109 and can perform inference processing based on the learned parameters acquired through that learning. The inference unit 110 includes a learned convolutional neural network (CNN) 402 having the learned parameter group obtained by the learning unit 109, applies inference processing by the learned CNN 402 to an input image 401, and outputs an inference result 403.
  • In the machine learning apparatus 108, for example, it is preferable that learning is performed before introduction to the user's environment and the parameter group of the learned CNN 402 is obtained in advance. However, it is also possible to update the machine learning apparatus 108 in accordance with the usage situation after introduction. In this case, a set of an image acquired in the user's environment and the corresponding irradiation field data is stored as training data in the storage device 115, and the learning unit 109 of the machine learning apparatus 108 can perform additional learning to update the parameter group of the learned CNN 402.
  • The additionally trained inference unit 110 can perform inference processing based on both the result of learning to which a set of an image captured using the radiation imaging system 100 and the corresponding irradiation field data has been added as training data, and the result of the learning performed in advance.
  • The learning unit 109 can select the timing of executing additional learning from, for example, the timing when a predetermined number or more of data sets are accumulated in the storage device 115, or the timing when a predetermined number or more of data sets in which the irradiation field recognition results have been corrected by the user are accumulated.
  • As the initial values of the CNN parameter group for the additional learning, the parameter group of the learned CNN 402 used before the additional learning may be set, so that transfer learning is performed.
  • The storage device 115 and the machine learning apparatus 108 need not always be mounted on the information processing apparatus 107; they may instead be provided on a cloud server connected via a network. In this case, data sets obtained by a plurality of radiation imaging systems 100 may be collected and stored on the cloud server, and the machine learning apparatus 108 may perform additional learning using the data sets collected and stored there.
  • Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., an application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s).
  • The computer may comprise one or more processors (e.g., a central processing unit (CPU) or micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
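The signal amount adjustment of step S303 lends itself to a short illustration. Below is a minimal NumPy sketch of the kind of adjustment described in the list above; the helper name adjust_signal and its parameter names are illustrative and not taken from the patent.

```python
import numpy as np

def adjust_signal(i_trim, gain=1.0, offset=0.0, noise_sigma=0.0, rng=None):
    """Signal amount adjustment (step S303, sketch): multiply the extracted
    image by a coefficient, add a uniform value, and optionally add per-pixel
    random noise, in the spirit of equation (10)."""
    rng = rng or np.random.default_rng()
    i_out = gain * i_trim + offset
    if noise_sigma > 0.0:
        i_out = i_out + rng.normal(0.0, noise_sigma, size=i_trim.shape)
    return i_out

# Example: brighten the extracted image slightly and add mild noise.
extracted = np.random.rand(256, 256).astype(np.float64) * 1000.0
adjusted = adjust_signal(extracted, gain=1.2, offset=10.0, noise_sigma=5.0)
```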

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A machine learning apparatus for extracting a region from an input image, comprises: an inference unit configured to output the region by inference processing for the input image; and an augmentation unit configured to, in learning when learning of the inference unit is performed based on training data, perform data augmentation by increasing the number of input images constituting the training data, wherein the augmentation unit performs the data augmentation such that a region where image information held by the input image is defective is not included.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a Continuation of International Patent Application No. PCT/JP2020/028193, filed Jul. 21, 2020, which claims the benefit of Japanese Patent Application No. 2019-158927, filed Aug. 30, 2019, both of which are hereby incorporated by reference herein in their entirety.
BACKGROUND OF THE INVENTION Field of the Invention
The present invention relates to a machine learning apparatus, a machine learning method, and a non-transitory computer-readable storage medium storing a program and, more particularly, to a machine learning technique capable of appropriately augmenting training data at the time of learning.
Background Art
In recent years, techniques that perform object recognition on an image using machine learning and detect the position of a target have become popular. In particular, configurations that perform supervised learning using a convolutional neural network (to be referred to as a “CNN” hereinafter) have been deployed in many fields because of their high performance.
One of the application fields is region extraction processing in medical images. In a medical radiation imaging apparatus, to suppress the influence of radiation on regions other than the region of interest necessary for diagnosis (to be referred to as the “irradiation field” hereinafter), the irradiation field is generally narrowed using a collimator, thereby preventing irradiation of regions outside it. To perform image processing for the irradiation field, a technique of correctly extracting the irradiation field in an image is important; for example, PTL 1 proposes various techniques using machine learning.
As a characteristic feature of image processing using machine learning, the quality and amount of training data are directly associated with the performance. It is therefore preferable to use a large amount of training data for learning. However, for images such as medical images whose availability is not necessarily high, it is often impossible to ensure sufficient training data.
For this reason, there has been proposed a data augmentation technique for increasing variations of images by artificially deforming held training data. For example, PTL 2 proposes a technique of augmenting data by rotating an image.
CITATION LIST Patent Literature
  • PTL 1: Japanese Patent Laid-Open No. 04-261649
  • PTL 2: Japanese Patent Laid-Open No. 2017-185007
The technique of PTL 2 performs data augmentation by rotating an image to a plurality of angles. If an image is simply rotated, the image after the rotation may include a region where image information (image signal) is defective. In general, an arbitrary value such as zero is substituted into the region where image information is defective.
Consider the case of the above-described medical radiation imaging apparatus. In regions other than the irradiation field, since radiation is shielded by the collimator, only a small amount of image information exists, or almost none. That is, when recognizing the irradiation field, the fact that the amount of image information derived from the input image is small is itself one of the features to be learned. Hence, if data augmentation newly creates a region where the image information is uniformly set to an arbitrary value such as zero, learning may fail, and the accuracy may even be lowered by the augmentation.
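This problem can be reproduced in a few lines of NumPy/SciPy. The sketch below uses a synthetic radiograph (the pixel values are illustrative only) and shows how a naive rotation pads the corners with zeros, producing exactly the kind of defect region discussed above.

```python
import numpy as np
from scipy.ndimage import rotate

# Synthetic radiograph: a low-signal collimator border surrounding a brighter
# irradiation field (values are illustrative only).
image = np.full((256, 256), 50.0)      # collimator region (radiation shielded)
image[32:224, 32:224] = 1000.0         # irradiation field

# Naive data augmentation: rotate by 30 degrees and pad with zeros.
rotated = rotate(image, angle=30.0, reshape=True, order=1, cval=0.0)

# The zero-padded corners form a defect region that carries no information
# derived from the input image, yet superficially resembles the collimator
# region -- the artefact the present embodiment avoids.
defect_fraction = np.mean(rotated == 0.0)
print(f"fraction of zero-filled (defect) pixels: {defect_fraction:.2%}")
```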
The present invention has been made in consideration of the above-described problem, and provides a machine learning technique capable of more accurately extracting a region by performing appropriate data augmentation for training data used in learning.
SUMMARY OF THE INVENTION
According to one aspect of the present invention, there is provided a machine learning apparatus for extracting a region from an input image, comprising: an inference unit configured to output the region by inference processing for the input image; and an augmentation unit configured to, in learning when learning of the inference unit is performed based on training data, perform data augmentation by increasing the number of input images constituting the training data, wherein the augmentation unit performs the data augmentation such that a region where image information held by the input image is defective is not included.
According to another aspect of the present invention, there is provided a machine learning method performed by a machine learning apparatus including an inference unit configured to output a region by inference processing for an input image and configured to extract the region from the input image, the method comprising: performing, in learning when learning of the inference unit is performed based on training data, data augmentation by increasing the number of input images constituting the training data, wherein the data augmentation is performed such that a region where image information held by the input image is defective is not included.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain principles of the invention.
FIG. 1 shows a block diagram 1 a showing an example of the basic configuration of a radiation imaging system including a machine learning apparatus according to the embodiment, and a block diagram 1 b showing an example of the configuration of a learning unit;
FIG. 2 shows a flowchart 2 a showing the procedure of processing of the learning unit, and a view 2 b schematically showing the concept of learning of the learning unit;
FIG. 3A is a flowchart showing the procedure of processing of a data augmentation unit;
FIG. 3B is a view schematically showing image examples in data augmentation processing;
FIG. 3C is a view schematically showing an image example in data augmentation processing; and
FIG. 4 is a schematic view showing the concept of inference in an inference unit.
DESCRIPTION OF THE EMBODIMENTS
Hereinafter, embodiments will be described in detail with reference to the attached drawings. Note, the following embodiments are not intended to limit the scope of the claimed invention. Multiple features are described in the embodiments, but limitation is not made to an invention that requires all such features, and multiple such features may be combined as appropriate. Furthermore, in the attached drawings, the same reference numerals are given to the same or similar configurations, and redundant description thereof is omitted.
In FIG. 1, 1 a is a block diagram showing an example of the basic configuration of a radiation imaging system including a machine learning apparatus according to the embodiment. In addition, 1 b in FIG. 1 is a block diagram showing an example of the configuration of a learning unit.
A radiation imaging system 100 includes a radiation generating apparatus 101 that generates radiation, a bed 103 on which an object 102 is arranged, a radiation detection apparatus 104 that detects the radiation and outputs image data according to the radiation that has passed through the object 102, a control apparatus 105 that controls the radiation generating timing and the radiation generating conditions of the radiation generating apparatus 101, a data collection apparatus 106 that collects various kinds of digital data, and an information processing apparatus 107 that controls image processing or the entire apparatus in accordance with a user instruction. Note that the configuration of the radiation imaging system 100 is sometimes called a radiation imaging apparatus.
The information processing apparatus 107 includes a machine learning apparatus 108 including a learning unit 109 and an inference unit 110, a CPU 112, a memory 113, an operation panel 114, a storage device 115, a display device 116, and a diagnostic image processing apparatus 117. These are electrically connected via a CPU bus 111.
The memory 113 stores various kinds of data necessary in processing of the CPU 112, and also includes a work memory for the CPU 112. The CPU 112 is configured to, using the memory 113, control the operation of the entire apparatus in accordance with a user instruction input to the operation panel 114.
In the embodiment of the present invention, radiation is not limited to X-rays to be generally used and includes α-rays, β-rays, γ-rays, and the like, which are beams formed by particles (including photons) emitted upon radioactive decay, and beams (for example, particle rays and cosmic rays) with equal or higher energy.
In accordance with a user instruction via the operation panel 114, the radiation imaging system 100 starts the imaging sequence of the object 102. The radiation generating apparatus 101 generates radiation under predetermined conditions, and the radiation detection apparatus 104 is irradiated with the radiation that has passed through the object 102. Here, the control apparatus 105 controls the radiation generating apparatus 101 based on radiation generating conditions such as a voltage, a current, and an irradiation time, and causes the radiation generating apparatus 101 to generate radiation under the predetermined conditions.
The radiation detection apparatus 104 detects the radiation that has passed through the object 102, converts the detected radiation into an electrical signal, and outputs image data according to the radiation. The image data output from the radiation detection apparatus 104 is collected as digital image data by the data collection apparatus 106. The data collection apparatus 106 transfers the image data collected from the radiation detection apparatus 104 to the information processing apparatus 107. In the information processing apparatus 107, the image data is transferred to the memory 113 via the CPU bus 111 under the control of the CPU 112.
In the radiation imaging system 100, the machine learning apparatus 108 performs region extraction processing for the image data stored in the memory 113, and extracts a region from the input image. Here, the input image is the image captured using the radiation imaging system 100, and the region is the irradiation field irradiated with radiation by the radiation imaging system 100. As the region extraction processing, the machine learning apparatus 108 can perform, for example, irradiation field recognition processing of extracting the irradiation field in the image captured by radiography. Here, the irradiation field recognition processing is processing of classifying a collimator region and an irradiation field, as will be described later. In the following explanation, an example in which the machine learning apparatus 108 performs irradiation field recognition processing as region extraction processing will be described.
The machine learning apparatus 108 is configured to perform region extraction processing using machine learning, and includes the learning unit 109 and the inference unit 110. Also, as shown in 1 b of FIG. 1, the learning unit 109 includes, as functional components, a data augmentation unit 120, an inference unit 121, a parameter updating unit 122, and an end determination unit 123. Here, the inference unit 121 is the inference unit halfway through learning. When learning ends, the inference unit 110 is set in the machine learning apparatus 108 as the inference unit after learning.
As the processing of the machine learning apparatus 108, for example, a region is extracted from an input image based on supervised learning using a convolutional neural network (CNN). In the machine learning apparatus 108, when performing region extraction processing, the learning unit 109 performs supervised learning using a plurality of training data prepared in advance, and decides parameters of the CNN. When performing region extraction processing, the inference unit 110 performs region extraction processing by applying the CNN having the parameters decided by the learning unit 109, and transfers the region extraction result to the memory 113.
The region extraction result and the image data are transferred to the diagnostic image processing apparatus 117. The diagnostic image processing apparatus 117 applies diagnostic image processing such as gradation processing, emphasis processing, and noise reduction processing to the image data, and creates an image suitable for diagnosis. The result is stored in the storage device 115 and displayed on the display device 116.
As for the processing of the learning unit 109 in the machine learning apparatus 108, a case in which a convolutional neural network (CNN) is used will be described as an example with reference to FIG. 2 . In FIG. 2, 2 a is a flowchart showing the procedure of processing of the learning unit 109, and 2 b is a view schematically showing the concept of learning of the learning unit 109.
Learning is performed based on training data. Training data is formed by a set of an input image 201 and ground truth data 205 corresponding to the input image 201 and representing an extraction region. As the ground truth data 205, for example, a labeling image formed by labeling, using an arbitrary value, a predetermined region (extraction region) in the input image can be used. Also, as the ground truth data 205, for example, coordinate data representing the extraction region in the input image by coordinates can be used. Alternatively, as the ground truth data 205, for example, data that specifies the boundary of the extraction region in the input image by a line or a curve can be used. In irradiation field recognition processing, as the ground truth data 205, for example, a binary labeling image in which an irradiation field in the input image 201 is set to 1, and a collimator region is set to 0 can be used.
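As a concrete illustration of such training data, the following sketch builds a binary labeling image of the kind described above. The helper make_irradiation_field_label and the rectangular field are hypothetical simplifications introduced here for illustration; a real irradiation field need not be rectangular.

```python
import numpy as np

def make_irradiation_field_label(height, width, field_box):
    """Binary ground-truth labeling image: 1 inside the irradiation field,
    0 in the collimator region (illustrative helper; the patent only requires
    some labeling image, coordinate data, or boundary description)."""
    top, left, bottom, right = field_box
    label = np.zeros((height, width), dtype=np.uint8)   # collimator region = 0
    label[top:bottom, left:right] = 1                    # irradiation field = 1
    return label

# One training sample: an input radiograph and its ground-truth mask.
input_image = np.random.rand(256, 256).astype(np.float32)   # placeholder pixels
ground_truth = make_irradiation_field_label(256, 256, (32, 32, 224, 224))
training_sample = (input_image, ground_truth)
```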
In step S201, the data augmentation unit 120 applies data augmentation processing to training data. Details of the data augmentation processing will be described later.
In step S202, the inference unit 121 performs, for the input image 201, inference processing using the parameters of the convolutional neural network (CNN) 202 halfway through learning, and outputs an inference result 204. That is, the inference unit 121 outputs a region by inference processing for the input image. Here, the CNN 202 has a structure in which a number of processing units 203 are connected in an arbitrary manner. Each processing unit 203 includes, for example, a convolutional operation, normalization processing, and processing by an activation function such as ReLU or Sigmoid, together with a parameter group that describes the processing contents. Sets of these processes performed in order, for example convolutional operation → normalization → activation function, are connected in three to several hundred layers, and various structures can be taken.
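The patent does not prescribe a particular architecture, but a minimal PyTorch sketch of the kind of structure described (stacked convolution → normalization → activation processing units ending in a per-pixel score) might look as follows; the layer count and channel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """One processing unit: convolution -> normalization -> activation."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.norm(self.conv(x)))

class SimpleSegmentationCNN(nn.Module):
    """A few stacked processing units ending in a 1-channel map that scores
    each pixel as irradiation field vs. collimator region."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(ConvBlock(1, 16), ConvBlock(16, 16), ConvBlock(16, 16))
        self.head = nn.Conv2d(16, 1, kernel_size=1)

    def forward(self, x):                # x: (N, 1, H, W) radiograph
        return self.head(self.body(x))   # logits: (N, 1, H, W)
```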
In step S203, the parameter updating unit 122 calculates a loss function from the inference result 204 and the ground truth data 205. As the loss function, an arbitrary function, for example, a square error or a cross entropy error can be used.
In step S204, the parameter updating unit 122 performs back propagation using the loss function calculated in step S203 as a starting point, and updates the parameter group of the convolutional neural network (CNN) 202 halfway through learning.
In step S205, the end determination unit 123 determines whether to end the learning. To continue the learning (NO in step S205), the process returns to step S201, and the processes of steps S201 to S204 are executed again. By repeating this processing while changing the input image 201 and the ground truth data 205, the parameters of the CNN 202 are updated so that the loss function decreases, and the accuracy of the machine learning apparatus 108 can be increased. If the learning is sufficient and the end determination unit 123 determines to end it (YES in step S205), the processing ends. The end of learning can be judged by a criterion set in accordance with the problem, for example, whether overfitting (overlearning) has not occurred and the accuracy of the inference result is at or above a predetermined value, or whether the loss function is at or below a predetermined value. Note that since the calculation cost of steps S201 to S205 is high, a calculation unit with high parallel calculation performance, such as a GPU, can also be used in the configuration of the learning unit 109.
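Steps S201 to S205 map naturally onto a standard supervised training loop. The following PyTorch sketch assumes a data loader yielding (image, label) tensors of shape (N, 1, H, W) and an augment callable implementing step S201; the fixed loss threshold used as the end criterion is only one of the possibilities mentioned above.

```python
import torch
import torch.nn as nn

def train(model, loader, augment, max_epochs=100, loss_goal=0.05, lr=1e-3):
    """Sketch of steps S201-S205: augment, infer, compute loss, back-propagate,
    and stop once the average loss reaches a preset value."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCEWithLogitsLoss()              # a cross-entropy-type loss (step S203)

    for epoch in range(max_epochs):
        epoch_loss = 0.0
        for image, label in loader:                 # training data: input image + ground truth
            image, label = augment(image, label)        # step S201: data augmentation
            logits = model(image)                       # step S202: inference with current CNN
            loss = criterion(logits, label.float())     # step S203: loss function
            optimizer.zero_grad()
            loss.backward()                             # step S204: back propagation
            optimizer.step()                            #            parameter update
            epoch_loss += loss.item()
        epoch_loss /= max(len(loader), 1)
        if epoch_loss <= loss_goal:                     # step S205: end determination
            break
    return model
```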
Processing of the data augmentation unit 120 will be described next with reference to FIGS. 3A, 3B, and 3C. FIG. 3A is a flowchart showing the procedure of processing of the data augmentation unit 120, and FIGS. 3B and 3C are views schematically showing image examples in data augmentation processing. At the time of learning when learning of the inference unit 121 is performed based on the training data, the data augmentation unit 120 performs data augmentation by increasing the number of input images that constitute the training data. The data augmentation unit 120 performs data augmentation such that a region where image information held by the input image is defective is not included.
The data augmentation unit 120 performs, for the training data, data augmentation using at least one augmentation processing of affine transform processing, extraction processing, and signal amount adjustment processing. The data augmentation unit 120 performs the same augmentation processing for the input image and the ground truth data. The data augmentation unit 120 augments training data by performing step S301 (affine transform processing), step S302 (extraction processing), and step S303 (signal amount adjustment processing). This can improve generalization performance in learning of the machine learning apparatus 108.
In step S301, the data augmentation unit 120 applies affine transform processing to training data, thereby rotating, inverting, enlarging, or reducing an image. The same affine transform is applied to, for example, the input image 201 and the ground truth data 205 shown in 2 b of FIG. 2 . An example of a labeling image having the same size as the input image 201 will be shown below as the ground truth data 205. Even if the ground truth data 205 is a labeling image whose size is different from the input image 201, or is an equation of a line or a curve representing the boundary of a desired region, augmentation processing having the same meaning as the data augmentation applied to the input image is performed for the ground truth data.
Let (x, y) be the coordinate system of the input image, (X′, Y′) be the coordinate system of a transformed image, and a, b, c, d, e, and f be the transform parameters of affine transform processing. In this case, affine transform processing can be expressed by equation (1) below. As the transform parameters a to f, arbitrary values can be selected for each training data. However, the range of values the transform parameters can take is limited by a rule to be described later.
$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} X' \\ Y' \end{pmatrix} + \begin{pmatrix} e \\ f \end{pmatrix} \qquad (1)$$
For example, to rotate the input image by θ and enlarge it to α times in the x-axis direction and β times in the y-axis direction, a=α cos θ, b=−α sin θ, c=β sin θ, d=β cos θ, and e=f=0 are set.
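For reference, the following NumPy sketch builds the transform parameters of equation (1) for that rotation-plus-magnification case; the function name affine_params is illustrative.

```python
import numpy as np

def affine_params(theta_deg, alpha, beta):
    """Transform parameters of equation (1) for a rotation by theta combined
    with magnifications alpha (x direction) and beta (y direction); e = f = 0."""
    t = np.deg2rad(theta_deg)
    a, b = alpha * np.cos(t), -alpha * np.sin(t)
    c, d = beta * np.sin(t), beta * np.cos(t)
    return np.array([[a, b], [c, d]]), np.zeros(2)

# Equation (1) maps a transformed-image coordinate (X', Y') to an input-image
# coordinate (x, y): (x, y) = M (X', Y') + (e, f).
M, ef = affine_params(theta_deg=15.0, alpha=1.1, beta=0.9)
x, y = M @ np.array([100.0, 50.0]) + ef
```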
In step S302, the data augmentation unit 120 performs extraction processing for the transformed image, and outputs an extracted image. The data augmentation unit 120 selects the size (width and height) of the extracted image in accordance with the input/output size of the CNN 202.
Consider an example in which data augmentation is performed for an input image 301 including an object 302, a collimator region 303, and an irradiation field 304, as shown in B1 of FIG. 3B.
B2 of FIG. 3B is a view schematically showing an example of an image in a case in which the processes of steps S301 and S302 are applied to the original input image 301. Here, the data augmentation unit 120 affine-transforms the input image 301 in accordance with the processing of step S301, and generates a transformed image 306. After the transformed image 306 is generated, the data augmentation unit 120 performs extraction processing for the transformed image 306 in accordance with the processing of step S302, and generates an extracted image 307.
If rotation processing is included in affine transform, a defect region 305 including an invalid region where image information derived from the input image 301 is defective is generated in the transformed image 306.
Depending on the magnification factor in step S301 or the extraction position or the size of the extracted image in step S302, a part of the defect region 305 may be included in the extracted image 307, as shown in B2 of FIG. 3B.
The collimator region 303 is a region where radiation is shielded by an irradiation field stop, and it therefore surrounds the outer periphery of the input image 301. Its characteristic feature is that the image information (image signal) drops abruptly at the boundary with the irradiation field 304.
On the other hand, the defect region 305 is a region that surrounds the outer periphery of the transformed image 306 and in which image information is defective, so its characteristic feature is close to that of the collimator region 303. However, whereas the collimator region 303 includes scattered rays derived from the object 302 and the irradiation field 304, the defect region 305 does not include the influence of such a physical phenomenon. For this reason, the defect region 305 has a characteristic feature that is similar to, but distinctly different from, that of the collimator region 303. Note that since the signal of the collimator region 303 is generated by a complex physical phenomenon, it is difficult to reproduce it artificially in the defect region 305.
Irradiation field recognition processing is processing of classifying the collimator region 303 and the irradiation field 304. If data augmentation causes the defect region 305 to be included in the extracted image 307 used for learning, the machine learning apparatus 108 learns information other than the feature of the collimator region 303, which should originally be learned, and the accuracy may be lowered by the data augmentation. Hence, the transform parameters of the affine transform in step S301 and the extraction position in step S302 need to be selected such that the defect region 305 is not included in the extracted image 307, as shown in B3 of FIG. 3B.
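Before the analytic limitation described with reference to FIG. 3C, note that this rule can also be enforced by a brute-force check: reject any candidate extraction position whose window overlaps the defect region. The sketch below assumes a boolean defect mask such as the one returned by the previous sketch; the retry loop and its bound are illustrative assumptions.

```python
import numpy as np

def crop_is_valid(defect_mask, top, left, h_trim, w_trim):
    """True if the (h_trim x w_trim) window at (top, left) contains no defect pixel."""
    window = defect_mask[top:top + h_trim, left:left + w_trim]
    return window.shape == (h_trim, w_trim) and not window.any()

def sample_extracted_image(transformed, defect_mask, h_trim, w_trim, rng, max_tries=100):
    """Draw random extraction positions until one excludes the defect region entirely."""
    h, w = transformed.shape
    for _ in range(max_tries):
        top = int(rng.integers(0, h - h_trim + 1))
        left = int(rng.integers(0, w - w_trim + 1))
        if crop_is_valid(defect_mask, top, left, h_trim, w_trim):
            return transformed[top:top + h_trim, left:left + w_trim]
    return None  # give up and redraw the affine transform parameters instead
```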
Limitation of the transform parameters for preventing the defect region 305 from being included in the extracted image 307 will be described next with reference to FIG. 3C.
As shown in FIG. 3C, consider an example in which the transform parameters of the affine transform in step S301 are set to a = α cos θ, b = −α sin θ, c = β sin θ, d = β cos θ, and e = f = 0, so that the input image 301 is enlarged by a magnification factor α in the x direction and a magnification factor β in the y direction and rotated by a rotation angle θ. Letting Win be the image width of the input image 301, Hin be its image height, Wtrim be the image width of the extracted image 307, and Htrim be its image height, the processing of the data augmentation unit 120 generates the transformed image 306, which has an image width of (αWin cos θ + βHin sin θ) and an image height of (αWin sin θ + βHin cos θ) and includes the defect region 305.
In step S302, to prevent the defect region 305 from being included in the extracted image 307, the data augmentation unit 120 sets an extractable region 317 in the transformed image 306, and limits the range to acquire the extracted image 307.
The data augmentation unit 120 performs data augmentation by generating the extracted image 307 that extracts a part of the transformed image 306 obtained by affine transform of the input image constituting the training data, and limits the range to acquire the extracted image 307 such that the region (defect region 305) in which image information is defective is not included in the extracted image 307. The data augmentation unit 120 sets the extractable region 317 (FIG. 3C) in the transformed image 306, and limits the range to acquire the extracted image 307.
The data augmentation unit 120 can set the extractable region 317 in accordance with the rotation angle θ of the input image 301 in the affine transform. Also, the data augmentation unit 120 can set the parameters (magnification factors α and β) representing the magnification factors of the input image 301 in accordance with the rotation angle θ of the input image 301 in the affine transform. Here, the data augmentation unit 120 sets the rotation angle θ and the parameters (magnification factors α and β) representing the magnification factors of the input image 301 such that a part of the input image 301 is not made defective by the affine transform. The extracted image 307 is limited such that it is included in the extractable region 317 surrounded by vertices 309, 310, 311, 312, 313, 314, 315, and 316. When an origin 318 of coordinates is set at the upper left corner of the image, the coordinates (x, y) of each vertex are given by the following equations: the coordinates of the vertex 309 by equation (2), those of the vertex 310 by equation (3), those of the vertex 311 by equation (4), those of the vertex 312 by equation (5), those of the vertex 313 by equation (6), those of the vertex 314 by equation (7), those of the vertex 315 by equation (8), and those of the vertex 316 by equation (9).
(x_{309}, y_{309}) = (H_{trim} \cos\theta \sin\theta,\ \alpha W_{in} \sin\theta - H_{trim} \sin^2\theta)  (2)
(x_{310}, y_{310}) = (H_{trim} \cos\theta \sin\theta,\ \alpha W_{in} \sin\theta + H_{trim} \cos^2\theta)  (3)
(x_{311}, y_{311}) = (\beta H_{in} \sin\theta - W_{trim} \sin^2\theta,\ \alpha W_{in} \sin\theta + \beta H_{in} \cos\theta - W_{trim} \cos\theta \sin\theta)  (4)
(x_{312}, y_{312}) = (\beta H_{in} \sin\theta + W_{trim} \cos^2\theta,\ \alpha W_{in} \sin\theta + \beta H_{in} \cos\theta - W_{trim} \cos\theta \sin\theta)  (5)
(x_{313}, y_{313}) = (\beta H_{in} \sin\theta + \alpha W_{in} \cos\theta - H_{trim} \cos\theta \sin\theta,\ \beta H_{in} \cos\theta + H_{trim} \sin^2\theta)  (6)
(x_{314}, y_{314}) = (\beta H_{in} \sin\theta + \alpha W_{in} \cos\theta - H_{trim} \cos\theta \sin\theta,\ \beta H_{in} \cos\theta - H_{trim} \cos^2\theta)  (7)
(x_{315}, y_{315}) = (\alpha W_{in} \cos\theta + W_{trim} \cos\theta \sin\theta,\ W_{trim} \cos\theta \sin\theta)  (8)
(x_{316}, y_{316}) = (\alpha W_{in} \cos\theta - W_{trim} \cos^2\theta,\ W_{trim} \cos\theta \sin\theta)  (9)
Here, the image width Win and image height Hin of the input image, the magnification factors α and β, the rotation angle θ, and the image width Wtrim and image height Htrim of the extracted image 307 can be set at random within the range in which all the vertices 309 to 316 are included in the transformed image 306.
Note that when setting the transform parameters, if the magnification factors α and β are too large, or if the image width Wtrim and image height Htrim of the extracted image 307 are much smaller than the image width Win and image height Hin of the input image, it becomes difficult to include the collimator region 303 in the extracted image 307, and effective data augmentation may be impossible. For this reason, the data augmentation unit 120 can set the magnification factors α and β to, for example, about 0.8 to 1.2, and set the transform parameters such that the lengths Wtrim and Htrim of the extracted image 307 and the lengths Win and Hin of the input image satisfy a ratio of, for example, about 1:2.
When the rotation angle θ is, for example, 0° to 45°, the larger the rotation angle θ, the larger the defect region 305 in the transformed image 306. Setting the magnification factors α and β larger in such cases keeps the extractable range wide. In this way, the magnification factors α and β of the transform parameters may be changed in synchronization with the size of the defect region 305 generated by the rotation angle θ.
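As a concrete illustration of equations (2) to (9) and of the parameter rule above, the sketch below computes the eight vertices of the extractable region 317 and redraws the magnification factors and rotation angle until all vertices fall inside the transformed image. The acceptance loop, the sampling ranges, and the example image sizes are assumptions chosen only to match the values suggested in the text.

```python
import numpy as np

def extractable_vertices(w_in, h_in, alpha, beta, theta, w_trim, h_trim):
    """Vertices 309-316 of the extractable region 317, transcribed from equations (2)-(9);
    theta is in radians and the origin is the upper-left corner of the transformed image."""
    s, c = np.sin(theta), np.cos(theta)
    return np.array([
        (h_trim * c * s, alpha * w_in * s - h_trim * s ** 2),                                      # 309
        (h_trim * c * s, alpha * w_in * s + h_trim * c ** 2),                                      # 310
        (beta * h_in * s - w_trim * s ** 2, alpha * w_in * s + beta * h_in * c - w_trim * c * s),  # 311
        (beta * h_in * s + w_trim * c ** 2, alpha * w_in * s + beta * h_in * c - w_trim * c * s),  # 312
        (beta * h_in * s + alpha * w_in * c - h_trim * c * s, beta * h_in * c + h_trim * s ** 2),  # 313
        (beta * h_in * s + alpha * w_in * c - h_trim * c * s, beta * h_in * c - h_trim * c ** 2),  # 314
        (alpha * w_in * c + w_trim * c * s, w_trim * c * s),                                       # 315
        (alpha * w_in * c - w_trim * c ** 2, w_trim * c * s),                                      # 316
    ])

def parameters_are_usable(w_in, h_in, alpha, beta, theta, w_trim, h_trim):
    """Condition from the description: all eight vertices lie inside the transformed image."""
    s, c = np.sin(theta), np.cos(theta)
    w_t = alpha * w_in * c + beta * h_in * s   # width of the transformed image
    h_t = alpha * w_in * s + beta * h_in * c   # height of the transformed image
    v = extractable_vertices(w_in, h_in, alpha, beta, theta, w_trim, h_trim)
    return bool(np.all((v[:, 0] >= 0) & (v[:, 0] <= w_t) & (v[:, 1] >= 0) & (v[:, 1] <= h_t)))

# Redraw parameters until the condition holds (ranges follow the examples in the text:
# magnification about 0.8 to 1.2, rotation 0 to 45 degrees, extracted size about half the input).
rng = np.random.default_rng(0)
w_in = h_in = 2048
w_trim = h_trim = 1024
while True:
    alpha, beta = rng.uniform(0.8, 1.2, size=2)
    theta = np.deg2rad(rng.uniform(0.0, 45.0))
    if parameters_are_usable(w_in, h_in, alpha, beta, theta, w_trim, h_trim):
        break
```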
In step S303, the data augmentation unit 120 performs signal amount adjustment processing for the extracted image 307, and outputs an adjusted image. As the signal amount adjustment processing, the data augmentation unit 120 performs, for the extracted image 307, multiplication using an arbitrary coefficient and addition using an arbitrary coefficient. Letting Itrim be the extracted image 307, Iout be the adjusted image, and γ and δ be arbitrary coefficients, the relationship between the extracted image 307 (Itrim) and the adjusted image (Iout) can be represented by
I_{out} = \gamma I_{trim} + \delta  (10)
Here, the coefficient γ may be set to an arbitrary value of about 0.1 to 10, and the extracted image Itrim may be multiplied by it to uniformly increase or decrease the signal. Alternatively, a two-dimensional filter such as a Gaussian filter may be set and applied to the extracted image Itrim. As for the coefficient δ, a uniform value may be added or subtracted, or arbitrary random noise may be added for each pixel. When adding noise, noise according to the physical characteristics of the radiation detection apparatus 104 can also be added.
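A minimal sketch of step S303 based on equation (10) is shown below. The split between a uniform gain and a Gaussian filter, the concrete ranges, and the noise model are illustrative assumptions; only the overall form Iout = γ·Itrim + δ and the example coefficient range of about 0.1 to 10 come from the description.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adjust_signal(i_trim, rng):
    """Signal amount adjustment (equation (10)): either a uniform gain gamma or a
    two-dimensional filter, followed by an additive term delta that is either a
    uniform offset or per-pixel random noise."""
    i_trim = i_trim.astype(np.float32)
    if rng.random() < 0.5:
        out = rng.uniform(0.1, 10.0) * i_trim                       # uniform gain gamma
    else:
        out = gaussian_filter(i_trim, sigma=rng.uniform(0.5, 2.0))  # 2-D filter variant
    if rng.random() < 0.5:
        delta = rng.uniform(-50.0, 50.0)                            # uniform offset
    else:
        delta = rng.normal(0.0, 5.0, size=i_trim.shape)             # per-pixel random noise
    return out + delta

adjusted = adjust_signal(np.ones((1024, 1024)), np.random.default_rng(0))
```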
Note that the flowchart of FIG. 3A shows an example in which steps S301 to S303 are sequentially processed. However, the steps need not be performed in this order. Only some of the processes may be performed, or the order of the processes may arbitrarily be changed. In addition, another arbitrary data augmentation method may be used as long as a region where image information is defective is not newly generated by data augmentation.
For example, if affine transform processing of step S301 is not performed, to prevent the defect region 305 from being included in the extracted image 307, the data augmentation unit 120 can set the extractable region 317 in the input image to limit the range to acquire the extracted image 307.
That is, the data augmentation unit 120 performs data augmentation by generating the extracted image 307 that extracts a part of the input image 301 constituting training data, and limits the range to acquire the extracted image 307 such that the region (defect region 305) where image information is defective is not included in the extracted image 307. The data augmentation unit 120 sets the extractable region 317 (FIG. 3C) in the input image 301, and limits the range to acquire the extracted image 307.
As for the processing of the inference unit 110 in the machine learning apparatus 108, a case in which a convolutional neural network (CNN) is used will be described next as an example with reference to FIG. 4 . FIG. 4 is a view schematically showing the concept of inference of the inference unit 110.
The inference unit 110 is an inference unit learned by the learning unit 109, and can perform inference processing based on learned parameters acquired based on learning. The inference unit 110 includes a learned convolutional neural network (CNN) 402 having a learned parameter group obtained by the learning unit 109. The inference unit 110 applies inference processing by the learned CNN 402 to an input image 401 input to the inference unit 110, and outputs an inference result 403.
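The following is a generic sketch of this inference step, assuming a PyTorch-style segmentation model; the model interface, the sigmoid output, and the 0.5 threshold are assumptions, since the embodiment only requires a CNN with a learned parameter group.

```python
import numpy as np
import torch

def infer_irradiation_field(model, input_image):
    """Apply a learned segmentation CNN to one grayscale image and threshold its
    output into a binary mask of the inferred region."""
    model.eval()
    x = torch.from_numpy(input_image.astype(np.float32))[None, None]  # shape (1, 1, H, W)
    with torch.no_grad():
        logits = model(x)
    return (torch.sigmoid(logits)[0, 0] > 0.5).cpu().numpy()
```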
Note that for learning in the machine learning apparatus 108, it is preferable that learning is performed before introduction to the use environment of the user and that the parameter group of the learned CNN 402 is obtained in advance. However, it is also possible to update the machine learning apparatus 108 in accordance with the use situation after introduction to the use environment of the user. In this case, a set of an image acquired in the use environment of the user and the data of the corresponding irradiation field is stored as training data in the storage device 115.
Using the data sets stored in the storage device 115 as new training data, the learning unit 109 of the machine learning apparatus 108 can perform additional learning and update the parameter group of the learned CNN 402. In the use environment of the user, the additionally trained inference unit 110 can perform inference processing based on the result of learning to which a set of an image captured using the radiation imaging system 100 and the data of the irradiation field corresponding to the image has been added as training data, together with the result of the learning performed in advance.
As for the timing of additional learning, the learning unit 109 can execute additional learning, for example, when a predetermined number or more of data sets are accumulated in the storage device 115, or when a predetermined number or more of data sets in which the irradiation field recognition results have been corrected by the user are accumulated. In addition, the parameter group of the learned CNN 402 used before the additional learning may be set as the initial value of the parameter group of the CNN for the additional learning, so that transfer learning is performed.
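As a simple illustration of such a trigger, the check below runs additional learning once either count exceeds a threshold and initializes from the previously learned parameter group; the function names, thresholds, and bookkeeping structure are hypothetical, not values from the embodiment.

```python
def maybe_run_additional_learning(stored_sets, corrected_sets, learned_params,
                                  train_fn, min_new=1000, min_corrected=50):
    """Run additional learning when enough new or user-corrected data sets have
    accumulated, starting from the previously learned parameter group
    (transfer learning).  train_fn(initial_params, data) -> updated_params."""
    if len(stored_sets) >= min_new or len(corrected_sets) >= min_corrected:
        return train_fn(learned_params, stored_sets + corrected_sets)
    return learned_params  # keep the current parameter group unchanged
```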
Note that the storage device 115 and the machine learning apparatus 108 need not always be mounted on the information processing apparatus 107, and the storage device 115 and the machine learning apparatus 108 may be provided on a cloud server connected via a network. In this case, data sets obtained by a plurality of radiation imaging systems 100 may be collected/stored on the cloud server, and the machine learning apparatus 108 may perform additional learning using the data set collected/stored on the cloud server.
As described above, according to this embodiment, it is possible to provide a machine learning technique capable of more accurately extracting a region by performing appropriate data augmentation for training data used in learning.
According to the present invention, it is possible to provide a machine learning technique capable of more accurately extracting a region by performing appropriate data augmentation for training data used in learning.
Other Embodiments
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Claims (29)

What is claimed is:
1. A machine learning apparatus for extracting a region from an input image, comprising:
at least one of (a) one or more processors connected to one or more memories storing a program including instructions executed by the one or more processors and (b) circuitry configured to function as:
an inference unit configured to output the region by inference processing for the input image; and
an augmentation unit configured to, in learning when learning of the inference unit is performed based on training data that is constituted by a set of the input image and ground truth data corresponding to the input image and representing an extraction region, perform data augmentation by increasing the number of input images constituting the training data,
wherein the augmentation unit performs the data augmentation such that a region where image information held by the input image is defective is not included.
2. The machine learning apparatus according to claim 1, wherein
the augmentation unit performs the data augmentation for the training data by at least one augmentation processing of affine transform processing, extraction processing, and signal amount adjustment processing, and
the augmentation unit performs the same augmentation processing for the input image and the ground truth data.
3. The machine learning apparatus according to claim 2, wherein
the augmentation unit performs the data augmentation by generating an extracted image that extracts a part of the input image constituting the training data, and
a range to acquire the extracted image is limited such that the region where the image information is defective is not included in the extracted image.
4. The machine learning apparatus according to claim 3, wherein the augmentation unit sets an extractable region in the input image and limits the range to acquire the extracted image.
5. The machine learning apparatus according to claim 3, wherein as the signal amount adjustment processing, the augmentation unit performs, for the extracted image, multiplication using an arbitrary coefficient and addition using an arbitrary coefficient.
6. The machine learning apparatus according to claim 2, wherein
the augmentation unit performs the data augmentation by generating an extracted image that extracts a part of a transformed image obtained by affine transform of the input image constituting the training data, and
a range to acquire the extracted image is limited such that the region where the image information is defective is not included in the extracted image.
7. The machine learning apparatus according to claim 6, wherein the augmentation unit sets an extractable region in the transformed image and limits the range to acquire the extracted image.
8. The machine learning apparatus according to claim 7, wherein the augmentation unit sets the extractable region in accordance with a rotation angle of the input image in the affine transform.
9. The machine learning apparatus according to claim 6, wherein the augmentation unit sets a parameter representing a magnification factor of the input image in accordance with the rotation angle of the input image in the affine transform.
10. The machine learning apparatus according to claim 9, wherein the augmentation unit sets the rotation angle and the parameter representing the magnification factor of the input image such that a part of the input image is not made defective by the affine transform.
11. The machine learning apparatus according to claim 1, wherein the ground truth data is a labeling image formed by labeling the extraction region in the input image using an arbitrary value.
12. The machine learning apparatus according to claim 1, wherein the ground truth data is coordinate data representing the extraction region in the input image by coordinates.
13. The machine learning apparatus according to claim 1, wherein the ground truth data is data that specifies a boundary of the extraction region in the input image by a line or a curve.
14. The machine learning apparatus according to claim 1, wherein the inference unit performs the inference processing based on a learned parameter acquired based on the learning.
15. The machine learning apparatus according to claim 1, wherein the machine learning apparatus extracts the region from the input image based on supervised learning using a convolutional neural network.
16. The machine learning apparatus according to claim 1, wherein the input image is an image captured using a radiation imaging system, and
the region is an irradiation field irradiated with radiation by the radiation imaging system.
17. The machine learning apparatus according to claim 16, wherein in a use environment of a user, the inference unit performs the inference processing based on a result of learning to which a set of the image captured using the radiation imaging system and data of the irradiation field corresponding to the image is added as training data, and a result of learning performed in advance.
18. A radiation imaging system comprising:
the machine learning apparatus according to claim 1; and
a radiation detection apparatus that detects radiation, wherein the radiation detection apparatus is communicatively connected to the machine learning apparatus.
19. A machine learning method by a machine learning apparatus including an inference unit configured to output a region by inference processing for an input image and configured to extract the region from the input image, comprising
performing, in learning when learning of the inference unit is performed based on training data that is constituted by a set of the input image and ground truth data corresponding to the input image and representing an extraction region, data augmentation by increasing the number of input images constituting the training data,
wherein the data augmentation is performed such that a region where image information held by the input image is defective is not included.
20. A non-transitory computer-readable storage medium storing a program for causing a computer to execute the machine learning method according to claim 19.
21. A machine learning apparatus for extracting a region that is an irradiation field irradiated with radiation by a radiation imaging system from an input image that is an image captured using the radiation imaging system, the machine learning apparatus comprising:
at least one of (a) one or more processors connected to one or more memories storing a program including instructions executed by the one or more processors and (b) circuitry configured to function as:
an inference unit configured to output the region by inference processing for the input image; and
an augmentation unit configured to, in learning when learning of the inference unit is performed based on training data, perform data augmentation by increasing the number of input images constituting the training data,
wherein the augmentation unit performs the data augmentation such that a region where image information held by the input image is defective is not included.
22. A radiation imaging system comprising:
the machine learning apparatus according to claim 21; and
a radiation detection apparatus that detects radiation, wherein the radiation detection apparatus is communicatively connected to the machine learning apparatus.
23. A machine learning method by a machine learning apparatus including an inference unit configured to output a region that is an irradiation field irradiated with radiation by a radiation imaging system by inference processing for an input image that is an image captured using the radiation imaging system and configured to extract the region from the input image, comprising
performing, in learning when learning of the inference unit is performed based on training data that is constituted by a set of the input image and ground truth data corresponding to the input image and representing an extraction region, data augmentation by increasing the number of input images constituting the training data,
wherein the data augmentation is performed such that a region where image information held by the input image is defective is not included.
24. A non-transitory computer-readable storage medium storing a program for causing a computer to execute the machine learning method according to claim 23.
25. An information processing apparatus comprising:
at least one of (a) one or more processors connected to one or more memories storing a program including instructions executed by the one or more processors and (b) circuitry configured to function as:
an inference unit configured to perform inference processing on an input image using a parameter obtained by learning using training data that is constituted by input images and ground truth data corresponding to the input image, wherein the input images are obtained by an augmentation process of extracting a part of an image obtained by a transform process including a rotation process.
26. The information processing apparatus according to claim 25, wherein the inference unit outputs a region that is an irradiation field irradiated with radiation by performing the inference processing.
27. A radiation imaging system comprising:
the information processing apparatus according to claim 25; and
a radiation detection apparatus that detects radiation, wherein the radiation detection apparatus is communicatively connected to the information processing apparatus.
28. An information processing method comprising:
performing inference processing on an input image using a parameter obtained by learning using training data that is constituted by input images and ground truth data corresponding to the input image, wherein the input images are obtained by an augmentation process of extracting a part of an image obtained by a transform process including a rotation process.
29. A non-transitory computer-readable storage medium storing a program for causing a computer to execute the information processing method according to claim 28.
US17/675,071 2019-08-30 2022-02-18 Machine learning apparatus, machine learning method, and non-transitory computer-readable storage medium Active US11823441B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2019-158927 2019-08-30
JP2019158927A JP7497145B2 (en) 2019-08-30 2019-08-30 Machine learning device, machine learning method and program, information processing device, and radiation imaging system
PCT/JP2020/028193 WO2021039211A1 (en) 2019-08-30 2020-07-21 Machine learning device, machine learning method, and program

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/028193 Continuation WO2021039211A1 (en) 2019-08-30 2020-07-21 Machine learning device, machine learning method, and program

Publications (2)

Publication Number Publication Date
US20220172461A1 US20220172461A1 (en) 2022-06-02
US11823441B2 true US11823441B2 (en) 2023-11-21

Family

ID=74684475

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/675,071 Active US11823441B2 (en) 2019-08-30 2022-02-18 Machine learning apparatus, machine learning method, and non-transitory computer-readable storage medium

Country Status (3)

Country Link
US (1) US11823441B2 (en)
JP (1) JP7497145B2 (en)
WO (1) WO2021039211A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7497145B2 (en) * 2019-08-30 2024-06-10 キヤノン株式会社 Machine learning device, machine learning method and program, information processing device, and radiation imaging system

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04261649A (en) 1990-08-20 1992-09-17 Fuji Photo Film Co Ltd Method and apparatus for analyzing radiation image
US9418417B2 (en) 2013-06-06 2016-08-16 Canon Kabushiki Kaisha Image processing apparatus, tomography apparatus, image processing method, and storage medium
US9813647B2 (en) 2015-06-12 2017-11-07 Canon Kabushiki Kaisha Image processing apparatus, radiation imaging apparatus, image processing method, and storage medium
US9979911B2 (en) 2015-06-12 2018-05-22 Canon Kabushiki Kaisha Image processing apparatus, radiation imaging apparatus, image processing method, and storage medium for dark correction
JP2017185007A (en) 2016-04-05 2017-10-12 株式会社島津製作所 Radiographic apparatus, radiation image object detection program, and object detection method in radiation image
US20180107928A1 (en) 2016-10-14 2018-04-19 Kla-Tencor Corporation Diagnostic systems and methods for deep learning models configured for semiconductor applications
US20180190377A1 (en) * 2016-12-30 2018-07-05 Dirk Schneemann, LLC Modeling and learning character traits and medical condition based on 3d facial features
US20190065884A1 (en) 2017-08-22 2019-02-28 Boe Technology Group Co., Ltd. Training method and device of neural network for medical image processing, and medical image processing method and device
US10997716B2 (en) * 2017-10-09 2021-05-04 The Board Of Trustees Of The Leland Stanford Junior University Contrast dose reduction for medical imaging using deep learning
JP2019076699A (en) 2017-10-26 2019-05-23 株式会社日立製作所 Nodule detection with false positive reduction
US10872409B2 (en) * 2018-02-07 2020-12-22 Analogic Corporation Visual augmentation of regions within images
US11599788B2 (en) * 2018-06-13 2023-03-07 Idemia Identity Y & Security France Parameter training method for a convolutional neural network
US20200019760A1 (en) * 2018-07-16 2020-01-16 Alibaba Group Holding Limited Three-dimensional living-body face detection method, face authentication recognition method, and apparatuses
JP2020014799A (en) 2018-07-27 2020-01-30 コニカミノルタ株式会社 X-ray image object recognition system
US20220058423A1 (en) 2018-08-14 2022-02-24 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium for extracting an irradiation field of a radiograph
US20200057907A1 (en) 2018-08-14 2020-02-20 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
US11321610B2 (en) * 2018-08-22 2022-05-03 Verizon Patent And Licensing Inc. Rehearsal network for generalized learning
US11276490B2 (en) * 2019-04-16 2022-03-15 Seoul Women's University Industry-University Cooperation Foundation Method and apparatus for classification of lesion based on learning data applying one or more augmentation methods in lesion information augmented patch of medical image
US11327497B2 (en) * 2019-06-21 2022-05-10 Volkswagen Ag Autonomous transportation vehicle image augmentation
US11340700B2 (en) * 2019-08-26 2022-05-24 Samsung Electronics Co., Ltd. Method and apparatus with image augmentation
US20220172461A1 (en) * 2019-08-30 2022-06-02 Canon Kabushiki Kaisha Machine learning apparatus, machine learning method, and non-transitory computer-readable storage medium
US11461537B2 (en) * 2019-11-13 2022-10-04 Salesforce, Inc. Systems and methods of data augmentation for pre-trained embeddings
US20220196463A1 (en) * 2020-12-22 2022-06-23 Nec Laboratories America, Inc Distributed Intelligent SNAP Informatics

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Amano Toshiyuki et al., "Image Interpolation Using BPLP Method on the Eigenspace" Proceedings of IEICE, D-11 (Mar. 2002) pp. 457-465, vol. J85-D-II, No. 3, with English translation.
Do, Nt et al., "Knee Bone Tumor Segmentation from radiographs using Seg / Unet with Dice Loss" Research Gate (Feb. 2019) pp. 1-6.
International Search Report issued by the Japan Patent Office dated Oct. 20, 2020 in corresponding International Application No. PCT/JP2020/028193, with English translation.
Jang, Y. et al., "Estimating Compressive Strength of Concrete Using Deep Convolutional Neural Networks with Digital Microscope Images" Research Gate (May 2019) pp. 1-12.
Notice of Reasons for Refusal issued by the Japanese Patent Office dated Oct. 6, 2023 in corresponding JP Patent Application No. 2019-158927, with English translation.
Stack Overflow, "Keras ImageDataGenerator with center crop for rotation and translation shift" (May 2019) https://stackoverflow.com/questions/56254393/keras-imagedatagenerator-with-center-crop-for-rotation-and-translation-shift, pp. 1-3.

Also Published As

Publication number Publication date
US20220172461A1 (en) 2022-06-02
JP2021036969A (en) 2021-03-11
WO2021039211A1 (en) 2021-03-04
JP7497145B2 (en) 2024-06-10

Similar Documents

Publication Publication Date Title
US11295158B2 (en) Image processing apparatus, image processing method, and storage medium for extracting an irradiation field of a radiograph
CN107516330B (en) Model generation method, image processing method and medical imaging equipment
CN107595312B (en) Model generation method, image processing method and medical imaging equipment
US9820713B2 (en) Image processing apparatus, radiation imaging system, control method, and storage medium
EP2824638B1 (en) Image processing apparatus and image processing method
US11645736B2 (en) Image processing methods, apparatuses and systems
WO2005110232A1 (en) Image processing device and method thereof
JP2011028588A (en) Information processing apparatus, line noise reduction processing method, and program
US20170178325A1 (en) Method and apparatus for restoring image
EP3631763B1 (en) Method and devices for image reconstruction
JP2010054356A (en) Image processor and x-ray foreign matter detector having the same, and image processing method
US11823441B2 (en) Machine learning apparatus, machine learning method, and non-transitory computer-readable storage medium
CN105324081B (en) Radiation image generating means and image processing method
US10311568B2 (en) Image processing apparatus, control method thereof, and computer-readable storage medium
US20220175331A1 (en) Image processing apparatus, radiation imaging system, image processing method, and non-transitory computer-readable storage medium
JP2014176565A (en) Image processor, radiographic apparatus, image processing method, computer program and recording medium
US9594032B2 (en) Extended field iterative reconstruction technique (EFIRT) for correlated noise removal
JP7277536B2 (en) Image processing device, image processing method, and program
JP5855210B2 (en) Information processing apparatus, line noise reduction processing method, and program
US20230169757A1 (en) Information processing apparatus, method, and storage medium to generate a likelihood map
CN117541481B (en) Low-dose CT image restoration method, system and storage medium
JP6570594B2 (en) Image processing apparatus, radiation imaging apparatus, image processing method, computer program, and storage medium
JP2023100836A (en) Image processing device, image processing method and program
CN115409720A (en) X-ray image restoration method and device

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: CANON KABUSHIKI KAISHA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KOBAYASHI, TSUYOSHI;REEL/FRAME:059179/0766

Effective date: 20220218

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE