EP2693402B1 - Image processing apparatus, image processing method, program, and storage medium - Google Patents
Image processing apparatus, image processing method, program, and storage medium
- Publication number
- EP2693402B1 (application EP20130179484 / EP13179484A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- region
- feature amount
- image
- pixel value
- lesion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000012545 processing Methods 0.000 title claims description 91
- 238000003672 processing method Methods 0.000 title claims description 4
- 238000000605 extraction Methods 0.000 claims description 58
- 239000005337 ground glass Substances 0.000 claims description 34
- 230000003902 lesion Effects 0.000 claims description 33
- 238000000034 method Methods 0.000 claims description 19
- 230000008859 change Effects 0.000 claims description 15
- 239000000284 extract Substances 0.000 claims description 13
- 238000006243 chemical reaction Methods 0.000 description 12
- 239000007787 solid Substances 0.000 description 11
- 230000006870 function Effects 0.000 description 10
- 238000003384 imaging method Methods 0.000 description 7
- 210000004072 lung Anatomy 0.000 description 6
- 210000000779 thoracic wall Anatomy 0.000 description 6
- 210000004204 blood vessel Anatomy 0.000 description 5
- 239000000470 constituent Substances 0.000 description 5
- 238000010586 diagram Methods 0.000 description 5
- 238000010606 normalization Methods 0.000 description 5
- 241001270131 Agaricus moelleri Species 0.000 description 4
- 238000004195 computer-aided diagnosis Methods 0.000 description 4
- 239000011159 matrix material Substances 0.000 description 4
- 230000011218 segmentation Effects 0.000 description 4
- 238000004364 calculation method Methods 0.000 description 3
- 238000002591 computed tomography Methods 0.000 description 3
- 210000000038 chest Anatomy 0.000 description 2
- 238000006073 displacement reaction Methods 0.000 description 2
- 230000003211 malignant effect Effects 0.000 description 2
- 230000008569 process Effects 0.000 description 2
- 230000002685 pulmonary effect Effects 0.000 description 2
- 230000005855 radiation Effects 0.000 description 2
- 230000002123 temporal effect Effects 0.000 description 2
- 206010019695 Hepatic neoplasm Diseases 0.000 description 1
- 206010058467 Lung neoplasm malignant Diseases 0.000 description 1
- 206010028980 Neoplasm Diseases 0.000 description 1
- 208000009956 adenocarcinoma Diseases 0.000 description 1
- 201000011510 cancer Diseases 0.000 description 1
- 238000012512 characterization method Methods 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 230000007423 decrease Effects 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000003748 differential diagnosis Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000010191 image analysis Methods 0.000 description 1
- 230000010365 information processing Effects 0.000 description 1
- 230000002452 interceptive effect Effects 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 208000014018 liver neoplasm Diseases 0.000 description 1
- 201000005202 lung cancer Diseases 0.000 description 1
- 208000020816 lung neoplasm Diseases 0.000 description 1
- 230000036210 malignancy Effects 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 230000001394 metastastic effect Effects 0.000 description 1
- 206010061289 metastatic neoplasm Diseases 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 238000002601 radiography Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000000926 separation method Methods 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20072—Graph-based image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
- G06T2207/30064—Lung nodule
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Definitions
- the present invention relates to an image processing technique of acquiring a lesion region from an image.
- CAD Computer Aided Diagnosis
- The grade of malignancy of a pulmonary nodule is determined based on, for example, the shape features of the nodule, so accurate extraction of the nodule region is important for differential diagnosis (CADx) by a computer.
- Fig. 13 illustrates an example of pulmonary nodule images. The two left views show solid nodules, and the two right views show nodules having ground glass opacities (to be also sometimes referred to as "GGOs (Ground Glass Opacities)" hereinafter).
- GGO Ground Glass Opacities
- A GGO nodule, which has a ground glass opacity and is expected to be malignant with high probability, has a vague boundary, so highly accurate region extraction is difficult.
- non-patent literature 1 proposes a method of approximating a GGO region by anisotropic Gaussian fitting.
- non-patent literature 2 shows a method of experimentally obtaining the density ranges of a substantial portion and GGO region from the AUC value of an ROC curve, and performing segmentation of the GGO region and substantial portion by threshold processing.
- a nodule region is approximated not for each pixel but as an ellipsoid. This operation is useful in, for example, deriving a rough temporal change rate of the nodule size, while information associated with the detailed shape cannot be obtained.
- Non-patent Literature 1 K. Okada: Ground-Glass Nodule Characterization in High-Resolution CT Scans. In Lung Imaging and Computer Aided Diagnosis, Taylor and Francis, LLC, 2011
- Non-patent Literature 2 T. Okada, S. Iwano, T. Ishigaki, et al: Computer-aided diagnosis of lung cancer: definition and detection of ground-glass opacity type of nodules by high-resolution computed tomography. Japan Radiological Society, 27:91-99, 2009
- Non-patent Literature 3 Y. Boykov, M.P. Jolly: Interactive Graph Cuts for Optimal Boundary & Region Segmentation of Objects in N-D Images. In IEEE Int. Conf. on Computer Vision, 1:105-112, 2001
- Non-patent Literature 4 H. Ishikawa: Graph Cuts. Research Report by Information Processing Society of Japan, CVIM, 158: 193 - 204, 2007
- Non-patent Literature 5 M. Takagi, H. Shimoda: Image Analysis Handbook, New Edition, Tokyo University Press, Tokyo, 2004, 1260 - 1265
- Non-patent Literature 6 H. Kanamori, N. Murata: Commentary of Boosting and Its Increase in Robustness. The Journal of the Institute of Electronics, Information and Communication Engineers, 86, 10: 769 - 772, 2003
- Non-patent Literature 7 T. Narihira, A. Shimizu, H. Kobatake, et al: Boosting algorithms for segmentation of metastatic liver tumors in contrast-enhanced computed tomography. Int. J. CARS 2009, 4: S318, 2009
- US2006/153451 discloses the normalization of the intensity of each voxel within a volume of interest by rescaling to a predetermined intensity range.
- the present invention has been made in consideration of the above-mentioned problems, and provides an image processing technique of accurately extracting a lesion with a light shade.
- the present invention in its first aspect provides an image processing apparatus as specified in claims 1 to 8.
- the present invention in its second aspect provides an image processing method as specified in claim 9.
- the present invention in its third aspect provides a program as specified in claim 10.
- the present invention in its fourth aspect provides a computer-readable storage medium as specified in claim 11.
- Fig. 1 is a block diagram illustrating an example of the configuration of an image processing system including an image processing apparatus 1, and an imaging apparatus 100 connected to it according to the first embodiment.
- the image processing apparatus 1 can be implemented by, for example, a personal computer (PC), and includes a central processing unit (CPU) 10, main memory 11, magnetic disk 12, display memory 13, monitor 14, mouse 15, and keyboard 16.
- CPU central processing unit
- the CPU 10 mainly controls the operation of each constituent element of the image processing apparatus 1.
- the main memory 11 stores a control program to be executed by the CPU 10, and provides a work area in program execution by the CPU 10.
- the magnetic disk 12 stores, for example, various types of application software including an operating system (OS), device drivers of peripheral devices, and a program for executing, for example, deformation estimation processing (to be described later).
- the display memory 13 temporarily stores display data for the monitor 14.
- the monitor 14 is, for example, a CRT monitor or a liquid crystal monitor, and displays an image based on the data from the display memory 13.
- the mouse 15 and keyboard 16 are used to perform pointing input and input of, for example, texts by the user.
- the above-mentioned constituent elements are communicably connected to each other via a common bus 17.
- the image processing apparatus 1 is connected to the imaging apparatus 100 via a local area network (LAN), and can acquire image data from the imaging apparatus 100.
- LAN local area network
- the devices in the image processing apparatus 1 and the imaging apparatus 100 may be connected to each other via a USB or another interface such as IEEE1394.
- the image processing apparatus 1 may be configured to read, via, for example, a LAN, necessary data from a data server which manages these data.
- the image processing apparatus 1 may be connected to a storage device such as an FDD, a CD-RW drive, an MO drive, or a ZIP drive, and read necessary data from these drives.
- the imaging apparatus 100 uses, for example, CT, MRI, or digital radiography in which a two-dimensional radiation image is captured.
- CT will be taken as an example hereinafter.
- Fig. 2 shows the functional configuration of the image processing apparatus 1 according to this embodiment.
- This embodiment exemplifies a three-dimensional chest image captured by CT.
- this configuration is also applicable to, for example, MRI or a two-dimensional radiation image.
- A chest wall information acquisition unit 110 shown in Fig. 2 performs lung field extraction processing on the three-dimensional chest image to obtain chest wall information.
- a VOI acquisition unit 120 shown in Fig. 2 acquires, from an image of an object, a VOI as an image region of interest, obtained by removing a region other than the lung field, using chest wall information.
- a selection unit 130 selects the type of lesion.
- "Mixed GGO” is the second nodule from the right end of Fig. 13 , and is a nodule having a substantial portion and ground glass opacity which form a core serving as a high pixel value region (to be also sometimes referred to as “Mixed GGO” hereinafter).
- the nodule at the right end of Fig. 13 is a nodule having a main nodule region formed by a ground glass opacity region (to be referred to as “Pure GGO” hereinafter).
- the selection unit 130 selects whether the type of nodule is "Mixed GGO", “Pure GGO", or "Solid nodule” as a lesion other than a GGO.
- a first processing unit 200 performs extraction processing of a lesion region when "Mixed GGO” or “Pure GGO” is selected. Also, a second processing unit 300 performs extraction processing of a lesion region when "Solid nodule” as a lesion other than a GGO is selected.
- In step S1100, the chest wall information acquisition unit 110 extracts a lung field region using the technique described in non-patent literature 6.
- the chest wall information acquisition unit 110 stores the position of the outer wall of the lung field region in the main memory 11 as coordinate information.
- In step S1101, the VOI acquisition unit 120 acquires a rectangular parallelepiped surrounding a nodule as a VOI while looking up the Axial, Sagittal, and Coronal cross-sectional images.
- the VOI acquisition unit 120 extracts a rough isolated shadow. This extraction is merely extraction of a rough region, and is not highly accurate extraction of the contour of a lesion.
- This VOI may be extracted automatically, or input manually via the mouse 15 while looking up the Axial, Sagittal, and Coronal cross-sectional images displayed on the monitor 14. As shown by the rectangular frames in Fig. 4, the length of one side is set to about twice the average diameter of the nodule.
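- As an illustration of this step only, the following is a minimal sketch (not the patented implementation) of cropping a cubic VOI whose side is about twice the average nodule diameter; the input volume, the center coordinates, and the diameter are assumed to be given.

```python
import numpy as np

def crop_voi(volume, center, nodule_diameter_vox):
    """Crop a cubic VOI whose side is about twice the average nodule diameter.

    volume: 3D numpy array (z, y, x) of CT values.
    center: (z, y, x) voxel coordinates of the nodule center (assumed known).
    nodule_diameter_vox: average nodule diameter in voxels (assumed known).
    """
    side = int(round(2 * nodule_diameter_vox))  # side length ~ 2x average diameter
    half = side // 2
    lo = [max(0, c - half) for c in center]                        # clip at the image border
    hi = [min(s, c + half) for c, s in zip(center, volume.shape)]
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
```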
- In step S1103, the first processing unit 200 performs extraction processing of a lesion region when "Mixed GGO" or "Pure GGO" is selected.
- "Mixed GGO” formed by a ground glass opacity region, and a substantial region (to be also sometimes referred to as the "core portion” hereinafter) that forms the core have large differences in property within a region included in the nodule, in terms of both the density and texture.
- the range of the pixel value of the ground glass opacity region has a large variation in each individual nodule.
- The first processing unit 200 obtains a texture feature amount for each pixel from an image obtained by changing the pixel value range of the candidate regions for ground glass opacity regions (that is, regions other than high pixel value regions such as the core and blood vessel regions, and other than the background region) to a predetermined pixel value range.
- the first processing unit 200 performs enhancement processing of obtaining an output value for each pixel by first conversion processing based on a plurality of texture feature amounts.
- Enhancement processing here means, for example, processing that assigns to pixels in the ground glass opacity region numerical values larger than those of pixels in the remaining region. This makes it easy to distinguish a ground glass opacity region from the remaining region.
- The first conversion processing can be done using a function that non-linearly transforms a plurality of input values into a single output value. Such a function is set so as to associate input values with output values, and the process of constructing it will be referred to as learning hereinafter.
- the first processing unit 200 can obtain ground glass opacity region information of "Mixed GGO" with high resolution based on the output value obtained by the first conversion processing.
- the first processing unit 200 extracts a "Pure GGO" region when "Pure GGO” is selected.
- a difference in processing from "Mixed GGO” lies in that second conversion processing for "Pure GGO" different from the first conversion processing is performed as conversion processing of performing enhancement processing.
- Enhancement processing here means, for example, processing that assigns to pixels in the ground glass opacity region numerical values larger than those of pixels in the remaining region.
- the first conversion processing is done using a function learned using a feature amount obtained from the ground glass opacity region of "Mixed GGO" of an image changed to a predetermined pixel value range.
- the second conversion processing is done using a function learned using a feature amount obtained from the ground glass opacity region of "Pure GGO" of an image changed to a predetermined pixel value range. This makes it easy to identify a ground glass opacity region from the remaining region.
- The first processing unit 200 obtains a texture feature amount for each pixel from an image obtained by changing the pixel value range of the candidate regions for ground glass opacity regions (that is, regions other than high pixel value regions such as the core and blood vessel regions, and other than the background region) to a predetermined pixel value range.
- the first processing unit 200 obtains an output value for each pixel by second conversion processing, based on the texture feature amount.
- the first processing unit 200 can obtain ground glass opacity region information of "Pure GGO" as well with high resolution based on the output value.
- ground glass opacity region information can be extracted with high resolution in the two lesions.
- the extraction of region information means herein obtaining position information associated with a region from image data. Also, information required to express a region, extracted as an image, on an image as a region different from the remaining region is also defined as region information. This makes it possible to obtain lesion contour information, area information, shape information, and information for an image change using the region information.
- the second processing unit 300 extracts a "Solid nodule” region if "Solid nodule” is selected as a lesion other than a GGO.
- the second processing unit 300 obtains a texture feature amount for each pixel from an image obtained by not changing the pixel value range of the image.
- the second processing unit 300 performs enhancement processing of obtaining an output value for each pixel by third conversion processing for another lesion, based on the texture feature amount.
- Enhancement processing here means, for example, processing that assigns to pixels in the "Solid nodule" region numerical values larger than those of pixels in the remaining region. This makes it easy to distinguish a nodule region from the remaining region.
- the second processing unit 300 can obtain "Solid nodule" region information as another lesion with high resolution based on the output value obtained by the third conversion processing.
- the third conversion processing is done using a function learned using a feature amount obtained from a "Solid nodule" region.
- Fig. 5 is a block diagram showing the configuration of the first processing unit 200.
- Fig. 6 is a flowchart showing the sequence of processing by the first processing unit 200.
- Fig. 7 shows a "Mixed GGO" image.
- Figs. 8 and 9 are views for explaining processing of dividing a density region.
- a region extraction unit 205 extracts low density regions (low pixel value regions) and high density regions (high pixel value regions) from region information obtained by a VOI acquisition unit 120.
- In the image, a low density region (low pixel value region) appears darker (closer to black) as its density decreases, and a high density region (high pixel value region) appears lighter (closer to white) as its density increases.
- A change unit 210 changes the pixel values of a candidate region for a ground glass opacity, i.e. a light shade, extracted by the region extraction unit 205 into a predetermined pixel value range.
- a feature amount extraction unit 220 obtains a feature amount from an image obtained by changing the pixel value range obtained by the region extraction unit 205, or an image obtained by not changing the pixel value. Also, the feature amount extraction unit 220 includes a first feature amount extraction unit 221 which obtains a feature amount from an image obtained by changing the pixel value range, and a second feature amount extraction unit 222 which obtains a second feature amount from an image obtained by not changing the pixel value range.
- In step S2000, the VOI acquisition unit 120 extracts a region roughly centered on the nodule, with a size of about twice the average nodule diameter.
- Region information is manually input via, for example, the mouse 15. Since the VOI is set to a size of about twice the average nodule diameter, the boundary of a GGO nodule lies close to a distance of R/2 (R is 1/2 of the side length of the VOI) from the center of the VOI, as shown in Fig. 8. Therefore, the region extraction unit 205 obtains I_GGO from the region that falls within a distance of R/4 from the center, and I_bkg from the region that falls outside a distance of 3R/4 from the center.
- In step S2010, the VOI is divided into annular regions (whose center coincides with that of the VOI) with a width of one pixel, and the average density (average pixel value) of each annular region is obtained.
- Fig. 9 illustrates an example of the average density of each annular region obtained for an image by threshold processing for the image shown in Fig. 7 .
- the abscissa indicates the distance from the center.
- a solid line (upper side) 910 on the high density side indicates the density before high density regions are removed, and a lower solid line 920 indicates the density after these regions are removed.
- The maximum value of the average density over the annular regions that fall within a distance of R/4 from the center is set to I_GGO.
- The minimum value of the average density over the annular regions that fall within a distance of 3R/4 to R from the center is set to the pixel value of the background region, I_bkg. Note that a region having a value of I_bkg or less is determined as a background region candidate.
- In step S2020, the region extraction unit 205 extracts the image region having values between I_bkg and I_GGO as a candidate region for a ground glass opacity region.
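- A simplified 2D sketch of steps S2010 to S2020 is given below, assuming a square VOI slice of side 2R centered on the nodule; the one-pixel-wide rings, the R/4 and 3R/4 radii, and the final thresholding follow the description above, while the removal of high density regions illustrated in Fig. 9 is omitted for brevity.

```python
import numpy as np

def ggo_candidate_mask(voi):
    """voi: 2D square array (side 2R) centered on the nodule."""
    n = voi.shape[0]
    R = n / 2.0
    yy, xx = np.indices(voi.shape)
    r = np.hypot(yy - R + 0.5, xx - R + 0.5)  # distance of each pixel from the VOI center

    # Average density (pixel value) of each one-pixel-wide annular region.
    ring_means = np.array([voi[(r >= d) & (r < d + 1)].mean() for d in range(int(R))])

    i_ggo = ring_means[: int(R / 4)].max()     # max average density within R/4 of the center
    i_bkg = ring_means[int(3 * R / 4):].min()  # min average density between 3R/4 and R

    # Pixels between the background level and the GGO level form the candidate region.
    candidate = (voi >= i_bkg) & (voi <= i_ggo)
    return candidate, i_ggo, i_bkg
```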
- Figs. 10A to 10C show images obtained by normalization processing for a nodule as change processing of the above-mentioned pixel value distribution.
- Figs. 10A and 10B illustrate examples of normalization processing of "Pure GGO”
- Fig. 10C illustrates an example of normalization processing of "Mixed GGO”.
- the change unit 210 reduces variations in density value of the GGO region and background region between images.
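- The change processing can be realized, for example, by linearly rescaling the candidate range [I_bkg, I_GGO] to a fixed target range; the sketch below assumes this simple linear form, which is not necessarily the exact normalization used in the embodiment.

```python
import numpy as np

def normalize_candidate_range(voi, i_bkg, i_ggo, out_lo=0.0, out_hi=1.0):
    """Map pixel values in [i_bkg, i_ggo] linearly onto a predetermined range.

    Values outside the candidate range are clipped, so the GGO candidate region
    of every nodule ends up in the same, comparable value range.
    """
    clipped = np.clip(voi.astype(np.float32), i_bkg, i_ggo)
    scale = (out_hi - out_lo) / max(i_ggo - i_bkg, 1e-6)
    return out_lo + (clipped - i_bkg) * scale
```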
- In step S2040, the feature amount extraction unit 220 (feature amount calculation unit) extracts a first texture feature amount as a first feature amount from each image.
- the first feature amount extraction unit 221 calculates a first feature amount from an image obtained by changing the pixel value distribution to a predetermined pixel value range.
- the first feature amount extraction unit 221 calculates a first feature amount from the VOI after the above-mentioned density normalization.
- Texture statistics are used as concrete feature amounts. For example, 15 types of Haralick texture statistics obtained from a co-occurrence matrix are adopted.
- Fig. 11 illustrates an example of the original CT image, and the texture feature amounts extracted by the feature amount extraction unit 220.
- A first feature amount is calculated for each pixel constituting the image data, from a predetermined range of pixels including that pixel.
- The co-occurrence matrix has variations of two types of gray scales (8 and 24 gray levels), two displacements (1 and 2 pixels), and two ROI sizes (3 × 3 × 3 and 7 × 7 × 7 pixels), and the feature amount extraction unit 220 performs the calculation for 18 directions.
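- As a rough illustration of such per-pixel texture feature amounts, the following sketch computes a few Haralick-type statistics from a gray-level co-occurrence matrix of a small 2D patch using scikit-image; the embodiment itself uses 15 statistics on 3D ROIs over 18 directions, which is not reproduced here.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_features(patch, levels=8, distance=1):
    """Compute a few co-occurrence-matrix statistics for one 2D patch.

    The patch is assumed to be normalized to [0, 1]; it is quantized to
    `levels` gray levels before the co-occurrence matrix is built.
    """
    q = np.clip((patch * levels).astype(np.uint8), 0, levels - 1)
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]  # four in-plane directions only
    glcm = graycomatrix(q, distances=[distance], angles=angles,
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p).mean() for p in props])
```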
- An identifying unit 230 obtains an enhanced image for each of the two regions independently.
- the identifying unit 230 includes a first identifying unit 231 corresponding to a ground glass opacity as a light shade, and a second identifying unit 232 corresponding to a substantial portion that forms the core.
- An identifier that is robust against outliers in the feature vector is used as the first identifying unit 231.
- The first identifying unit 231 is, for example, a low density identifier obtained by learning (to be described later) the low density region using MadaBoost; the low and high density regions are learned independently of each other. Since MadaBoost is a known technique described in, for example, non-patent literatures 6 and 7, a description thereof will not be given.
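- MadaBoost itself is rarely available off the shelf; purely to illustrate the learn-then-enhance flow, the sketch below substitutes scikit-learn's AdaBoost as a stand-in boosting identifier trained on per-voxel feature vectors (the feature matrices and correct-answer labels are assumed to come from the learning data described later).

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def train_identifier(features, labels):
    """features: (n_voxels, n_texture_features); labels: 1 for GGO, 0 otherwise."""
    clf = AdaBoostClassifier(n_estimators=100)  # stand-in for MadaBoost
    clf.fit(features, labels)
    return clf

def enhance(clf, feature_volume):
    """feature_volume: (Z, Y, X, n_features). Returns a per-voxel score map."""
    z, y, x, f = feature_volume.shape
    scores = clf.decision_function(feature_volume.reshape(-1, f))
    return scores.reshape(z, y, x)  # larger values indicate more GGO-like voxels
```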
- the first identifying unit 231 receives a feature amount obtained by the first feature amount extraction unit 221 to obtain an image having undergone region enhancement.
- the second feature amount extraction unit 222 calculates a second texture feature amount as a second feature amount from an image obtained by not changing the pixel value distribution.
- the second feature amount extraction unit 222 calculates a second texture feature amount from a VOI not to be normalized.
- 15 types of Haralick texture statistics obtained from a co-occurrence matrix are adopted.
- The co-occurrence matrix has variations of two types of gray scales (8 and 24 gray levels), two displacements (1 and 2 pixels), and two ROI sizes (3 × 3 × 3 and 7 × 7 × 7 pixels), and the second feature amount extraction unit 222 performs the calculation for 18 directions.
- Fig. 11 illustrates an example of the original CT image, and the texture feature amounts extracted by the feature amount extraction unit 220.
- the second identifying unit 232 is, for example, a high density identifier obtained by learning (to be described later) the respective regions of low and high densities, independently of each other, using MadaBoost robust against outliers for the feature vector.
- the second identifying unit 232 receives a feature amount obtained by the second feature amount extraction unit 222 to perform region enhancement.
- Figs. 12A and 12B illustrate examples of the enhancement results of the respective regions, that is, images in which the target regions are selectively enhanced.
- The extraction unit 240 can also obtain ground glass opacity region information, rougher than that obtained by graph cuts (to be described later), by threshold processing of the output value of the first identifying unit 231. Likewise, the extraction unit 240 (to be described later) can obtain core region information rougher than that obtained by graph cuts by threshold processing of the image enhanced by the second identifying unit 232.
- These extraction methods are useful in, for example, deriving a rough temporal change rate of the nodule size. Hence, these extraction methods can be used in accordance with the purpose of use while switching them with highly accurate extraction using graph cuts (to be described later).
- In step S2080, the maximum of the two enhancement results obtained by the first identifying unit 231 and the second identifying unit 232 is taken for each pixel to obtain a combined image as the enhancement result of the entire GGO nodule region.
- Fig. 12C illustrates an example of such a combined image.
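- The combination itself is a per-pixel maximum, as in this short numpy sketch (the two inputs are the assumed outputs of the first and second identifying units):

```python
import numpy as np

def combine_enhancements(enh_ggo, enh_core):
    """Per-pixel maximum of the two enhancement results (cf. Fig. 12C)."""
    return np.maximum(enh_ggo, enh_core)
```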
- the extraction unit 240 obtains a GGO region from the combined image.
- the extraction unit 240 performs, for example, region extraction processing using graph cuts. This processing is based on energy minimization. The likelihood of the region interior and the certainty of the boundary can be reflected with good balance to allow global energy optimization. Also, this processing is advantageous in terms of ease of extension to multidimensional data.
- the extraction accuracy can further be improved by setting an appropriate shape energy.
- the extraction unit 240 identifies a set of labels which minimize equation (2) as a nodule region obj and background region bkg.
- f_u is set to the result of multiplying the logarithm of the likelihood (a normal-distribution approximation) of the enhancement result B_u for each region by (-1).
- g_{u,v} is set as a function obtained by exponential transformation of the square of the difference in enhancement result between adjacent pixels.
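- Equation (2) is not reproduced on this page; a graph-cut energy consistent with the definitions of f_u and g_{u,v} above would take the standard form below, with λ balancing the region and boundary terms (this form is an assumption, not a quotation of the claims or description):

```latex
E(L) = \sum_{u \in \mathcal{P}} f_u(L_u)
     + \lambda \sum_{(u,v) \in \mathcal{N}} g_{u,v}\,\delta(L_u \neq L_v),
\qquad
f_u(L_u) = -\log p\left(B_u \mid L_u\right),
\qquad
g_{u,v} = \exp\!\left(-\frac{(B_u - B_v)^2}{2\sigma^2}\right)
```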
- The seed of the nodule (object) region is a voxel that falls within a distance of 0.15R (R is 1/2 of the maximum side length of the VOI) from the center of the VOI.
- The seed of the background is a voxel that falls within a distance of 0.1R from the boundary of the VOI. The thresholds for these distances were determined experimentally. With this operation, a label is assigned to each pixel of the image data, and the coordinate information of the pixels assigned the label of the nodule region obj is obtained as lesion region information. Processing the combined enhanced image by graph cuts in this way yields the GGO region information.
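- A compact sketch of such a seeded graph-cut step using the PyMaxflow library is given below; the energy terms and constants are simplified placeholders rather than the values of the embodiment (in particular, a constant smoothness weight replaces the exponential boundary term for brevity).

```python
import numpy as np
import maxflow  # PyMaxflow

def graph_cut_segment(enhanced, nodule_seeds, background_seeds, lam=1.0):
    """Binary segmentation of the combined enhanced image by s-t graph cuts.

    enhanced:         3D array of enhancement values, assumed scaled to [0, 1].
    nodule_seeds:     boolean mask of voxels forced to the nodule label (near the VOI center).
    background_seeds: boolean mask of voxels forced to the background label (near the VOI border).
    """
    g = maxflow.Graph[float]()
    node_ids = g.add_grid_nodes(enhanced.shape)

    # Boundary term (simplified): constant smoothness weight between grid neighbors.
    g.add_grid_edges(node_ids, weights=lam, symmetric=True)

    # Region term from the enhancement value, with hard constraints at the seeds.
    big = 1e9
    source_cap = enhanced.astype(float)        # affinity to the nodule label
    sink_cap = 1.0 - enhanced.astype(float)    # affinity to the background label
    source_cap[nodule_seeds] = big
    sink_cap[background_seeds] = big
    g.add_grid_tedges(node_ids, source_cap, sink_cap)

    g.maxflow()
    # Boolean label per voxel; check PyMaxflow's source/sink convention when interpreting it.
    return g.get_grid_segments(node_ids)
```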
- the learning uses data of only three Axial, Sagittal, and Coronal cross-sections that pass through the nodule center.
- a texture feature amount extracted by the feature amount extraction unit 220 is obtained for correct answer regions (three sections) of the GGO nodule.
- a feature amount is three-dimensionally measured using adjacent slice information.
- the measured data is used for learning using MadaBoost and graph cuts.
- MadaBoost learning is performed so as to minimize the loss on the training data.
- For graph cuts, the parameters are varied within a certain range to select the values that maximize the performance on the training data. Note that this parameter determination is done for each of the groups "Pure GGO", "Mixed GGO", and "Solid nodule" to build dedicated processing. That is, the boosting output used as input differs between groups, and optimum graph cut parameters are determined for each output.
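- A hedged sketch of this per-group parameter selection is given below: candidate graph-cut parameters are swept over a grid, and the setting that maximizes an overlap score against the correct-answer regions of the training data is kept. The scoring function, the parameter grid, and the segmentation routine signature are placeholders, not the values or interfaces used in the embodiment.

```python
import numpy as np
from itertools import product

def dice(a, b):
    """Overlap score between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)

def select_parameters(training_cases, segment_fn, lam_grid, sigma_grid):
    """training_cases: list of (enhanced_image, nodule_seeds, background_seeds, ground_truth).

    segment_fn(enhanced, nodule_seeds, background_seeds, lam, sigma) -> binary mask
    is an assumed segmentation routine. Returns the best (lam, sigma) pair.
    """
    best, best_score = None, -1.0
    for lam, sigma in product(lam_grid, sigma_grid):
        scores = [dice(segment_fn(e, ns, bs, lam, sigma), gt)
                  for e, ns, bs, gt in training_cases]
        if np.mean(scores) > best_score:
            best, best_score = (lam, sigma), float(np.mean(scores))
    return best
```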
- This embodiment thus provides a mechanism of accurately extracting a lesion having a light shade (for example, a ground glass opacity).
- aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiment(s), and by a method, the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiment(s).
- the program is provided to the computer for example via a network or from a recording medium of various types serving as the memory device (for example, computer-readable medium).
Landscapes
- Engineering & Computer Science (AREA)
- Quality & Reliability (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Analysis (AREA)
Claims (11)
- An image processing apparatus (200) which extracts a lesion having a ground glass opacity and a core region from an image, the apparatus comprising: a change means (220) for changing a pixel value range of a candidate region for the ground glass opacity into a predetermined pixel value range, the candidate region for the ground glass opacity being an image region formed by image pixels having values between a low background pixel value and a high pixel value; a first feature amount extraction means (221) for obtaining a first feature amount from the image whose pixel value range is changed by the change means; a second feature amount extraction means (222) for obtaining a second feature amount from the image before the pixel value range is changed; and an extraction means (240) for extracting the lesion from the image based on the first feature amount and the second feature amount, wherein the extraction means comprises a first identifying means (231) for outputting a first value corresponding to a ground glass opacity based on the first feature amount, and a second identifying means (232) for outputting a second value corresponding to a core region based on the second feature amount, and wherein the extraction means is configured to generate an enhanced image based on the first value and the second value and to extract the lesion from the enhanced image.
- The apparatus according to claim 1, wherein the change means extracts the candidate region based on an image region having a high pixel value and a background region having a pixel value lower than that of the high pixel value region.
- The apparatus according to claim 2, wherein the first feature amount includes a plurality of texture feature amounts which are calculated for each pixel from the region whose pixel value is changed and are obtained based on statistics of the pixel values in a predetermined range including the pixel, and the second feature amount includes a plurality of texture feature amounts which are calculated for each pixel from the high pixel value region and are obtained based on statistics of the pixel values in a predetermined range including the pixel.
- The apparatus according to claim 3, wherein the first feature amount and the second feature amount include Haralick texture statistics.
- The apparatus according to claim 1, wherein the first identifying means learns in advance, as a correct answer, the feature amount calculated from a ground glass opacity region.
- The apparatus according to claim 1 or 5, wherein the second identifying means learns in advance, as a correct answer, the second feature amount calculated from the core region serving as the high pixel value region.
- The apparatus according to any one of claims 1 to 6, wherein the extraction means further comprises a region extraction means for extracting a region of the lesion using a graph cut based on information of an enhanced image.
- The apparatus according to any one of claims 1 to 7, further comprising an acquisition means for acquiring, from an object image, an image including a region of the lesion as an image region of interest.
- An image processing method of extracting a lesion having a ground glass opacity and a core region from an image, the method comprising: a change step (S2030) of changing a pixel value range of a candidate region for the ground glass opacity into a predetermined pixel value range, the candidate region for the ground glass opacity being an image region formed by image pixels having values between a low background pixel value and a high pixel value; a first feature amount extraction step (S2040) of obtaining a first feature amount from the image whose pixel value range is changed in the change step; a second feature amount extraction step (S2060) of obtaining a second feature amount from the core region of the image before the pixel value range is changed; and an extraction step of extracting the lesion from the image based on the first feature amount and the second feature amount, wherein the extraction step comprises a first identifying step (S2050) of outputting a first value corresponding to a ground glass opacity based on the first feature amount, and a second identifying step (S2070) of outputting a second value corresponding to a core region based on the second feature amount, and wherein the extraction step generates (S2080) an enhanced image based on the first value and the second value and extracts (S2090) the lesion from the enhanced image.
- A program for causing a computer to function as each unit of an image processing apparatus according to any one of claims 1 to 8.
- A computer-readable storage medium storing a program according to claim 10.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012173396A JP5993653B2 (ja) | 2012-08-03 | 2012-08-03 | 画像処理装置、画像処理方法およびプログラム |
Publications (2)
Publication Number | Publication Date |
---|---|
EP2693402A1 EP2693402A1 (de) | 2014-02-05 |
EP2693402B1 true EP2693402B1 (de) | 2015-05-06 |
Family
ID=48979552
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP20130179484 Active EP2693402B1 (de) | 2012-08-03 | 2013-08-06 | Bildverarbeitungsvorrichtung, Bildverarbeitungsverfahren, Programm und Speichermedium |
Country Status (3)
Country | Link |
---|---|
US (1) | US9940706B2 (de) |
EP (1) | EP2693402B1 (de) |
JP (1) | JP5993653B2 (de) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6415878B2 (ja) * | 2014-07-10 | 2018-10-31 | キヤノンメディカルシステムズ株式会社 | 画像処理装置、画像処理方法及び医用画像診断装置 |
JP6642048B2 (ja) * | 2016-01-28 | 2020-02-05 | 富士通株式会社 | 医療画像表示システム、医療画像表示プログラム及び医療画像表示方法 |
JP7054787B2 (ja) | 2016-12-22 | 2022-04-15 | パナソニックIpマネジメント株式会社 | 制御方法、情報端末、及びプログラム |
JP6837376B2 (ja) * | 2017-04-10 | 2021-03-03 | 富士フイルム株式会社 | 画像処理装置および方法並びにプログラム |
JP7005191B2 (ja) | 2017-06-30 | 2022-01-21 | キヤノンメディカルシステムズ株式会社 | 画像処理装置、医用画像診断装置、及びプログラム |
JP7318058B2 (ja) * | 2017-06-30 | 2023-07-31 | キヤノンメディカルシステムズ株式会社 | 画像処理装置 |
JP7179521B2 (ja) * | 2018-08-02 | 2022-11-29 | キヤノンメディカルシステムズ株式会社 | 医用画像処理装置、画像生成方法、及び画像生成プログラム |
US11914034B2 (en) * | 2019-04-16 | 2024-02-27 | Washington University | Ultrasound-target-shape-guided sparse regularization to improve accuracy of diffused optical tomography and target depth-regularized reconstruction in diffuse optical tomography using ultrasound segmentation as prior information |
JP7423237B2 (ja) * | 2019-09-30 | 2024-01-29 | キヤノン株式会社 | 画像処理装置、画像処理方法、プログラム |
TWI714440B (zh) * | 2020-01-20 | 2020-12-21 | 緯創資通股份有限公司 | 用於電腦斷層攝影的後處理的裝置和方法 |
US11436724B2 (en) | 2020-10-30 | 2022-09-06 | International Business Machines Corporation | Lesion detection artificial intelligence pipeline computing system |
US11688063B2 (en) | 2020-10-30 | 2023-06-27 | Guerbet | Ensemble machine learning model architecture for lesion detection |
US11694329B2 (en) | 2020-10-30 | 2023-07-04 | International Business Machines Corporation | Logistic model to determine 3D z-wise lesion connectivity |
US11587236B2 (en) * | 2020-10-30 | 2023-02-21 | International Business Machines Corporation | Refining lesion contours with combined active contour and inpainting |
US11749401B2 (en) | 2020-10-30 | 2023-09-05 | Guerbet | Seed relabeling for seed-based segmentation of a medical image |
US11688517B2 (en) | 2020-10-30 | 2023-06-27 | Guerbet | Multiple operating point false positive removal for lesion identification |
CN114943856B (zh) * | 2021-04-12 | 2024-04-26 | 四川省肿瘤医院 | 一种肺结节区域识别方法、标注方法及识别系统 |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08235353A (ja) | 1995-02-24 | 1996-09-13 | Ge Yokogawa Medical Syst Ltd | 画像処理方法および装置 |
JPH09179977A (ja) | 1995-12-21 | 1997-07-11 | Shimadzu Corp | 医用画像の階調自動処理装置 |
JP2002010062A (ja) * | 2000-06-22 | 2002-01-11 | Hitachi Medical Corp | 画像構成装置 |
JP4393016B2 (ja) * | 2000-06-30 | 2010-01-06 | 株式会社日立メディコ | 画像診断支援装置 |
US6876999B2 (en) * | 2001-04-25 | 2005-04-05 | International Business Machines Corporation | Methods and apparatus for extraction and tracking of objects from multi-dimensional sequence data |
US6819790B2 (en) * | 2002-04-12 | 2004-11-16 | The University Of Chicago | Massive training artificial neural network (MTANN) for detecting abnormalities in medical images |
US7236619B2 (en) * | 2002-10-31 | 2007-06-26 | University Of Chicago | System and method for computer-aided detection and characterization of diffuse lung disease |
US8045770B2 (en) * | 2003-03-24 | 2011-10-25 | Cornell Research Foundation, Inc. | System and method for three-dimensional image rendering and analysis |
US7356173B2 (en) | 2003-06-11 | 2008-04-08 | Koninklijke Philips Electronics N.V. | Analysis of pulmonary CT data |
US7548642B2 (en) * | 2004-10-28 | 2009-06-16 | Siemens Medical Solutions Usa, Inc. | System and method for detection of ground glass objects and nodules |
US7555152B2 (en) * | 2005-01-06 | 2009-06-30 | Siemens Medical Solutions Usa, Inc. | System and method for detecting ground glass nodules in medical images |
JP4847041B2 (ja) | 2005-05-09 | 2011-12-28 | 株式会社日立メディコ | X線撮影装置 |
US7623692B2 (en) | 2005-07-22 | 2009-11-24 | Carestream Health, Inc. | Pulmonary nodule detection in a chest radiograph |
JP4999163B2 (ja) * | 2006-04-17 | 2012-08-15 | 富士フイルム株式会社 | 画像処理方法および装置ならびにプログラム |
EP1865464B1 (de) * | 2006-06-08 | 2013-11-20 | National University Corporation Kobe University | Rechenvorrichtung und Programmprodukt zur Computergestützten Bildbasierten Diagnostik |
JP2008253292A (ja) * | 2007-03-30 | 2008-10-23 | Fujifilm Corp | 症例画像検索装置及びシステム |
JP5390080B2 (ja) * | 2007-07-25 | 2014-01-15 | 株式会社東芝 | 医用画像表示装置 |
US8144949B2 (en) * | 2007-11-15 | 2012-03-27 | Carestream Health, Inc. | Method for segmentation of lesions |
US8724866B2 (en) * | 2009-09-14 | 2014-05-13 | Siemens Medical Solutions Usa, Inc. | Multi-level contextual learning of data |
-
2012
- 2012-08-03 JP JP2012173396A patent/JP5993653B2/ja active Active
-
2013
- 2013-07-31 US US13/955,049 patent/US9940706B2/en active Active
- 2013-08-06 EP EP20130179484 patent/EP2693402B1/de active Active
Also Published As
Publication number | Publication date |
---|---|
JP5993653B2 (ja) | 2016-09-14 |
JP2014030623A (ja) | 2014-02-20 |
EP2693402A1 (de) | 2014-02-05 |
US9940706B2 (en) | 2018-04-10 |
US20140037170A1 (en) | 2014-02-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2693402B1 (de) | Bildverarbeitungsvorrichtung, Bildverarbeitungsverfahren, Programm und Speichermedium | |
EP3035287B1 (de) | Bildverarbeitungsvorrichtung und bildverarbeitungsverfahren | |
EP2599036B1 (de) | Verfahren und system zur erkennung von anomalien bei datensätzen | |
Mesanovic et al. | Automatic CT image segmentation of the lungs with region growing algorithm | |
US9277902B2 (en) | Method and system for lesion detection in ultrasound images | |
US8675931B2 (en) | Medical image segmentation | |
EP3188127B1 (de) | Verfahren und system zur durchführung von knochenmehrfachsegmentierung in bilddaten | |
US10290095B2 (en) | Image processing apparatus for measuring a length of a subject and method therefor | |
US20090097728A1 (en) | System and Method for Detecting Tagged Material Using Alpha Matting | |
US20060023927A1 (en) | GGN segmentation in pulmonary images for accuracy and consistency | |
WO2009138202A1 (en) | Method and system for lesion segmentation | |
US9014447B2 (en) | System and method for detection of lesions in three-dimensional digital medical image | |
KR101586276B1 (ko) | 사전 통계 정보를 이용한 유방 밀도 자동 측정 및 표시 방법과 이를 이용한 유방 밀도 자동 측정 시스템 및 컴퓨터 프로그램 저장 매체 | |
EP1782384B1 (de) | System und verfahren zur darmwandextraktion bei anwesenheit von markierten fäkalien oder kollabierten darmregionen | |
JP6415878B2 (ja) | 画像処理装置、画像処理方法及び医用画像診断装置 | |
US9947094B2 (en) | Medical image processing device, operation method therefor, and medical image processing program | |
Myint et al. | Effective kidney segmentation using gradient based approach in abdominal CT images | |
JP6397453B2 (ja) | 画像処理装置、画像処理方法およびプログラム | |
Macho et al. | Segmenting Teeth from Volumetric CT Data with a Hierarchical CNN-based Approach. | |
Li et al. | A new efficient 2D combined with 3D CAD system for solitary pulmonary nodule detection in CT images | |
Horsthemke et al. | Predicting LIDC diagnostic characteristics by combining spatial and diagnostic opinions | |
Schneider et al. | Automated lung nodule detection and segmentation | |
EP4270305A1 (de) | Lernvorrichtung, -verfahren und -programm sowie vorrichtung zur verarbeitung medizinischer bilder | |
Kayode et al. | Preparing mammograms for classification task: Processing and analysis of mammograms | |
JP2007534352A (ja) | すりガラス様小結節(ggn)セグメンテーションを行うためのシステムおよび方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20140805 |
|
RBV | Designated contracting states (corrected) |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
INTG | Intention to grant announced |
Effective date: 20141202 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 726171 Country of ref document: AT Kind code of ref document: T Effective date: 20150615 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602013001683 Country of ref document: DE Effective date: 20150618 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 726171 Country of ref document: AT Kind code of ref document: T Effective date: 20150506 |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20150506 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150907 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150806 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150807 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150806 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150906 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602013001683 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 Ref country code: RO Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20150506 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 Ref country code: LU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150806 |
|
26N | No opposition filed |
Effective date: 20160209 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: MM4A |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: ST Effective date: 20160429 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20150806 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20150831 Ref country code: BE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20160831 Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20160831 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20130806 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20170806 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170806 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20150506 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20240723 Year of fee payment: 12 |