JP6415878B2 - Image processing apparatus, image processing method, and medical image diagnostic apparatus - Google Patents

Image processing apparatus, image processing method, and medical image diagnostic apparatus

Info

Publication number
JP6415878B2
Authority
JP
Japan
Prior art keywords
region
small
image processing
structure
image data
Prior art date
Legal status
Active
Application number
JP2014142686A
Other languages
Japanese (ja)
Other versions
JP2016016265A (en)
Inventor
敦司 谷口
智也 岡崎
智行 武口
Original Assignee
キヤノンメディカルシステムズ株式会社 (Canon Medical Systems Corporation)
Priority date
Filing date
Publication date
Application filed by キヤノンメディカルシステムズ株式会社 (Canon Medical Systems Corporation)
Priority to JP2014142686A
Publication of JP2016016265A
Application granted
Publication of JP6415878B2


Description

  Embodiments described herein relate generally to an image processing apparatus, an image processing method, and a medical image diagnostic apparatus.

  Techniques for automatically extracting a region of interest from three-dimensional image data collected by a medical image diagnostic apparatus such as an X-ray CT (Computed Tomography) apparatus are conventionally known, and are used, for example, for differential diagnosis of tumors. As an example, nodule regions are extracted from multiple sets of image data collected at different times by an X-ray CT apparatus, and changes in the volume and diameter of the extracted nodule regions are examined to differentiate whether the nodule is malignant or benign.

  Such differential diagnosis is performed, for example, on a patient suspected of having lung cancer. In recent years, not only is the nodule region extracted, but it is also classified into a GGO (Ground Glass Opacity) region, which is a translucent ground-glass-like image region, and a Solid region, which is an image region with higher brightness than the GGO region, and differential diagnosis is performed based on changes in the volume and diameter of each. To improve the accuracy of such differential diagnosis, it is important to extract the region of interest accurately.

  Currently, as a method of extracting a region of interest from three-dimensional image data, the following is known: a three-dimensional ROI (Region Of Interest), that is, a VOI (Volume Of Interest), is set around a tumor candidate point designated by a user or by a CAD (Computer-Aided Diagnosis) system; a tumor candidate region is extracted by classifying the voxels included in the set VOI by threshold processing of luminance; regions such as blood vessels are removed from the extracted tumor candidate region; and the composition is classified into a Solid region and a GGO region.

JP 2009-28161 A

  The problem to be solved by the present invention is to provide an image processing apparatus, an image processing method, and a medical image diagnostic apparatus that can improve the accuracy of region extraction from three-dimensional image data.

The image processing apparatus according to the embodiment includes a first extraction unit and a second extraction unit. Based on a threshold corresponding to a CT value, the first extraction unit extracts from the three-dimensional image data a first region that has relatively high luminance values and includes the structure to be extracted, and a second region that has relatively low luminance values and is an air region in the three-dimensional image data. The second extraction unit divides the first region into a plurality of small regions based on the similarity of voxel feature amounts and their positional relationship, excludes from the plurality of small regions the small regions that are adjacent to the second region and have a relatively high similarity to the second region, extracts from the plurality of small regions the small regions included in the structure, and thereby extracts the region of the structure.

FIG. 1 is a diagram illustrating an example of a medical information processing system according to the first embodiment.
FIG. 2 is a diagram for explaining an example of region extraction from volume data.
FIG. 3 is a diagram for explaining a problem with the prior art.
FIG. 4 is a diagram illustrating an example of the image processing apparatus according to the first embodiment.
FIG. 5 is a flowchart illustrating a processing procedure performed by the image processing apparatus according to the first embodiment.
FIG. 6A is a diagram illustrating an example of the extraction of the first region.
FIG. 6B is a diagram for explaining an example of the threshold determination.
FIG. 7 is a schematic diagram for explaining the concept of the processing by the second extraction unit according to the first embodiment.
FIG. 8A is a diagram illustrating an example of the region division by the second extraction unit according to the first embodiment.
FIG. 8B is a schematic diagram for explaining an example of the clustering by the second extraction unit according to the first embodiment.
FIG. 9 is a diagram illustrating an example of the extraction processing by the second extraction unit according to the first embodiment.
FIG. 10 is a diagram illustrating an example of the image processing apparatus according to the second embodiment.
FIG. 11 is a flowchart illustrating a processing procedure performed by the image processing apparatus according to the second embodiment.
FIG. 12 is a diagram for explaining an example of the processing by the identification unit according to the second embodiment.
The final drawing is a diagram illustrating a hardware configuration of the image processing apparatus according to an embodiment.

  Hereinafter, an image processing apparatus, an image processing method, and a medical image diagnostic apparatus according to embodiments will be described with reference to the accompanying drawings. In the following embodiments, the parts denoted by the same reference numerals perform the same operation, and redundant description will be omitted as appropriate.

(First embodiment)
First, a medical information processing system including the image processing apparatus according to the first embodiment will be described. FIG. 1 is a diagram illustrating an example of the medical information processing system 1 according to the first embodiment. As illustrated in FIG. 1, the medical information processing system 1 according to the first embodiment includes an image processing apparatus 100, a medical image diagnostic apparatus 200, and an image storage apparatus 300. The apparatuses illustrated in FIG. 1 can communicate with one another directly or indirectly via, for example, an in-hospital LAN (Local Area Network). For example, a PACS (Picture Archiving and Communication System) has been introduced into the medical information processing system 1, and the apparatuses mutually transmit and receive three-dimensional image data (volume data) in accordance with the DICOM (Digital Imaging and Communications in Medicine) standard.

  The medical image diagnostic apparatus 200 is, for example, an X-ray diagnostic apparatus, an X-ray CT (Computed Tomography) apparatus, an MRI (Magnetic Resonance Imaging) apparatus, an ultrasonic diagnostic apparatus, a SPECT (Single Photon Emission Computed Tomography) apparatus, a PET (Positron Emission Computed Tomography) apparatus, a SPECT-CT apparatus in which a SPECT apparatus and an X-ray CT apparatus are integrated, a PET-CT apparatus in which a PET apparatus and an X-ray CT apparatus are integrated, or a group of these apparatuses. The medical image diagnostic apparatus 200 according to the first embodiment can generate volume data.

  Specifically, the medical image diagnostic apparatus 200 generates volume data by imaging a subject. For example, the medical image diagnostic apparatus 200 collects data such as projection data or MR signals by imaging the subject, and generates volume data by reconstructing, from the collected data, medical image data of a plurality of axial planes along the body-axis direction of the subject. For example, when the medical image diagnostic apparatus 200 reconstructs 500 pieces of axial medical image data, this group of 500 axial images constitutes the volume data. Note that the projection data, MR signals, or other data of the subject imaged by the medical image diagnostic apparatus 200 may themselves be used as volume data.

  In addition, the medical image diagnostic apparatus 200 transmits the generated volume data to the image storage apparatus 300. When transmitting the volume data, the medical image diagnostic apparatus 200 also transmits, as supplementary information, for example, a patient ID identifying the patient, an examination ID identifying the examination, a device ID identifying the medical image diagnostic apparatus 200, a series ID identifying one acquisition by the medical image diagnostic apparatus 200, and the like.

  The image storage apparatus 300 is a database that stores medical images. Specifically, the image storage apparatus 300 stores the volume data transmitted from the medical image diagnostic apparatus 200 in its storage unit and retains it. Note that, by using an image processing apparatus 100 capable of storing large amounts of image data, the image processing apparatus 100 and the image storage apparatus 300 illustrated in FIG. 1 may be integrated; that is, the volume data may be stored in the image processing apparatus 100 itself.

  In the first embodiment, the volume data stored in the image storage device 300 is stored in association with the patient ID, examination ID, device ID, series ID, and the like. Therefore, the image processing apparatus 100 acquires necessary volume data from the image storage apparatus 300 by performing a search using a patient ID, an examination ID, an apparatus ID, a series ID, and the like.

  The image processing apparatus 100 receives volume data from the medical image diagnostic apparatus 200 or the image storage apparatus 300, and extracts a predetermined area included in the received volume data. Here, the image processing apparatus 100 according to the present embodiment improves the accuracy of region extraction from volume data as compared with the conventional technique. Specifically, the image processing apparatus 100 improves accuracy when extracting an area including a structure to be extracted from volume data.

  Here, an example of region extraction from volume data and a problem with the prior art will be described with reference to FIGS. 2 and 3. FIG. 2 is a diagram for explaining an example of region extraction from volume data. FIG. 2 schematically shows a case in which a nodule region is extracted from chest CT image data collected by an X-ray CT apparatus and the volume of each composition (the Solid region and the GGO region) is estimated.

  In the region extraction from the volume data, for example, as shown in FIG. 2, first, CT volume data of the chest is received as “input”. Here, the center point of the nodule to be extracted (x mark in the figure) and the lung field region including the nodule are designated. At this time, the center point and the lung field region are designated manually by an operator or automatically by a CAD (Computer-Aided-Diagnosis) system. When the extraction target is specified in this way, the foreground region including the extraction target is extracted.

  For example, as shown in “Foreground region” in FIG. 2, a foreground region including the nodule to be extracted is extracted from the designated lung field region. Since the foreground region is extracted based on the luminance values of the voxels, blood vessels having high luminance values are extracted as part of the foreground region in the same way as the nodule. Therefore, the blood vessel region included in the extracted foreground region is then removed (“blood vessel removal” in the figure), the boundary indicating the nodule region is corrected (“border correction” in the figure), and the nodule region is thereby extracted from the volume data. Thereafter, the Solid region and the GGO region within the extracted nodule region are classified (“composition classification” in the figure), and an estimated volume value is displayed as the “output” together with an image showing each region.

  As described above, to extract the extraction target from volume data, a foreground region including the extraction target is first extracted, and the target region is then extracted from the extracted foreground region. In the prior art, however, there is a certain limit to the accuracy of the foreground region extraction described above, and therefore the accuracy of the final output, namely the region display for each composition (structure) to be extracted and the volume estimation, is also limited.

  For example, when extracting a nodule from CT volume data, the foreground region is extracted by comparing the CT value (luminance value) of each voxel with a threshold. The CT value indicates the X-ray absorption rate and is expressed in HU (Hounsfield Units), in which air is defined as −1000 HU and water as 0 HU. CT values generally range from −1000 to 1000 with air (−1000) as the reference, and the CT values of the various anatomical structures in the body are distributed from about −800 to 300. For example, the GGO region is distributed in the range of about −800 to −500, blood vessels and the Solid region in the range of about −500 to 0, and the chest wall and bone often in the range of about 0 to 500. Therefore, when these anatomical structures are extracted by threshold processing of luminance values, it is desirable to set the threshold to about −800, at which the GGO region can be extracted.

  However, in CT volume data, the luminance values near the boundaries of anatomical structures are often raised by blurring caused by parameters such as the slice thickness and resolution, the reconstruction function, and the like. When threshold processing of luminance values is performed on such data, areas near boundaries whose luminance values are slightly higher than air are overdetected. In particular, when tumors and blood vessels are intricately intertwined, adjacent structures become connected to each other via the overdetected regions, and it is difficult to separate them correctly even if removal processing is performed afterwards. For example, as shown in FIG. 3, when threshold processing of luminance values is performed on CT volume data containing a nodule and blood vessels (“foreground extraction based on luminance values” in the figure), the regions around the nodule and blood vessels are overdetected (“over-detection area” in the figure), and it is difficult to perform highly accurate classification in the subsequent “classification”. As a countermeasure, raising the threshold is conceivable, but then, for example, a GGO region of about −800 HU may no longer be detected, so the problem is difficult to address simply by adjusting the threshold. FIG. 3 is a diagram for explaining this problem with the prior art.

  Therefore, the image processing apparatus 100 according to the present application can improve the accuracy of region extraction from volume data with the configuration described in detail below. Specifically, the image processing apparatus 100 improves the accuracy of extracting a region from volume data by improving the accuracy of extracting a foreground region. FIG. 4 is a diagram illustrating an example of the image processing apparatus 100 according to the first embodiment. As illustrated in FIG. 4, the image processing apparatus 100 includes an input unit 110, a display unit 120, a storage unit 130, and a control unit 140. For example, the image processing apparatus 100 is a workstation, an arbitrary personal computer, or the like, and is connected to the medical image diagnostic apparatus 200, the image storage apparatus 300, and the like via a network.

  The input unit 110 is a pointing device such as a mouse or an input device such as a keyboard, and receives input of various operations on the image processing apparatus 100 from an operator. For example, the input unit 110 receives an extraction target designation operation included in the volume data. The display unit 120 is a display device such as a liquid crystal display and displays various types of information. Specifically, the display unit 120 displays a GUI (Graphical User Interface) for receiving various operations from the operator and a processing result by the control unit 140 described later.

  The storage unit 130 is, for example, a semiconductor memory device such as a RAM (Random Access Memory) or a flash memory, or a storage device such as a hard disk or an optical disk, and stores the volume data and other information acquired by the control unit 140 described later. The storage unit 130 also stores various information used by the control unit 140 described later, as well as the processing results of the control unit 140. For example, the storage unit 130 includes an image data storage unit 131 as illustrated in FIG. 4. The image data storage unit 131 stores the volume data acquired by the control unit 140 described later.

  The control unit 140 is, for example, an electronic circuit such as a CPU (Central Processing Unit) or MPU (Micro Processing Unit), or an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array), and performs overall control of the image processing apparatus 100. As illustrated in FIG. 4, the control unit 140 includes, for example, an image data acquisition unit 141, a first extraction unit 142, and a second extraction unit 143.

  The image data acquisition unit 141 acquires volume data generated by the medical image diagnostic apparatus 200. For example, the image data acquisition unit 141 acquires volume data designated by the operator via the input unit 110 from the medical image diagnostic apparatus 200 or the image storage apparatus 300 and stores it in the image data storage unit 131.

  The first extraction unit 142 extracts a first area including the structure to be extracted from the volume data. Specifically, the first extraction unit 142 extracts a region where the luminance value of the voxel is equal to or greater than a predetermined threshold in the volume data as the first region.

  The second extraction unit 143 divides the first region into a plurality of small regions based on the similarity of voxel feature amounts and their positional relationship, and extracts the region of the structure by extracting, from the plurality of small regions, the small regions included in the structure based on the similarity and positional relationship with a second region, which is the region other than the first region in the volume data. Specifically, among the small regions that are in a predetermined positional relationship with the second region, the second extraction unit 143 extracts, as small regions included in the structure, those whose similarity of feature-amount distribution with the second region is at or below a predetermined threshold, or whose distance of feature-amount distribution from the second region is at or above a predetermined threshold. Here, the second extraction unit 143 divides the first region into a plurality of small regions by clustering the voxels included in the first region using their luminance values and three-dimensional coordinates.

  FIG. 5 is a flowchart illustrating a processing procedure performed by the image processing apparatus 100 according to the first embodiment. As shown in FIG. 5, the image processing apparatus 100 according to the first embodiment can perform, in a largely automated flow, the processing of extracting from the volume data a foreground region that includes the extraction target. Specifically, the image processing apparatus 100 according to the first embodiment extracts the first region from the volume data and divides the extracted first region into a plurality of small regions. The image processing apparatus 100 then determines whether each small region is part of the extraction target based on its similarity and positional relationship with the second region, which is the region other than the first region in the volume data, and extracts the small regions determined to be part of the extraction target as the foreground region. The processing procedure of the first embodiment is described below with reference also to FIGS. 6A to 9. In the following, the case where a nodule region is extracted from CT volume data is described as an example.

  Step S101: First, the image data acquisition unit 141 accepts designation of three-dimensional image data (volume data) by the operator on the GUI via the input unit 110, and obtains volume data according to the accepted designation.

  Step S102: Subsequently, the first extraction unit 142 determines a threshold for extracting the first region. FIG. 6A is a diagram illustrating an example of the extraction of the first region by the first extraction unit 142 according to the first embodiment. For example, as illustrated in FIG. 6A, the first extraction unit 142 determines a threshold for extracting, based on luminance values, a first region including the nodule region from the CT volume data acquired by the processing of the image data acquisition unit 141. Here, the region other than the first region extracted by the first extraction unit 142 is set as the second region, as shown in FIG. 6A. Note that the extraction of the first region based on luminance values executed by the first extraction unit 142 corresponds to the extraction of the foreground region in the prior art.

  Here, the first extraction unit 142 may determine, as the threshold for extracting the first region, a threshold specified by the operator via the input unit 110, or may determine the threshold adaptively for each input image. As the threshold designated by the operator, for example, a fixed value of about “−800 HU” may be used so that the GGO region can be extracted.

  The case where the threshold is determined adaptively is described below. FIG. 6B is a diagram for describing an example of the threshold determination by the first extraction unit 142 according to the first embodiment. For example, as illustrated in (A) of FIG. 6B, the first extraction unit 142 generates a histogram of the luminance values of the voxels of the CT volume data and determines the threshold based on the generated histogram. The region used for generating the histogram may be the entire volume data or a VOI limited to a specific region.

  Here, since anatomical structures such as the Solid region and the GGO region each have characteristic luminance values, it can be assumed that the luminance value distribution of each structure is expressed by a probability distribution with parameters, such as a mean value, corresponding to that structure. Under this assumption, the histogram generated from the CT volume data shown in (A) of FIG. 6B can be expressed as the sum of the probability distributions corresponding to the plurality of structures. As the probability distribution, for example, a gamma distribution or a Student's t-distribution may be used, but a method using a Gaussian distribution is described below as an example. For example, the Gaussian mixture distribution that takes the luminance value v = I(r) at the coordinates r = (x, y, z) of each voxel in the CT volume data as a random variable can be expressed by the following equation (1).

Here, q in equation (1) denotes the probability density function, N(v | μ_k, σ_k) denotes a one-dimensional Gaussian distribution, μ_k denotes a mean value, σ_k denotes a standard deviation, π_k denotes a mixing ratio, and K denotes the number of mixture elements. The parameters π_k, μ_k, and σ_k can be estimated by a known method such as the EM algorithm described in the non-patent literature "A. P. Dempster, N. M. Laird, and D. B. Rubin, 'Maximum Likelihood from Incomplete Data via the EM Algorithm,' Journal of the Royal Statistical Society, Series B, Vol. 39, pp. 1-38, 1977."

For example, the parameters π_k, μ_k, and σ_k are estimated so that the sum of the Gaussian (normal) distributions of the respective elements (Solid region, GGO region, and so on) approximates the histogram. The number of mixture elements K can be set arbitrarily. For example, when estimating with K = 3 as shown in (B) of FIG. 6B, the parameters π_k, μ_k, and σ_k of each element are estimated so that the sum of the Gaussian distributions of the respective elements approximates the histogram of (A) of FIG. 6B. That is, as shown in (B) of FIG. 6B, a Gaussian distribution is estimated for the element containing air (the element corresponding to k = 1), for the element containing the GGO region (ground-glass shadow; the element corresponding to k = 2), and for the element containing blood vessels and the Solid region (solid shadow; the element corresponding to k = 3).

  Here, when the structure to be extracted is the GGO region, the Solid region, or a blood vessel, only the elements corresponding to k = 2 and k = 3 need to be selected. Therefore, the probability l(v) that the luminance value v at the coordinate r of each voxel belongs to the structure to be extracted can be calculated by the following equation (2), based on Bayes' theorem.

The denominator of equation (2) is the mixture probability distribution of all elements (k = 1 to 3), and the numerator is the mixture probability distribution of the distributions corresponding to k = 2 and k = 3. The threshold v_th for extracting the first region from the CT volume data is set so that l(v_th) = β. Note that the probability l(v) lies in the range 0 ≤ l(v) ≤ 1 and expresses a posterior probability. Therefore, β may be set to β = 0.5 from the viewpoint of the minimum error rate in Bayesian theory, or another value in the range 0 ≤ β ≤ 1 may be selected.
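
From the definitions above, equations (1) and (2) take the following form (a reconstruction from the surrounding description, since the original equation images are not included):

    q(v) = \sum_{k=1}^{K} \pi_k \, N(v \mid \mu_k, \sigma_k)    (1)

    l(v) = \frac{\pi_2 N(v \mid \mu_2, \sigma_2) + \pi_3 N(v \mid \mu_3, \sigma_3)}{\sum_{k=1}^{3} \pi_k N(v \mid \mu_k, \sigma_k)}    (2)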

Step S103: The first extraction unit 142 determines the threshold as described above. For example, as illustrated in (B) of FIG. 6B, the vicinity of the intersection of the Gaussian distribution of the element containing air and the Gaussian distribution of the element containing the GGO region is set as the threshold v_th, and the first region is extracted as the set of voxels satisfying v > v_th. That is, the first extraction unit 142 extracts the first region according to the following equation (3), and extracts the second region according to the following equation (4). Note that F in equation (3) denotes the first region and B in equation (4) denotes the second region.

  Note that the image on the right side of FIG. 6A is a binarized image in which the voxels included in the first region are displayed with the value “1” (white) and the voxels included in the second region are displayed with the value “0” (black). That is, in the example of FIG. 6A, the first extraction unit 142 adaptively determines a threshold for each input volume data as described above and extracts the area displayed in white as the first region.
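
The adaptive threshold determination described above can be sketched as follows (a minimal illustration only, assuming a three-component Gaussian mixture fitted with scikit-learn; the function name and the grid of candidate luminances are illustrative assumptions, not the patented implementation):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def adaptive_threshold(volume, beta=0.5):
        """Choose v_th so that the posterior l(v) of the non-air components reaches beta."""
        v = volume.reshape(-1, 1).astype(np.float64)
        gmm = GaussianMixture(n_components=3, random_state=0).fit(v)
        order = np.argsort(gmm.means_.ravel())           # air < GGO < vessel/Solid
        candidates = np.linspace(v.min(), v.max(), 2048).reshape(-1, 1)
        post = gmm.predict_proba(candidates)             # per-component posteriors
        l_v = post[:, order[1]] + post[:, order[2]]      # probability of k = 2 or k = 3
        return float(candidates[np.argmax(l_v >= beta)])

    # volume = ...                     # 3-D array of CT values (HU), assumed to be given
    # v_th = adaptive_threshold(volume)
    # first_region = volume > v_th     # F in equation (3); the complement corresponds to B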

  Step S104: Returning to FIG. 5, the second extraction unit 143 next divides the first region extracted in step S103 into small regions. Specifically, the second extraction unit 143 divides the first region into a plurality of small regions by clustering that uses the feature amounts (for example, luminance values) of the voxels included in the first region and their three-dimensional coordinates. Because the spatial continuity of the feature amounts (luminance values) is thereby taken into account, the region division can follow the boundaries of anatomical structures such as the Solid region and the GGO region.

  FIG. 7 is a schematic diagram for explaining the concept of the processing by the second extraction unit 143 according to the first embodiment. In FIG. 7, the horizontal axis represents the coordinate space of the volume data and the vertical axis represents the feature amount (luminance value) of the voxels. For example, the second extraction unit 143 clusters a plurality of voxels included in the predetermined space shown in FIG. 7A into one small region based on the distribution of their feature amounts (luminance values). For example, as shown in FIG. 7B, the second extraction unit 143 clusters a voxel P1 whose feature amount is at or below a certain threshold into the same small region as other voxels within a predetermined space (within a certain distance), according to the distribution of their feature amounts (luminance values). That is, the second extraction unit 143 assigns the voxel P1 to the same small region as other voxels whose feature amounts (luminance values) exceed the threshold. In other words, the second extraction unit 143 determines, for each voxel included in the volume data (first region), which region it belongs to based on its similarity to other spatially close voxels, and divides the region according to the determination result.

  Here, the second extraction unit 143 can use, for example, the Meanshift method, the Medoidshift method, the SLIC method, or the Quickshift method for the clustering described above. In the following, the case where the Quickshift method is used is described, following the known non-patent literature "A. Vedaldi and S. Soatto, 'Quick Shift and Kernel Methods for Mode Seeking,' in Proceedings of the European Conference on Computer Vision (ECCV), 2008."

  The Quickshift method is a method for searching a cluster center defined as an extreme value of a kernel density function, and is executed in three steps: (1) calculation of the kernel density function, (2) nearest neighbor search, and (3) clustering.

(Calculation of kernel density function)
First, the second extraction unit 143 calculates the value of a kernel density function for each voxel included in the volume data. When the Quickshift method is applied to volume data, the four-dimensional joint space R^4 that combines the one-dimensional luminance value v = I(r) and the three-dimensional image coordinates r = (x, y, z) is used as the feature space, and the kernel density function is calculated by the following equation (5), which is defined by the product of kernels in the luminance value space (luminance space) and the image space.

In equation (5), p(v, r) denotes the kernel density function, d_v and h_v denote the distance measure and kernel width of the luminance space, respectively, d_r and h_r denote the distance measure and kernel width of the image space, n denotes the number of voxels, k denotes the kernel function, and C denotes a constant. The distance measure in each space is calculated by the Euclidean distance expressed by the following equation (6), and it is desirable to use the Epanechnikov kernel expressed by the following equation (7) as the kernel function k. This reduces the amount of computation and the processing load of the apparatus.

  That is, as shown in equation (5), the second extraction unit 143 calculates, for each voxel included in the volume data, the value of the kernel density function that defines similarity within a predetermined range in each of the luminance value and the coordinates. The range (kernel width) in each of the luminance value and the coordinates can be set arbitrarily according to the extraction target and the imaging conditions. For example, the kernel width h_v of the luminance space is set to the difference between the luminance values to be separated. For example, in order to accurately separate the element containing air from the element containing the GGO region, it is desirable to set the luminance-space kernel width h_v to around “200”, the difference between the CT value indicating air (“−1000 HU”) and the CT value indicating the GGO region (“−800”).

Further, for example, the kernel width h_r of the image space is set based on the size of the extraction target. For example, when extracting a target whose minimum size is a radius of 1.5 mm from volume data whose pixel pitch at the time of image collection is 0.6 mm, it is desirable to set the image-space kernel width h_r to around 2.5 (= 1.5 / 0.6). Note that these kernel widths can also be set automatically by the second extraction unit 143 based on information about the extraction target and the supplementary information of the volume data (for example, DICOM tags). In such a case, for example, the second extraction unit 143 reads the luminance-space kernel width stored in advance in the storage unit 130 for each extraction target, and calculates the image-space kernel width from the pixel pitch acquired from the volume data and the size of the extraction target stored in advance in the storage unit 130.
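
A brute-force sketch of the density computation of equations (5) to (7) is shown below (illustrative only; the function names are assumptions, and a practical implementation would restrict the sums to a local window instead of looping over all voxels):

    import numpy as np

    def epanechnikov(x):
        """Epanechnikov profile: 1 - x for 0 <= x <= 1, and 0 otherwise."""
        return np.where((x >= 0) & (x <= 1), 1.0 - x, 0.0)

    def kernel_density(values, coords, h_v, h_r):
        """Quickshift-style density in the joint (luminance, position) space.

        values: (n,) voxel luminances; coords: (n, 3) voxel coordinates.
        """
        n = len(values)
        p = np.zeros(n)
        for i in range(n):
            dv2 = ((values - values[i]) / h_v) ** 2                    # squared luminance distance
            dr2 = np.sum(((coords - coords[i]) / h_r) ** 2, axis=1)    # squared spatial distance
            p[i] = np.sum(epanechnikov(dv2) * epanechnikov(dr2)) / n
        return p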

(Nearest neighbor search)
Once the value of the kernel density function has been calculated for each voxel included in the volume data as described above, the second extraction unit 143 searches, by the following equation (8), for the voxel N(i) at which the probability density is highest within the neighborhood D_i of each voxel i included in the first region expressed by equation (3).

Here, the neighborhood D_i can be set arbitrarily, but in order to reduce the processing load of the search and to prevent distant voxels from being assigned, it is desirable to use the kernel-width ranges described above, as defined by the following equation (9).

Here, j in equations (8) and (9) denotes a voxel included in the neighborhood D_i of the voxel i, that is, as shown in equation (9), a voxel whose luminance difference from the voxel i is within the kernel width of the luminance space and whose coordinate distance from the voxel i is within the kernel width of the image space. The voxel j may also be a voxel included in the second region expressed by equation (4).

  Further, d(j||i) in equation (8) is the distance measure between the voxel i and the voxel j, and is calculated as the weighted sum of the Euclidean distances in the luminance space and the image space shown in the following equation (10).

As described above, the second extraction unit 143 searches for the voxel N(i) at which the probability density is highest within the neighborhood D_i of each voxel i included in the first region. That is, the second extraction unit 143 searches for the voxel N(i) corresponding to the position where the density of similar voxels is highest in the vicinity of the voxel i.
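
One consistent reading of equations (8) to (10), following the standard Quickshift formulation (a reconstruction under that assumption, not the original rendering), is:

    N(i) = \arg\min_{j \in D_i,\; p(v_j, r_j) > p(v_i, r_i)} d(j \| i)    (8)

    D_i = \{ j : d_v(v_i, v_j) \le h_v \ \text{and}\ d_r(r_i, r_j) \le h_r \}    (9)

    d(j \| i) = d_v(v_i, v_j)^2 / h_v^2 + d_r(r_i, r_j)^2 / h_r^2    (10)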

(Clustering)
When the nearest neighbor search described above has been performed, the second extraction unit 143 next divides the first region into a plurality of small regions using the result of the nearest neighbor search. FIG. 8A is a diagram illustrating an example of the region division by the second extraction unit 143 according to the first embodiment. For example, as illustrated in FIG. 8A, the second extraction unit 143 divides the first region extracted by the first extraction unit 142 into a plurality of small regions. Here, the second extraction unit 143 searches for the center (cluster center) of each small region by recursively repeating a voxel transition operation using the result of the nearest neighbor search, and extracts the voxels included in each small region.

  FIG. 8B is a schematic diagram for explaining an example of the clustering by the second extraction unit 143 according to the first embodiment. For example, as illustrated in FIG. 8B, the second extraction unit 143 treats each voxel i included in the first region as a node of a graph and draws an edge between the voxel i and the voxel N(i) found by the nearest neighbor search. By drawing edges in the same way for all voxels included in the first region, the second extraction unit 143 expresses the entire image as a set of tree structures. Since a cluster center is defined as a local maximum of the probability density function, it can be interpreted as a point that has no point of higher probability density in its neighborhood. That is, a voxel corresponding to a cluster center is its own N(i), so no edge extends from it, and it corresponds to the root of a tree structure.

  Therefore, the second extraction unit 143 searches for the root R(i) of the tree structure by recursively executing the transition operation i → N(i) → N(N(i)) illustrated in FIG. 8B for all voxels i included in the first region. That is, the second extraction unit 143 starts from the voxel i and transitions to the voxel N(i) that is the result of the nearest neighbor search, then transitions to the voxel N(N(i)) that is the result of the nearest neighbor search for N(i), and so on, until it reaches the root R(i) of the tree structure, the voxel whose nearest neighbor search result is itself.

Here, when the root voxel position r_R(i) is included in the second region (r_R(i) ∈ B), the second extraction unit 143 excludes the starting voxel i from the first region. On the other hand, when the root voxel position r_R(i) is included in the first region (r_R(i) ∈ F), the second extraction unit 143 assigns the voxel i to the cluster (small region) whose cluster center is the root R(i). At this time, if the difference between the luminance value v_i of the voxel i and the luminance value v_R(i) of the root voxel R(i) satisfies the condition of the following equation (11), the tree is split into a new cluster. In equation (11), a is a constant set in advance, for example a = 1 or a = 2. This makes it possible to divide regions more accurately at the boundaries of anatomical structures.

  By performing the above clustering on all voxels included in the first region, the second extraction unit 143 divides the first region into a plurality of small regions, each having one of the roots R(i) on the graph as its cluster center.
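
The transition and root-following operation i → N(i) → N(N(i)) → ... can be sketched as follows (illustrative only; nearest_higher_density stands for the search of equation (8) and is an assumed helper, not code from the patent):

    import numpy as np

    def follow_roots(parent):
        """Follow i -> N(i) -> N(N(i)) -> ... until a fixed point (the tree root).

        parent[i] = N(i); a root voxel satisfies parent[i] == i.
        Returns root[i] for every voxel index i.
        """
        root = np.asarray(parent).copy()
        while True:
            nxt = root[root]                  # jump two levels up at once
            if np.array_equal(nxt, root):
                return root
            root = nxt

    # Usage sketch: keep a starting voxel only if its root lies in the first region F.
    # parent = nearest_higher_density(...)    # assumed helper implementing equation (8)
    # root = follow_roots(parent)
    # keep = in_first_region[root]            # boolean mask over voxels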

  Step S105: Returning to FIG. 5, when the first region has been divided into small regions, the second extraction unit 143 determines whether specification information has been received. Specifically, the second extraction unit 143 determines whether it has received specification information (position information) specifying that one or more voxels included in the volume data correspond to the structure to be extracted. The position information may be, for example, a point or region designated by the operator with a mouse click or drag operation, or a point or region automatically detected by CAD.

  When the specification information has been received (step S105, Yes), the second extraction unit 143 extracts the small regions that are part of the target structure with reference to the specification information (step S106). That is, the second extraction unit 143 extracts, as foreground regions (parts of the target structure), the voxels or small regions containing the positions indicated by the specification information. On the other hand, when no specification information has been received (step S105, No), the second extraction unit 143 extracts the small regions that are part of the target structure without it (step S107).

  Steps S106 and S107: When no specification information has been received in step S105, or when the small region to be processed does not contain a voxel or region for which specification information was received, the second extraction unit 143 determines whether the small region is part of the structure to be extracted and extracts, as a foreground region, each small region determined to be part of the structure. FIG. 9 is a diagram illustrating an example of the extraction processing by the second extraction unit 143 according to the first embodiment. For example, as illustrated in FIG. 9, the second extraction unit 143 extracts as the foreground the small regions that remain after excluding, from the divided small regions, those that are not part of the structure to be extracted.

  Here, the exclusion processing for the small regions is executed by a method based on the Quickshift method that was used to divide the first region into small regions. The exclusion processing for the small regions can also be executed in the same manner using a known method such as that of the non-patent literature "J. Ning, L. Zhang, D. Zhang, and C. Wu, 'Interactive image segmentation by maximal similarity based region merging,' Pattern Recognition, Vol. 43, No. 2, pp. 445-456, 2010."

  The Quickshift method can be interpreted as a method of calculating the value of a kernel density function at each node of a graph structure and searching for a maximum while transitioning over the graph in the direction of increasing density. In the division into small regions described above, each node corresponds to a voxel, and the distance between nodes is defined by the luminance values and the distance in the image. In the small-region exclusion processing, each node is a small region, the graph structure is formed from the positional (adjacency) relationships, and the distance between nodes (regions) need only be defined appropriately. The basic flow is executed in the same three steps as the division into small regions described above: (1) calculation of the kernel density function, (2) nearest neighbor search, and (3) clustering.

(Calculation of kernel density function)
First, for each of the M + 1 regions s_j (j = 1, 2, ..., M + 1) obtained by adding the second region to the M small regions obtained above, the second extraction unit 143 calculates the value of the kernel density function defined by the following equation (12).

Here, h(s) in equation (12) denotes the histogram of the luminance values of the voxels included in the small region s, d_H denotes the distance between histograms, d_G denotes a function defining the graph distance between small regions, and h_H and h_G denote the kernel width for histograms and the kernel width between small regions, respectively. For d_H, a distance such as the Kullback-Leibler divergence may be used, but in the present embodiment the Hellinger distance defined by the following equation (13) is used.

Here, N_b in equation (13) denotes the number of bins of the histogram, and B denotes the Bhattacharyya coefficient. For d_G, the Euclidean distance between the barycentric coordinates of the voxels contained in each small region could be used, but when a region is not convex (when it has a dent or a hole, such as another small region, near its center), the center of gravity may fall outside the region; for example, when another small region lies near the center of a region, the center of gravity of that region may fall inside the other small region.

Therefore, a graph structure is generated based on an adjacency matrix representing the adjacency relationships of the small regions (the (i, j) component is 1 when the region s_i and the region s_j are adjacent and 0 when they are not), and d_G may be set based on a distance on this graph, for example a minimum spanning tree or the shortest path (the minimum number of edges on a path connecting two nodes). That is, the graph distance is set based on a graph structure in which edges are drawn only between adjacent nodes (small regions). In the present embodiment, the case where the shortest path is used is described. Here, assuming an Epanechnikov kernel as the kernel function and kernel widths h_H = 1 and h_G = 2 in order to reduce the processing load, the kernel density function can be defined as the following equation (14).

Here, S := {s_i | d_G(s, s_i) < 2}. Since d_G(s, s_i) is a number of edges and therefore takes only integer values, d_G(s, s_i) < 2 means d_G(s, s_i) = 0 or d_G(s, s_i) = 1, and the kernel density function can thus be further defined as the following equation (15). Note that a common constant factor is omitted in equation (15).

Here, N_s in equation (15) denotes the set of small regions adjacent to the small region s. With this setting, as shown in equation (15), the kernel density function can be calculated as the sum of the Bhattacharyya coefficients between each region s and its adjacent regions.
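
From the definitions above, a plausible reconstruction of equations (13) and (15) is the following (an assumption based on the surrounding description; B denotes the Bhattacharyya coefficient over normalized histograms):

    d_H(h(s_1), h(s_2)) = \sqrt{1 - B(h(s_1), h(s_2))}, \quad B(h(s_1), h(s_2)) = \sum_{b=1}^{N_b} \sqrt{h(s_1)_b \, h(s_2)_b}    (13)

    p(s) \propto \sum_{s_i \in N_s} B(h(s), h(s_i))    (15)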

(Nearest neighbor search)
Once the value of the kernel density function has been calculated for each small region as described above, the second extraction unit 143 searches, by the following equation (16), for the small region N(s_j) at which the probability density is highest within the neighborhood D_sj of each small region s_j.

Here, the neighborhood D_sj can be set arbitrarily, but in order to reduce the processing load of the search and to prevent distant small regions from being assigned, it is desirable to set it as defined by the following equation (17).

Here, T_H in equation (17) denotes a threshold on the distance between histograms, and T_G denotes a threshold on the graph distance. When the Hellinger distance and the shortest path are used, these thresholds can be set as in the following equation (18).

As described above, the second extraction unit 143 searches for the small region N(s_j) at which the probability density is highest within the neighborhood D_sj of each small region s_j. That is, the second extraction unit 143 searches for the small region N(s_j) corresponding to the position where the density of similar small regions is highest in the vicinity of the small region s_j.

(Clustering)
After the above nearest neighbor search has been performed, the second extraction unit 143 extracts the foreground region using the result of the search. Here, the second extraction unit 143 searches for the cluster centers by recursively repeating the transition operation over small regions using the result of the nearest neighbor search, determines whether each small region is part of the structure to be extracted, extracts it as a foreground region if it is, and excludes it from the first region if it is not.

For example, the second extraction unit 143 treats each small region s_j as a node of a graph and draws an edge between the small region s_j and the region N(s_j) found by the nearest neighbor search. By drawing edges in the same way for all small regions, the second extraction unit 143 expresses the entire image as a set of tree structures. Since a cluster center is defined as a maximum of the probability density function, it can be interpreted as a small region that has no small region of higher probability density in its neighborhood. In other words, the small region at a cluster center is its own N(s_j), so no edge extends from it, and it corresponds to the root of a tree structure.

Therefore, the second extraction unit 143 searches for the root R(s_j) of the tree structure by recursively executing the transition operation s_j → N(s_j) → N(N(s_j)) for all small regions s_j. That is, the second extraction unit 143 starts from the small region s_j and transitions to the small region N(s_j) that is the result of the nearest neighbor search, then transitions to the small region N(N(s_j)) that is the result of the nearest neighbor search for N(s_j), and so on, until it reaches the root R(s_j) of the tree structure, the small region whose nearest neighbor search result is itself.

Here, when the root small region s_R(sj) is the second region, the second extraction unit 143 excludes the starting small region s_j from the first region. On the other hand, when the root small region s_R(sj) is in the first region, the second extraction unit 143 extracts the starting small region s_j as a foreground region. The second extraction unit 143 extracts the foreground region by performing the above clustering on all the small regions. When one or more of the small regions correspond to specification information (position information) indicating the structure to be extracted, extraction processing that reflects the specification information is executed; for example, a small region containing a voxel position of the extraction target structure designated manually by the operator or automatically by CAD may be kept in the first region even if its root region is the second region.
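
The region-level exclusion can be sketched as follows (illustrative only, using the simplified density of equation (15); the nearest-neighbour choice, data layout, and helper names are assumptions, not the patent's implementation):

    import numpy as np

    def bhattacharyya(h1, h2):
        """Bhattacharyya coefficient between two normalized luminance histograms."""
        return np.sum(np.sqrt(h1 * h2))

    def exclude_small_regions(hists, adjacency, background_idx):
        """Keep the small regions whose Quickshift-style root is not the background (second) region.

        hists: list of normalized histograms, one per region (background included).
        adjacency: dict mapping a region index to the set of adjacent region indices.
        background_idx: index of the second (air) region.
        """
        n = len(hists)
        # Density of each region: sum of Bhattacharyya coefficients with its neighbours (eq. 15).
        density = [sum(bhattacharyya(hists[i], hists[j]) for j in adjacency[i]) for i in range(n)]
        # Parent: among adjacent regions of higher density, the most similar one (if any).
        parent = list(range(n))
        for i in range(n):
            cands = [j for j in adjacency[i] if density[j] > density[i]]
            if cands:
                parent[i] = max(cands, key=lambda j: bhattacharyya(hists[i], hists[j]))

        def root(i):
            while parent[i] != i:
                i = parent[i]
            return i

        return {i for i in range(n) if i != background_idx and root(i) != background_idx}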

  As described above, according to the first embodiment, the first extraction unit 142 extracts the first region including the structure to be extracted from the three-dimensional image data. The second extraction unit 143 divides the first region into a plurality of small regions based on the similarity of voxel feature amounts and their positional relationship, and extracts the region of the structure by extracting, from the plurality of small regions, the small regions included in the structure based on the similarity and positional relationship with the second region, which is the region other than the first region in the three-dimensional image data. The image processing apparatus 100 according to the first embodiment therefore considers not only the luminance value of each voxel but also the similarity of the luminance value distributions between regions and their spatial adjacency, so that region extraction along the boundaries of anatomical structures can be performed and the accuracy of region extraction from three-dimensional image data can be improved.

(Second Embodiment)
In the first embodiment, the case where the small regions determined to be part of the structure to be extracted are extracted as the foreground region has been described. In the second embodiment, a case is described in which it is further identified which extraction target structure each extracted small region corresponds to. FIG. 10 is a diagram illustrating an example of the image processing apparatus 100 according to the second embodiment. As illustrated in FIG. 10, the image processing apparatus 100 according to the second embodiment differs from the first embodiment in that the storage unit 130 additionally includes an estimation criterion storage unit 132 and the control unit 140 additionally includes an identification unit 144. These differences are mainly described below.

  The estimation criterion storage unit 132 stores an estimation criterion for identifying which extraction target structure each extracted small region corresponds to. Specifically, the estimation criterion storage unit 132 stores classifiers generated by various methods. For example, when the extraction targets are the Solid region, the GGO region, and the blood vessel region, their luminance values and the textures and shapes within the regions differ, so a classifier that distinguishes them based on such feature amounts is stored.

  Since the Solid region has higher luminance and a flatter internal texture than the GGO region, it can be identified using feature amounts that take into account the luminance value distribution and gradient information within the region. The blood vessel region has luminance values similar to those of the Solid region, but its tubular shape differs from the lump-like shape of a nodule region, so it can be identified using feature amounts that capture such a three-dimensional shape. For example, the estimation criterion storage unit 132 stores a classifier that discriminates using feature amounts such as the distribution of luminance values within a region, statistics such as the mean and variance of the eigenvalues of the Hessian matrix, Total Variation, Local Binary Pattern, Gray-Level Co-occurrence Matrix, Histogram of Oriented Gradients, Haar-like features, and Higher-order Local Auto-Correlation. In addition, the estimation criterion storage unit 132 stores a combination of the above feature amounts, a dictionary of texture patterns learned using Bag of Features or Sparse Coding, or a classifier that discriminates using feature amounts derived from such a texture pattern dictionary.
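
As one example of such a shape feature, statistics of the Hessian eigenvalues over a region can be computed as sketched below (an illustrative helper, not code from the patent; tubular vessels and blob-like nodules tend to produce different eigenvalue patterns):

    import numpy as np

    def hessian_eigenvalue_stats(patch):
        """Mean and variance of the Hessian eigenvalues over a 3-D luminance patch."""
        gx, gy, gz = np.gradient(patch.astype(np.float64))
        hxx, hxy, hxz = np.gradient(gx)
        _, hyy, hyz = np.gradient(gy)
        _, _, hzz = np.gradient(gz)
        hessian = np.stack([np.stack([hxx, hxy, hxz], axis=-1),
                            np.stack([hxy, hyy, hyz], axis=-1),
                            np.stack([hxz, hyz, hzz], axis=-1)], axis=-2)
        eig = np.linalg.eigvalsh(hessian.reshape(-1, 3, 3))   # eigenvalues per voxel
        return np.concatenate([eig.mean(axis=0), eig.var(axis=0)])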

  In such a case, for example, the control unit 140 first extracts the feature amounts of each structure from a plurality of supervised images in which the depicted structures are known, generates the classifier from the extracted feature amounts, and stores it in the estimation criterion storage unit 132. The classifier may be updated each time identification is performed by the identification unit 144.

  The identification unit 144 extracts the various feature amounts described above for each small region and identifies which extraction target structure the small region corresponds to, using the classifier stored in the estimation criterion storage unit 132. After identifying the small regions, the identification unit 144 may also accept, from a doctor or other user via the input unit 110, a determination of which extraction target structure each region is. The identification unit 144 can then update the classifier stored in the estimation criterion storage unit 132 using the identification result of the classifier and the received determination. A more accurate classifier can thus be generated by updating the classifier each time the identification unit 144 identifies the feature amounts of a small region.

  Various methods can be used to generate the classifier, such as Discriminant Analysis, Logistic Regression, Support Vector Machine, Neural Network, Randomized Trees, and the Subspace method. It is also preferable to perform identification that takes the adjacency relationships of the small regions into account by combining the classifier with Conditional Random Fields or Graph Cuts.
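
A minimal sketch of the per-region identification step (assuming simple luminance statistics as features and a support vector machine; the feature choice, labels, and variable names are illustrative assumptions, not the patent's implementation):

    import numpy as np
    from sklearn.svm import SVC

    def region_features(values):
        """Simple per-region luminance statistics used as an illustrative feature vector."""
        return np.array([values.mean(), values.std(), np.percentile(values, 90)])

    # Training on supervised small regions with known labels (0: Solid, 1: GGO, 2: vessel).
    # train_regions = [np.array([...]), ...]   # luminance values per labelled region (assumed data)
    # train_labels = [0, 1, 2, ...]
    # clf = SVC(kernel="rbf").fit([region_features(r) for r in train_regions], train_labels)

    # Identification of each extracted small region.
    # predictions = clf.predict([region_features(r) for r in extracted_regions])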

  FIG. 11 is a flowchart illustrating a processing procedure performed by the image processing apparatus according to the second embodiment.

  Step S201: When the small regions have been extracted in steps S101 to S107, the identification unit 144 identifies which extraction target structure each extracted small region corresponds to.

  Step S202: Thereafter, the identification unit 144 outputs regions divided for each extraction target structure based on the identification result. FIG. 12 is a diagram for explaining an example of the processing by the identification unit 144 according to the second embodiment. For example, as illustrated in FIG. 12, the identification unit 144 identifies whether each of the small regions extracted by the second extraction unit 143 is a Solid region, a GGO region, or a blood vessel region, and outputs a result in which adjacent small regions with the same identified structure are merged into one region. That is, the identification unit 144 outputs adjacent small regions identified as GGO regions as one GGO region, and likewise outputs merged regions for the Solid region and the blood vessel region.

  As described above, according to the second embodiment, the identification unit 144 identifies which extraction target each of the small regions included in the structure region corresponds to. The image processing apparatus 100 according to the second embodiment therefore performs identification not only per voxel but also per region, which makes it possible to use statistical feature amounts within a region and to perform identification that is robust against noise. Further, since the small regions generated by the second extraction unit 143 are divided along the boundaries of anatomical structures and overdetection regions such as that shown in FIG. 3 are excluded, the feature extraction does not mix in unnecessary information and highly accurate identification is possible, as shown in FIG. 12.

(Third embodiment)
Embodiments are not limited to the first and second embodiments described above.

  In the above-described embodiment, the case where an area is extracted from CT volume data collected by an X-ray CT apparatus has been described. However, the embodiment is not limited to this, and the region may be extracted from volume data collected by other modalities such as an MRI apparatus or an ultrasonic diagnostic apparatus.

  In the above-described embodiments, the case where the image data acquisition unit 141 acquires volume data from the image storage device 300 or the medical image diagnostic device 200 has been described. However, embodiments are not limited to this. For example, a doctor may carry medical image data on a portable storage medium such as a flash memory or an external hard disk and store it in the image data storage unit 131 of the image processing apparatus 100. In such a case, the acquisition of volume data by the image data acquisition unit 141 need not be executed.

  In the above-described embodiments, the image processing apparatus 100 has been described. However, embodiments are not limited thereto. For example, the control unit 140 of the image processing apparatus 100 illustrated in the drawings may be incorporated in the medical image diagnostic apparatus 200, and the above-described processing may be executed by the medical image diagnostic apparatus 200.

  The instructions shown in the processing procedures of the above-described embodiments can be executed on the basis of a program, that is, software. A general-purpose computer that stores this program in advance and reads the program can obtain effects similar to those obtained by the image processing apparatus 100 of the above-described embodiments. The instructions described in the above-described embodiments are recorded, as a program executable by a computer, on a magnetic disk (flexible disk, hard disk, etc.), an optical disc (CD-ROM, CD-R, CD-RW, DVD-ROM, DVD±R, DVD±RW, etc.), a semiconductor memory, or a similar recording medium. The storage format may be any form as long as the computer or embedded system can read the storage medium. When the computer reads the program from the recording medium and causes the CPU to execute the instructions described in the program, operations equivalent to those of the image processing apparatus 100 of the above-described embodiments can be realized. The program may also be acquired or read via a network.

  Further, an OS (Operating System) running on the computer, database management software, or MW (Middleware) such as network software may execute a part of each process for realizing the above embodiments, based on instructions from the program installed in the computer or embedded system from the storage medium. Furthermore, the storage medium is not limited to a medium independent of the computer or embedded system, and also includes a storage medium onto which a program transmitted via a LAN (Local Area Network) or the Internet is downloaded and stored or temporarily stored. The number of storage media is not limited to one; the case where the processing of the above-described embodiments is executed from a plurality of media is also included, and the storage medium may have any configuration.

  The computer or embedded system in the embodiments executes each process of the above-described embodiments based on a program stored in a storage medium, and may have any configuration, such as a single apparatus like a personal computer or microcomputer, or a system in which a plurality of apparatuses are connected via a network. The computer in the embodiments is not limited to a personal computer; it also includes an arithmetic processing device or microcomputer included in an information processing apparatus, and is a generic term for devices and apparatuses capable of realizing the functions of the embodiments by means of a program.

(Hardware configuration)
FIG. 13 is a diagram illustrating a hardware configuration of the image processing apparatus according to the embodiments. The image processing apparatus according to the above-described embodiments includes a control device such as a CPU (Central Processing Unit) 40, storage devices such as a ROM (Read Only Memory) 50 and a RAM (Random Access Memory) 60, a communication I/F 70 that connects to a network and performs communication, and a bus 80 that connects each unit.

  A program executed by the image processing apparatus according to the above-described embodiments is provided by being incorporated in advance in the ROM 50 or the like. The program executed by the image processing apparatus according to the above-described embodiments may cause the computer to function as each unit of the above-described image processing apparatus (for example, the second extraction unit 143). In this computer, the CPU 40 can read the program from a computer-readable storage medium onto the main storage device and execute it.

  As described above, according to the first to third embodiments, the accuracy of region extraction from three-dimensional image data can be improved.

  Although several embodiments of the present invention have been described, these embodiments are presented by way of example and are not intended to limit the scope of the invention. These embodiments can be implemented in various other forms, and various omissions, replacements, and changes can be made without departing from the spirit of the invention. These embodiments and their modifications are included in the scope and gist of the invention, and are also included in the invention described in the claims and the equivalents thereof.

DESCRIPTION OF SYMBOLS 100 Image processing apparatus 142 First extraction unit 143 Second extraction unit

Claims (12)

  1. An image processing apparatus comprising:
    a first extraction unit that extracts, from three-dimensional image data and based on a threshold value corresponding to a CT value, a first region that has a relatively high luminance value and includes a structure to be extracted, and a second region that has a relatively low luminance value and is an air region in the three-dimensional image data; and
    a second extraction unit that divides the first region into a plurality of small regions based on the similarity of feature values and the positional relationships of voxels, extracts the small regions included in the structure from the plurality of small regions by excluding small regions that are adjacent to the second region and whose similarity to the second region is relatively high, and thereby extracts a region of the structure.
  2. The image processing apparatus according to claim 1, wherein the second extraction unit extracts, as a small region included in the structure, a small region that has a predetermined positional relationship with the second region and for which the similarity of the feature amount distribution to the second region is equal to or less than a predetermined threshold, or the distance between the feature amount distributions is equal to or greater than a predetermined threshold.
  3. The image processing apparatus described above, wherein the second extraction unit extracts the small regions included in the structure from the plurality of small regions based on the specific information for specifying whether a region corresponds to the structure to be extracted.
  4. The image processing apparatus according to any one of the above claims, wherein the second extraction unit divides the first region into the plurality of small regions by clustering the voxels included in the first region using luminance values and three-dimensional coordinates.
  5.   The image processing apparatus according to claim 2, wherein the feature amount distribution is a distribution of luminance values of voxels.
  6. The image processing apparatus according to claim 1, wherein the first extraction unit extracts, as the first region, a region of the three-dimensional image data in which the luminance values of voxels are equal to or greater than a predetermined threshold.
  7.   The image processing apparatus according to claim 1, wherein the three-dimensional image data is CT image data, and the structure to be extracted is an anatomical structure including a tumor and a blood vessel.
  8.   The image processing apparatus according to claim 7, wherein the tumor is observed as at least one of a solid shadow and a ground glass-like shadow.
  9. The image processing apparatus according to claim 1, further comprising an identification unit that identifies the small regions included in the region of the structure.
  10.   The image processing apparatus according to claim 3, wherein the specific information is information specified by an operator or information specified by CAD.
  11. An image processing method executed by an apparatus for extracting a region to be extracted from three-dimensional image data, the method comprising:
    a first extraction step of extracting, from the three-dimensional image data and based on a threshold value corresponding to a CT value, a first region that has a relatively high luminance value and includes a structure to be extracted, and a second region that has a relatively low luminance value and is an air region in the three-dimensional image data; and
    a second extraction step of dividing the first region into a plurality of small regions based on the similarity of feature values and the positional relationships of voxels, extracting the small regions included in the structure from the plurality of small regions by excluding small regions that are adjacent to the second region and whose similarity to the second region is relatively high, and thereby extracting a region of the structure.
  12. A medical image diagnostic apparatus comprising:
    an image data collection unit that collects three-dimensional image data;
    a first extraction unit that extracts, from the three-dimensional image data and based on a threshold value corresponding to a CT value, a first region that has a relatively high luminance value and includes a structure to be extracted, and a second region that has a relatively low luminance value and is an air region in the three-dimensional image data; and
    a second extraction unit that divides the first region into a plurality of small regions based on the similarity of feature values and the positional relationships of voxels, extracts the small regions included in the structure from the plurality of small regions by excluding small regions that are adjacent to the second region and whose similarity to the second region is relatively high, and thereby extracts a region of the structure.
JP2014142686A 2014-07-10 2014-07-10 Image processing apparatus, image processing method, and medical image diagnostic apparatus Active JP6415878B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2014142686A JP6415878B2 (en) 2014-07-10 2014-07-10 Image processing apparatus, image processing method, and medical image diagnostic apparatus

Publications (2)

Publication Number Publication Date
JP2016016265A (en) 2016-02-01
JP6415878B2 (en) 2018-10-31

Legal Events

Date Code Title Description
20151102 RD01 Notification of change of attorney (JAPANESE INTERMEDIATE CODE: A7421)
20160317 A711 Notification of change in applicant (JAPANESE INTERMEDIATE CODE: A711)
20160929 RD02 Notification of acceptance of power of attorney (JAPANESE INTERMEDIATE CODE: A7422)
20161021 RD04 Notification of resignation of power of attorney (JAPANESE INTERMEDIATE CODE: A7424)
20170613 A621 Written request for application examination (JAPANESE INTERMEDIATE CODE: A621)
20180319 A977 Report on retrieval (JAPANESE INTERMEDIATE CODE: A971007)
20180327 A131 Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131)
20180525 A521 Written amendment (JAPANESE INTERMEDIATE CODE: A523)
TRDD Decision of grant or rejection written
20180904 A01 Written decision to grant a patent or to grant a registration (utility model) (JAPANESE INTERMEDIATE CODE: A01)
20181003 A61 First payment of annual fees (during grant procedure) (JAPANESE INTERMEDIATE CODE: A61)
R150 Certificate of patent or registration of utility model (JAPANESE INTERMEDIATE CODE: R150), Ref document number: 6415878, Country of ref document: JP