WO2012011579A1 - Region-divided image data creation system for pathological tissue images and feature extraction system for pathological tissue images - Google Patents
Region-divided image data creation system for pathological tissue images and feature extraction system for pathological tissue images
- Publication number
- WO2012011579A1 (PCT/JP2011/066744)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image data
- pathological tissue
- region
- mask
- image
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/7715—Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/36—Applying a local operator, i.e. means to operate on image points situated in the vicinity of a given point; Non-linear local filtering operations, e.g. median filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/69—Microscopic objects, e.g. biological cells or cellular parts
- G06V20/695—Preprocessing, e.g. image segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30024—Cell structures in vitro; Tissue sections in vitro
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/467—Encoded features or binary features, e.g. local binary patterns [LBP]
Definitions
- the present invention relates to a pathological tissue image region division image data creation system and a pathological tissue image feature extraction system and method.
- In this specification, the term "cancer" is used both for malignant tumors in general and, in the narrow sense, for a malignant tumor derived from an epithelial tissue such as stomach biopsy tissue.
- Pathological diagnosis, performed as a definitive diagnosis, is carried out by a doctor with specialized knowledge and experience (hereinafter referred to as a pathologist), who observes under a microscope a pathological specimen of tissue collected from the patient's body by surgery or examination.
- A technique for extracting cell nuclei and cell features from a pathological tissue image and performing automatic diagnosis has been proposed [see Non-Patent Documents 1 and 2].
- However, the information obtained by the methods of Non-Patent Documents 1 and 2 is strongly influenced by the accuracy of the cell nucleus extraction process.
- A pathological diagnosis support technique using higher-order local autocorrelation (HLAC) features has also been proposed [see Non-Patent Documents 3 and 4].
- In Patent Document 1, typical color information of cell nuclei is stored in advance using a number of pathological tissue images, and cell nuclei are extracted from the pathological tissue image under examination based on that stored color information.
- In Patent Document 2 (JP 2009-9290 A), a saturation (S) component and a lightness (V) component obtained by HSV color-space conversion of a pathological tissue image are each binarized by discriminant analysis, and their logical product is taken.
- the zero area is determined as the background.
- cell nuclei are extracted by binarizing the V component by discriminant analysis.
- A histogram of the area ratio of cytoplasm to cell nucleus for each cell is employed as a feature.
- In the methods of Non-Patent Documents 5 and 6, a certain number of non-cancer images that should be recognized as normal are erroneously detected as abnormal, i.e., as indicating suspected cancer. For this approach to effectively reduce the burden on the doctor, such erroneous detection must be suppressed as much as possible.
- An object of the present invention is to provide a region-division image data creation system for pathological tissue images that can generate the region-divided image data needed to produce a region-divided image in which the background region, the cytoplasm region, and the cell nucleus region are clearer than before.
- Another object of the present invention is to provide a feature extraction system and method for a pathological tissue image that can improve the image recognition accuracy as compared with the prior art using higher-order local autocorrelation features.
- To these ends, the present application proposes a region-division image data creation system for pathological tissue images that creates region-divided image data for suppressing erroneous detection in abnormality detection from a pathological tissue image,
- and a feature extraction system and method for pathological tissue images that use, as clues for feature extraction, the importance that the pathologist attaches to each tissue at the time of diagnosis and the characteristics peculiar to pathological tissue images.
- Specifically, a pathological tissue image is divided into the three regions of cell nucleus, cytoplasm, and background, and each region is ternarized by a level value obtained by quantifying the importance of the corresponding tissue.
- Further, since pathological tissue images have no inherent directionality, HLAC feature extraction that takes rotation and inversion into account is performed on the pathological tissue image to extract its features.
- The first invention of the present application is directed to a region-division image data creation system for pathological tissue images that creates, from pathological tissue image data containing background, cytoplasm, and cell nuclei, the region-divided image data needed to produce a region-divided image in which the background region, the cytoplasm region, and the cell nucleus region are clear.
- the pathological tissue image data is composed of pixel data for a plurality of pixels displaying the background, cytoplasm, and cell nucleus.
- the region-division image data creation system for a pathological tissue image includes a first binarized image data creation unit, a second binarized image data creation unit, and a ternary image data creation unit.
- the first binarized image data creation unit creates first binarized image data from which the cell nucleus region and other regions can be distinguished from the pathological tissue image data.
- the second binarized image data creation unit creates second binarized image data from which the background region and other regions can be distinguished from the pathological tissue image data.
- The ternarized image data creation unit discriminates the cytoplasm region by taking the NOR (negative OR) of the first binarized image data and the second binarized image data, and creates the ternary image data that serves as the region-divided image data.
- Because the ternary image data is generated from the NOR of the first binarized image data, in which the cell nucleus region is distinguishable from the rest, and the second binarized image data, in which the background region is distinguishable from the rest, a region-divided image with clear background, cytoplasm, and cell nucleus regions can be generated.
- In other words, when three types of regions are to be distinguished, two binarized images are first created, each by a method matched to the features of one distinctive region; the remaining region, whose own features are unclear for distinction, is then obtained by combining the two already-clear regions, so that all three regions can be output in a clear state.
- The first binarized image data creation unit can, for example, separate the R component from the RGB image data of the pathological tissue image and binarize the separated R component by the discriminant binarization method to create first binarized image data in which the cell nucleus region is distinguishable from the other regions.
- RGB image data is image data that expresses a color by three elements: a red component signal (R), a green component signal (G), and a blue component signal (B).
- Instead of the R component, redundant-component-removed RGB image data may be used: for every pixel of the pathological tissue image, the B component minus the R component is computed in RGB color space, and pixels for which the result is smaller than 0 are set to 0. Such redundant component removal eliminates pixel information in which the B component is not dominant and which hinders extraction of the cell nucleus region.
- Further, a clipping process may be applied to all pixels of the redundant-component-removed RGB data: when the value obtained by subtracting the R component from the B component exceeds a predetermined value, it is set to that predetermined value, and the clipped B component is used as the image data for obtaining the first binarized image data.
- Clipping significantly reduces the influence of noise and of staining unevenness in the pathological specimen image.
- The second binarized image data creation unit separates the V component from the YUV image data of the pathological tissue image and binarizes the separated V component by the discriminant binarization method.
- In this way, second binarized image data in which the background region is distinguishable from the other regions is created.
- YUV image data is image data that expresses a color by three elements: a luminance signal (Y), a blue color-difference signal (U), and a red color-difference signal (V).
- the second binarized image data creation unit may be configured to project the entire pixel data of the YUV image data onto the V axis in the YUV color space and separate the V component.
- Since the first and second binarized image data, obtained from two types of image data of different kinds (RGB image data and YUV image data), contain components that respectively clarify the cell nucleus region and the background region, the cytoplasm region can be clarified by the ternarized image data creation unit.
- The second binarized image data creation unit may also be configured to create second binarized image data that distinguishes the background region from the other regions by principal component analysis of the pathological tissue image data.
- In this case, the second binarized image data creation unit may project all pixel data of the pathological tissue image data onto one of the plurality of principal component axes obtained by principal component analysis, normalize the projection, and binarize the normalized data by the discriminant binarization method to produce the second binarized image data.
- For example, the data obtained by projecting all pixel data of the pathological tissue image data onto the first principal component axis and normalizing it may be binarized by the discriminant binarization method to obtain the second binarized image data.
- Alternatively, the second binarized image data creation unit can be configured to create second binarized image data that distinguishes the background region from the other regions based on the analysis result on the second principal component axis obtained by principal component analysis of CIE Luv image data of the pathological tissue image.
- CIE Luv image data is image data expressed in the CIE Luv color system defined by the International Commission on Illumination (Commission Internationale de l'Eclairage).
- The CIE Luv color system is a uniform color space designed so that distances in the color space approximate perceptual color differences as judged by humans. Regions can therefore be distinguished with a color-discrimination behavior similar to that of a human (pathologist).
- To obtain CIE Luv image data, RGB data is first converted to the XYZ color system, the L* value is calculated from the Y value, and u* and v* are calculated from the XYZ values and the L* value.
- the feature extraction system for a pathological tissue image includes a higher-order local autocorrelation calculation unit, an element feature vector calculation unit, and a feature extraction unit.
- The higher-order local autocorrelation calculation unit applies a predetermined local pattern mask to the pathological tissue image created by the region-division image data creation system and multiplies the pixel values at the mask candidates within the mask range of the local pattern mask. While scanning the local pattern mask over the entire image, these products are accumulated to obtain the product-sum value over the whole pathological tissue image.
- Alternatively, the pathological tissue image may be divided into a plurality of blocks, the products accumulated while scanning the local pattern mask within each block, and the sum of the per-block totals taken as the product-sum value.
- the product sum value obtained at this time is referred to as a feature value in the present application.
- For a local pattern mask, with m and n integers, a grid of (2m + 1) × (2n + 1) cells is set as the mask range, and those (2m + 1) × (2n + 1) cells are defined as the mask candidates.
- A local pattern mask is formed by selecting the mask candidate located at the center of the mask range as the central mask and further selecting an arbitrary number (zero or more) of mask candidates from the mask range.
- Specifically, from the eight mask candidates located at {(m, 0), (m, n), (0, n), (−m, n), (−m, 0), (−m, −n), (0, −n), (m, −n)}, a plurality of local pattern masks formed by selecting 0, 1, or 2 candidates are recommended. As described above, it is characteristic of HLAC to calculate the correlation of a plurality of mask candidates over the pixels limited by the mask range; by scanning the entire image or a partial region with such a set of local pattern masks, high-quality features can be extracted.
- The positions (coordinates) of the mask candidates selectable other than the central mask can be defined as the integer coordinates closest to the intersections of the following two formulas.
- the element feature vector generation unit concatenates feature quantities that are product-sum values obtained for each of a plurality of local pattern masks by the higher-order local autocorrelation calculation unit to obtain an element feature vector.
- Alternatively, the element feature vector generation unit may divide the plurality of local pattern masks into a plurality of invariant feature groups, each consisting of masks that can be regarded as equivalent under rotation and/or inversion,
- calculate for each group the linear sum of the feature values obtained from all the local pattern masks belonging to that group, and obtain the element feature vector by concatenating the linear sums obtained for the invariant feature groups.
- the rotation angle may be 45 °, 90 °, 180 °, or the like.
- the inversion may include inversion in the vertical direction (X axis symmetry), left and right direction (Y axis symmetry) and oblique direction (origin symmetry), or a combination thereof.
- The feature extraction unit combines a plurality of element feature vectors obtained from a plurality of local pattern mask sets whose mask-range sizes differ, obtained by changing the values of m and n described above, to generate the final feature vector.
- The mask range of a local pattern mask set is defined by the pair (m, n). That is, by preparing a plurality of (in this case, p) pairs (m1, n1), (m2, n2), …, (mp, np), a plurality of (in this case, p) element feature vectors are generated, and the finally obtained feature vector is p times as long as each element feature vector.
- In pathological diagnosis based on pathological tissue images, the doctor is conscious of neither the orientation nor the front/back of the specimen. Therefore, if local pattern masks that become equivalent under 45° rotations and inversion are grouped together,
- image recognition accuracy can be improved over the prior art even though the number of feature values on which the determination is based is reduced. This is because the features of the pathological tissue are aggregated into a small number of feature values, instead of being dispersed over the many features that arise when rotations and inversions are distinguished.
- In the simplest HLAC feature calculation, the pixel values themselves are multiplied; pixels with large values then influence the feature value far more than pixels with small values, which can degrade the quality of the image features. Therefore, instead of multiplying the pixel values at the mask positions, the frequency of occurrence (count) of each combination of pixel values at the mask positions can be accumulated over the entire image (or partial region); this is called a CILAC feature.
- With CILAC features, a pixel with a small value contributes as much as a pixel with a large value, so features that better capture the essence of the target can be extracted regardless of image brightness. As a result, using CILAC features can improve determination accuracy over HLAC features.
- Combinations in which the background is included in the local pattern mask are ignored; in FIG. 29, the local patterns containing a circle indicating the background are not used.
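- As an illustration of the counting idea behind CILAC, the following minimal sketch tallies the co-occurrence frequency of color-index pairs at one displacement, skipping pairs that touch the background index. The function name, the 3-index labeling (0 = background, 1 = cytoplasm, 2 = nucleus), and the single-displacement interface are assumptions, not the patent's implementation.

```python
import numpy as np

def cilac_pair_counts(labels: np.ndarray, dy: int, dx: int,
                      num_indices: int = 3) -> np.ndarray:
    # Co-occurrence counts of color indices at displacement (dy, dx),
    # with dy, dx in {-1, 0, 1}; assumes index 0 is the background.
    h, w = labels.shape
    ref = labels[1:h - 1, 1:w - 1]                      # reference points
    par = labels[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]  # correlation partners
    counts = np.zeros((num_indices, num_indices), dtype=np.int64)
    valid = (ref != 0) & (par != 0)     # ignore pairs that include background
    np.add.at(counts, (ref[valid], par[valid]), 1)      # accumulate frequency
    return counts
```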
- the feature extraction method for a pathological tissue image performs the following steps.
- The local mask patterns are divided into a plurality of invariant feature groups, each consisting of patterns that are regarded as equivalent under 45° rotations and inversion.
- When the order of the higher-order correlation is 0, 1, or 2 and the displacement directions are limited to the nine directions around the reference point (no direction, up, down, left, right, upper right, upper left, lower right, lower left), the 35 local pattern masks can be used.
- The vertical and horizontal distances between the reference point and its correlation partners, i.e., the image correlation width, can be determined arbitrarily by the user according to the purpose. For example, when both are 1, the correlation of pixels within a narrow 3 × 3 pixel area is obtained. It is this vertical/horizontal image correlation width that defines a local pattern mask set.
- For example, a plurality of element feature vectors can be generated using a plurality of local pattern mask sets, such as a local pattern mask set with (vertical, horizontal) = (1, 1) and one with (2, 2), and combined into the final feature vector.
- FIG. 1 is a block diagram showing the configuration of a pathological diagnosis apparatus including a pathological tissue image region-division image data creation system, a pathological tissue image feature extraction system, and a diagnosis unit; FIG. 2 is a flowchart showing the algorithm of the program used when the apparatus is implemented on a computer.
- Detailed flowcharts of steps ST13 and ST14 of FIG. 2 are also provided, together with a figure showing the data set used in the verification experiments.
- (A) and (B) of one figure show a non-cancer image and a cancer image, and another figure shows the methods used in the comparative experiment to verify the effectiveness of the ternarization method.
- (A) to (D) of a further figure show the original image, the grayscale image, the binarized image, and the region-divided image obtained in the present embodiment, and additional figures show the verification experiment results.
- (A) and (B) of another figure show verification experiment results, and a block diagram shows the configuration of another pathological diagnosis apparatus.
- A flowchart shows the algorithm for implementing, in software, the ternarization used in the configuration of FIG. 20; further figures show a principal component analysis result conceptually, illustrate the image processing of the embodiment of FIG. 20, and show, as a block diagram, the configuration of the region-division image data creation system of yet another pathological diagnosis apparatus.
- In one figure, (A) is the original grayscale image, (B) is the extracted image obtained in the embodiment of FIG. 1, and (C) and (D) are the extracted images obtained in the other embodiments. In another, (A) shows different mask candidates (black blocks and shaded blocks) for creating a local pattern mask, and (B) is a table giving the coordinates of those mask candidates. A final figure shows an example of the 3 × 3 local pattern masks of CILAC up to the first order.
- The present embodiment aims to suppress erroneous detection in abnormality detection from pathological tissue images, and proposes a technique for extracting higher-order local autocorrelation features (hereinafter abbreviated as HLAC features) that uses, for feature extraction, the importance of the tissues on which the pathologist focuses at the time of diagnosis and the characteristics specific to pathological tissue images.
- Specifically, the pathological tissue image is divided into the three regions of cell nucleus, cytoplasm, and background, and each region is ternarized by a level value obtained by quantifying the importance of each tissue. Further, since the pathological tissue image has no directional feature, HLAC feature extraction considering rotation and inversion is performed on the pathological tissue image.
- FIG. 1 is a block diagram showing the configuration of a pathological diagnosis apparatus comprising the pathological tissue image region-division image data creation system 1 of the present invention, the pathological tissue image feature extraction system 3, and a diagnosis unit 5.
- FIG. 2 is a flowchart showing an algorithm of a program used when the pathological diagnosis apparatus of FIG. 1 is realized using a computer.
- FIG. 3 is a flowchart showing an algorithm of a program for realizing the region-division image data creation system 1 for a pathological tissue image.
- The pathological tissue image region-division image data creation system 1 includes an RGB image data generation unit 11, a first binarized image data creation unit 12, a YUV image data generation unit 13, a second binarized image data creation unit 14, and a ternarized image data creation unit 15.
- the pathological tissue image feature extraction system 3 includes a higher-order local autocorrelation calculation unit 31, an element feature vector calculation unit 32, and a feature extraction unit 33.
- the diagnosis unit 5 performs pathological diagnosis based on the output of the feature extraction system 3 of the pathological tissue image.
- a pathological diagnosis is performed by executing a learning process configured by steps ST1 to ST7 and a test process configured by steps ST8 to ST14.
- a normal subspace is formed by learning using non-cancerous pathological tissue images as teacher data.
- First, a non-cancerous pathological tissue image is read as teacher data (pathological tissue image teacher data) (step ST1).
- Next, the pathological tissue image teacher data is ternarized (step ST2), and HLAC features are extracted from the ternary image (step ST3).
- The HLAC features are then reconstructed into rotation/inversion invariant HLAC features (step ST4), and a feature vector is generated from the reconstructed features (step ST5). A normal subspace representing non-cancerous pathological tissue images is formed (step ST7) by principal component analysis of the feature vectors (step ST6).
- In the test process, pathological tissue image test data, which includes cancerous pathological tissue images, is read (step ST8).
- The pathological tissue image test data is ternarized (step ST9), and HLAC features are extracted from the ternary image (step ST10).
- Reconstruction into rotation/inversion invariant HLAC features is then performed (step ST11), after which a feature vector is generated from the reconstructed features (step ST12). The degree of deviation of the feature vector from the normal subspace formed in the learning process is calculated, and abnormality detection is performed (steps ST13 and ST14).
- The pathological tissue image region-division image data creation system 1 and the pathological tissue image feature extraction system 3 in FIG. 1 execute steps ST1 to ST12 of the learning and test processes described above, and the diagnosis unit 5 executes steps ST13 and ST14 of FIG. 2.
- The region-division image data creation system 1 for pathological tissue images performs, in particular, the ternarization of steps ST2 and ST9. The first binarized image data creation unit 12 creates, from the non-cancerous pathological tissue image teacher data or the pathological tissue image test data (hereinafter, pathological tissue image data), first binarized image data in which the cell nucleus region is distinguishable from the other regions.
- the first binarized image data creation unit 12 takes in RGB image data of a pathological tissue image from the RGB image data generation unit 11.
- the RGB image data is image data that reproduces an image in a wide range of colors by mixing three primary colors of red (Red), green (Green), and blue (Blue).
- It separates the R component from the RGB image data of the pathological tissue image and binarizes the separated R component by the discriminant binarization method to create first binarized image data that distinguishes the cell nucleus region from the other regions.
- the separation of the R component can be performed, for example, by projecting all pixel data of the RGB image data onto the R axis in the RGB color space.
- PI in FIG. 4 shows an example of a pathological tissue image.
- PI1 is the binarized image based on the first binarized image data created by the first binarized image data creation unit 12, in which the cell nucleus region is distinguishable from the other regions.
- The pathological tissue image PI shows a pathological specimen stained (HE staining) with hematoxylin, which dyes the cell nucleus region blue-purple, and eosin, which dyes the cytoplasm, fibers, erythrocytes, and other non-nuclear structures various shades of red according to their properties.
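- A minimal sketch of this first binarization (assuming an (H, W, 3) uint8 RGB array `rgb`; the function name and the use of scikit-image's Otsu threshold are illustrative choices, not the patent's implementation):

```python
import numpy as np
from skimage.filters import threshold_otsu

def nucleus_mask(rgb: np.ndarray) -> np.ndarray:
    r = rgb[..., 0].astype(np.float64)   # separate the R component
    t = threshold_otsu(r)                # discriminant (Otsu) binarization
    # HE-stained nuclei are dark blue-purple, so their R values fall below t.
    return r < t                         # True = cell nucleus region
```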
- the second binarized image data creation unit 14 reads the YUV image data of the pathological tissue image PI from the YUV image data generation unit 13.
- YUV image data is a kind of color-space representation that expresses a color by three elements: a luminance signal (Y), a blue color-difference signal (U), and a red color-difference signal (V).
- The second binarized image data creation unit 14 creates second binarized image data in which the background region is distinguishable from the other regions of the pathological tissue image data. Specifically, it separates the V component from the YUV image data and binarizes the separated V component by the discriminant binarization method. More specifically, it projects all pixel data of the YUV image data onto the V axis in YUV color space to separate the V component. PI2 in FIG. 4 shows the binarized image based on the second binarized image data.
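- A corresponding sketch of the second binarization; the BT.601 conversion weights and the threshold polarity are assumptions consistent with the description above, not the patent's exact procedure:

```python
import numpy as np
from skimage.filters import threshold_otsu

def background_mask(rgb: np.ndarray) -> np.ndarray:
    rgbf = rgb.astype(np.float64)
    y = 0.299 * rgbf[..., 0] + 0.587 * rgbf[..., 1] + 0.114 * rgbf[..., 2]
    v = 0.877 * (rgbf[..., 0] - y)       # V: difference of red and luminance
    t = threshold_otsu(v)                # discriminant binarization of V
    # Unstained background is near-white, so its red difference is small.
    return v < t                         # True = background region
```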
- The ternarized image data creation unit 15 creates the ternary image data that serves as the region-divided image data by taking the NOR of the first binarized image data and the second binarized image data.
- Suppose that in the first binarized image data the cell nucleus is "true 1" and everything else is "false", and that in the second binarized image data the background is "true 2" and everything else is "false".
- Then a pixel that is ("true 1", "false") in the first and second binarized image data becomes cell nucleus,
- a pixel that is ("false", "true 2") becomes background,
- and a pixel that is ("false", "false") becomes cytoplasm.
- PI3 in FIG. 4 is the ternary image based on the ternary image data.
- Since the first and second binarized image data, obtained from two types of image data (RGB and YUV) with different feature components, respectively contain the components that clarify the cell nucleus region and the background region, the resulting ternary image data has clear background, cytoplasm, and cell nucleus regions.
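- The NOR combination can be sketched as follows; the 0/1/2 label coding is an illustrative choice, and the two inputs are the boolean masks from the sketches above:

```python
import numpy as np

def ternarize(nucleus: np.ndarray, background: np.ndarray) -> np.ndarray:
    # NOR of the two masks: whatever is neither nucleus nor background
    # is discriminated as cytoplasm.
    cytoplasm = ~(nucleus | background)
    labels = np.zeros(nucleus.shape, dtype=np.uint8)  # 0 = background
    labels[cytoplasm] = 1                             # 1 = cytoplasm
    labels[nucleus] = 2        # 2 = nucleus (takes precedence on overlap)
    return labels
```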
- In this embodiment, the pathological tissue image is thus first divided into the three regions of cell nucleus, cytoplasm, and background, and a level value obtained by quantifying the importance of each tissue is then set in each region.
- The cell nucleus region, stained blue-purple by HE staining, differs markedly in its R component value from the non-nucleus regions in RGB color space.
- Therefore, the R component of the pathological tissue image (PI in FIG. 4) is separated (step ST22), and only this R component is binarized using Otsu's binarization method (step ST23).
- As a result, the image is divided into the blue-purple-stained cell nucleus area (white) and the area other than the cell nucleus (black).
- The background area is not stained by HE staining; it appears white with the highest luminance values and carries little color information.
- the color information of the area other than the background is mainly the red component.
- the background region is extracted using the V component of the YUV color space, which is the difference between the luminance and the red component (steps ST24 and ST25).
- the same binarization process is performed on the V component of the pathological tissue image by Otsu's binarization method (step ST26).
- the background area (black) that has not been stained by HE staining is divided.
- Next, the two images PI1 and PI2, divided into nucleus/other and background/other respectively, are integrated, and a pathological tissue image divided into the three regions of nucleus (white), cytoplasm (gray), and background (black) is synthesized, as shown in PI3 of FIG. 4.
- That is, in step ST27, the area that is neither the extracted cell nucleus nor the background is defined as the cytoplasm.
- In step ST28, an image is generated in which predetermined level values are set for the pixels of the divided cell nucleus, cytoplasm, and background regions. That is, a level value obtained by quantifying the importance of each tissue is assigned to the pixels of each region of the region-divided pathological tissue image.
- In a pathologist's diagnosis, information on the cell nucleus and cytoplasm is mainly considered comprehensively, but the most important information concerns the nuclei, such as the size and arrangement of the cell nuclei.
- Therefore, appropriate level values are set for the pixels of the divided cell nucleus, cytoplasm, and background regions, so that the importance of each tissue is reflected in each region.
- In this embodiment, with the level value of the background fixed at 0 (i.e., the value of the pixels contained in the background region is 0) and the level value of the cytoplasm fixed at 2 (i.e., the value of the pixels contained in the cytoplasm region is 2), the level value of the cell nucleus is set to 14 (i.e., the value of the pixels contained in the cell nucleus region is 14), this being the level value that best distinguishes non-cancer from cancer.
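- A one-line sketch of this level-value assignment, using the 0/1/2 label coding assumed above and the level values 0, 2, and 14 from the text:

```python
import numpy as np

LEVELS = np.array([0, 2, 14], dtype=np.uint8)  # background, cytoplasm, nucleus

def apply_level_values(labels: np.ndarray) -> np.ndarray:
    return LEVELS[labels]   # per-pixel lookup: label 0/1/2 -> level 0/2/14
```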
- the pathological tissue image feature extraction system 3 includes a higher-order local autocorrelation calculation unit 31, an element feature vector calculation unit 32, and a feature extraction unit 33.
- The higher-order local autocorrelation calculation unit 31 executes steps ST3 and ST10 in FIG. 2. It individually scans the ternary pathological tissue image PI3, created by the region-division image data creation system 1 described above, with the 35 predetermined local pattern masks shown in FIG. 6, and calculates a product-sum value (HLAC feature value) for each local pattern mask.
- the element feature vector calculation unit 32 executes steps ST4 and ST11 of FIG.
- the feature extraction unit 33 executes steps ST5 and ST12 in FIG. 2, and combines element feature vectors obtained from a plurality of local pattern mask sets to generate a final feature vector.
- When f(r) denotes the ternarized target pathological tissue image, the feature value obtained from each local pattern mask is the Nth-order autocorrelation function with respect to the displacement directions (a1, …, aN): x_N(a1, …, aN) = Σ_r f(r) f(r + a1) ⋯ f(r + aN) … (1)
- In the present embodiment, the order N of the higher-order autocorrelation function is 0, 1, or 2, and the displacement directions a are limited to {no direction, right, upper right, up, upper left}; the 35-dimensional vector xi (i = 1, …, 35) calculated from the 35 local pattern masks shown in FIG. 6 is computed as the HLAC features.
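- The product-sum computation of equation (1) can be sketched as below. Each mask is given as a list of displacements (dy, dx) in {−1, 0, 1}, scaled by the correlation width (m, n); the vectorized slicing is an implementation choice, and the example masks are only a few of the 35:

```python
import numpy as np

def hlac_feature(f: np.ndarray, mask: list, m: int = 1, n: int = 1) -> float:
    # mask always contains the central mask (0, 0); a repeated entry
    # corresponds to squaring (or cubing) that pixel value.
    h, w = f.shape
    prod = np.ones((h - 2 * m, w - 2 * n), dtype=np.float64)
    for dy, dx in mask:
        prod *= f[m + dy * m : h - m + dy * m, n + dx * n : w - n + dx * n]
    return float(prod.sum())   # product-sum over all reference positions

# e.g. x0  = hlac_feature(img, [(0, 0)])           # 0th order
#      x1  = hlac_feature(img, [(0, 0), (0, 1)])   # 1st order, right partner
#      x1b = hlac_feature(img, [(0, 0), (0, 0)])   # 1st order, centre squared
```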
- the arrangement of mask candidates in nine directions centered on one mask candidate is the basis for configuring the local pattern mask.
- The central mask is the reference point, and the mask candidates located at the inner edge of the mask range in the nine surrounding directions can be correlation partner points located in all displacement directions.
- all mask candidates can be mask candidates for constituting a local pattern mask.
- the center mask is the reference point, and the two left and right mask candidates are the correlation partner points.
- Alternatively, the central mask may be the reference point and, at the same time, its own single correlation partner; for example, if the pixel value at a reference point in the target image is 5, then 5 × 5 contributes to the feature value at that reference point.
- Likewise, the central mask may be the reference point while serving as two correlation partners, in which case the pixel value is cubed.
- the numbers in the mask indicate powers corresponding to the number of correlation partners.
- The element feature vector calculator 32 implements steps ST4 and ST11 in FIG. 2.
- the element feature vector calculation unit 32 generates an element feature vector by connecting a plurality of HLAC feature quantities obtained by scanning an image with a plurality of local mask patterns.
- Element feature vectors may be generated by combining linear sums of feature quantities obtained from local pattern masks belonging to each group.
- In this case, a plurality of mask patterns that can be regarded as equivalent when rotated by a specific angle and/or inverted are divided into invariant feature groups. For example, 45° is used here as the rotation angle, but it may be 90° or 180°.
- Inversion includes inversion in the vertical direction (X-axis symmetry), left-right direction (Y-axis symmetry), and oblique direction (origin symmetry).
- In the present embodiment, the local pattern masks of FIG. 6 are divided into eight invariant feature groups.
- For example, when the central mask of a mask range composed of 3 × 3 cells is the reference pixel position and a 0th-order local mask pattern is used, the linear sum of the pixel values is calculated as the feature value.
- For higher orders, the products of the pixel value of the reference pixel (the pixel at the central mask) and the pixel values at the positions designated by the mask candidates other than the central mask are computed, and these products are accumulated over the entire image (or partial region) to obtain the feature value of that local mask pattern.
- A double circle indicates that the pixel value at the pixel designated by the mask candidate is squared, and a triple circle indicates that it is cubed.
- The local mask patterns belong to invariant feature groups (in the table of FIG. 7, one group is 0th order, two groups are 1st order, and five groups are 2nd order).
- The linear sum of the product-sum values (feature values) obtained by scanning with all the local pattern masks belonging to each of the eight invariant feature groups is calculated as one feature value, as in the rightmost pattern shown in FIG. 7. For example, in an invariant feature group to which four local pattern masks belong, the sum of the four feature values obtained by scanning the image with those masks is the feature value of that group.
- In this way, cells and cell nuclei of the same shape are treated as having the same properties whatever their orientation in the pathological tissue image, and the image recognition accuracy improves dramatically.
- the extracted HLAC features are reconstructed as rotation / inversion invariant HLAC features.
- inversion invariance is also considered.
- For example, one of the local pattern masks in FIG. 6 is rotationally symmetric to local pattern masks No. 7, No. 8, and No. 9, so a single rotation/inversion invariant feature value is calculated as the linear sum of these four feature values.
- In this way, the 8-dimensional rotation/inversion invariant features yj (j = 1, …, 8) shown in the drawing are obtained.
- After obtaining the feature values of the invariant feature groups, the element feature vector calculation unit 32 combines them to generate the element feature vector.
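- Folding the 35 feature values into the 8 invariant features is then a grouped sum. The grouping itself comes from the rotation/inversion symmetry of the masks (FIGS. 6 and 7), which is why it is left as an input here rather than hard-coded:

```python
import numpy as np

def invariant_features(x: np.ndarray, groups: list) -> np.ndarray:
    # x: the 35 HLAC feature values; each entry of `groups` lists the indices
    # of masks that map onto one another under 45-degree rotation/inversion.
    return np.array([x[list(g)].sum() for g in groups])
```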
- the feature extraction unit 33 combines a plurality of element feature vectors obtained from a plurality of local pattern mask sets having different sizes of mask ranges to generate a final feature vector.
- The mask range of a local pattern mask set is defined by the pair (m, n) described above. That is, by preparing a plurality of (in this case, p) pairs (m1, n1), (m2, n2), …, (mp, np), a plurality of (in this case, p) element feature vectors are generated, and the finally obtained feature vector is p times as long as each element feature vector.
- FIG. 10A is a flowchart showing details of steps ST3 to ST5 and ST10 to ST12 in FIG. 2, and FIG. 10B is a diagram used for explaining the determination of the image correlation width.
- As the plurality of local pattern masks, with m and n integers, a mask is formed from the central mask (black block) among the mask candidates of a mask range in which (2m + 1) × (2n + 1) cells are arranged in a lattice,
- together with one or more mask candidates (the eight shaded blocks) selected from the candidates located within the predetermined mask range centered on the central mask.
- In FIG. 10(B), the "predetermined mask range" corresponds to a 9 × 9 lattice.
- The Euclidean distance between a mask candidate located at a corner of the predetermined mask range and the central mask is longer than that between a mask candidate at the middle of an inner edge and the central mask; the "image correlation width of the predetermined mask range" is therefore defined to include the difference between the two distances. If the coordinates of the central mask are (0, 0), the coordinates of the eight masks are as shown in FIG. 10(B).
- In step ST31 of FIG. 10(A), the variable i is set to 1.
- In step ST32, scanning is performed with the i-th of the plurality of image correlation widths prepared in advance set as the predetermined value.
- Setting the first image correlation width to the predetermined value means selecting the first of the plurality of (m, n) pairs prepared in advance, in order to determine the mask range used when scanning the pathological tissue image.
- In step ST33, an element feature vector is calculated based on equation (1) above using the local pattern masks of FIG. 6. That is, the feature values obtained by scanning the image with each local pattern mask are combined and vectorized into an element feature vector.
- In step ST34, it is checked whether the value of i equals the predetermined value p. If not, 1 is added to i in step ST35, and steps ST32 and ST33 are executed again. If i equals p, the element feature vectors generated so far are combined into the final feature vector in step ST36, and the process ends.
- Step ST33 corresponds to steps ST3 and ST4 and steps ST10 and ST12 in FIG. 2.
- Next, the formation of the normal subspace (step ST7) using the principal component analysis (step ST6) of FIG. 2 will be described.
- a normal subspace is formed using principal component analysis.
- The formation of the normal subspace is described in detail in Non-Patent Document 6.
- The normal subspace is a subspace spanned by the principal components of the feature vectors extracted from the learning (non-cancer) images.
- the distance between the normal subspace and the feature of the test pathological tissue image is calculated as a deviation degree.
- In step ST14, if the calculated degree of deviation is large, the features differ from those of non-cancer images, so the image can be recognized as abnormal, indicating suspected cancer.
- FIG. 11 shows a detailed flowchart of the normal subspace formation steps ST6 and ST7,
- FIG. 12 shows a detailed flowchart of the deviation degree calculation step ST13 of FIG. 2,
- and FIG. 13 shows a detailed flowchart of the abnormality detection step ST14 of FIG. 2.
- In step ST61, a set of feature vectors (rotation/inversion invariant feature vectors) is read.
- In step ST62, principal component analysis is applied to the rotation/inversion invariant feature vectors yj to obtain the principal component vectors that form the normal subspace.
- This principal component vector can be obtained by solving the eigenvalue problem of the autocorrelation matrix Ry of the feature vector set ⁇ yj ⁇ .
- a normalized feature vector obtained by normalizing each feature vector may be used.
- In the eigenvalue decomposition Ry U = U Λ, U is a matrix having the eigenvectors as columns,
- and Λ is a diagonal matrix having the eigenvalues as diagonal elements.
- The eigenvectors correspond to the principal component vectors,
- and the eigenvalues correspond to contribution rates indicating how well each principal component can explain the whole data. Therefore, the eigenvectors are rearranged in descending order of contribution rate (step ST71).
- the dimension number K corresponding to the number forming the normal subspace is determined from the principal component vectors (that is, the eigenvectors).
- The number of dimensions K is determined using the cumulative contribution rate, which quantifies how much the principal components contribute to expressing the information of the analyzed data: η_K = (λ_1 + … + λ_K) / (λ_1 + … + λ_M).
- The normal subspace is the space spanned, as basis vectors, by the eigenvectors U_K = {u_1, …, u_K} up to the dimension K that satisfies the cumulative contribution rate condition η_K ≥ C (step ST72).
- Here, C is the cumulative contribution rate condition,
- λ_i is the contribution rate of the principal component vector u_i,
- and M is the total number of eigenvalues.
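- A sketch of this normal subspace formation (steps ST61/ST62 and ST71/ST72); the autocorrelation-matrix estimate and the variable names are assumptions consistent with the text:

```python
import numpy as np

def normal_subspace(Y: np.ndarray, C: float = 0.99999) -> np.ndarray:
    # Y: (num_samples, dim) matrix of rotation/inversion invariant vectors.
    R = Y.T @ Y / len(Y)                      # autocorrelation matrix Ry
    eigvals, eigvecs = np.linalg.eigh(R)      # eigenvalue problem of Ry
    order = np.argsort(eigvals)[::-1]         # descending contribution rate
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    eta = np.cumsum(eigvals) / eigvals.sum()  # cumulative contribution rate
    K = int(np.searchsorted(eta, C) + 1)      # smallest K with eta_K >= C
    return eigvecs[:, :K]                     # U_K: normal-subspace basis
```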
- Next, the deviation degree calculation and abnormality detection in steps ST13 and ST14 of FIG. 2 will be described.
- the distance between the feature vector extracted from the test pathological tissue image and the normal subspace is used as an abnormality detection index.
- This degree of deviation can be calculated as the projection of the feature vector onto the orthogonal complement of the normal subspace, as follows (steps ST13A and ST13B in FIG. 12). The projector P onto the normal subspace is given by P = U_K U_K^T,
- where U_K^T is the transposed matrix of U_K and K is the number of dimensions.
- The deviation degree is then d⊥² = y^T (I − P) y,
- where y is the feature vector of the test pathological tissue image, y^T is its transposed matrix, and I is the identity matrix.
- By comparing d⊥ with a preset threshold value H, an abnormality indicating suspected cancer can be detected (steps ST14A and ST14B shown in FIG. 13).
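- The deviation degree and threshold test can then be sketched as follows (H is a preset threshold, e.g. derived from the learning-data statistics as in the experiments below):

```python
import numpy as np

def deviation(y: np.ndarray, U_K: np.ndarray) -> float:
    P = U_K @ U_K.T                   # projector onto the normal subspace
    residual = y - P @ y              # component in the orthogonal complement
    return float(residual @ residual) # squared deviation degree d_perp^2

def is_abnormal(y: np.ndarray, U_K: np.ndarray, H: float) -> bool:
    return deviation(y, U_K) > H      # above H: abnormal (suspected cancer)
```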
- Figure 14 shows the data set used in the verification experiment.
- As learning teacher data, 250 samples diagnosed as non-cancerous by pathologists were used; as test data, another 50 non-cancer samples distinct from the learning data and 24 cancer samples were used.
- The pathological tissue images used in the experiments, such as the non-cancer image shown in FIG. 15(A) and the cancer image shown in FIG. 15(B), were taken at a microscope magnification of 20× and saved in JPEG format at a size of 1280 × 960 pixels.
- Verification experiment 1: verification of the effectiveness of the ternarization method
- A comparative experiment was performed using the three methods shown in FIG. 16.
- Rotation/inversion invariant features were not reconstructed here, and the result of the best condition for each method was compared among the three cumulative contribution rate conditions C = 0.999, 0.9999, and 0.99999.
- FIG. 17 shows the original image (FIG. 17A) and the area-divided image of each method converted to gray scale for visual comparison.
- the cell nuclei, cytoplasm, and background pixel values are displayed as 255, 127, and 0, respectively.
- the gray scale (FIG. 17B) is closest to the original image, and the structure of the tissue is clearly visible.
- In the binarized image (FIG. 17C), most of the cytoplasm is included in the background region.
- In contrast, with the ternarization of the present embodiment, the cell nucleus, cytoplasm, and background are appropriately divided into regions.
- Fig. 18 shows the results of verification experiments using each method.
- In FIG. 18, the number of false detections (FP (1σ)) is smallest for the method of the present embodiment, confirming its effectiveness.
- With binarization, the number of false detections increases because the number of gradations per pixel is small compared with grayscale, reducing the amount of information representing the characteristics of the pathological tissue.
- With the method of the present embodiment, the number of false detections decreases even though the number of gradations per pixel is reduced to three, against the 256 values of grayscale.
- Fig. 19 shows the results of the verification experiment.
- In the graphs, the threshold of mean + standard deviation (σ) is denoted 1σ,
- and mean + 2 × standard deviation is denoted 2σ.
- FIG. 20 is a block diagram showing the configuration of another pathological diagnosis apparatus comprising a pathological tissue image region-division image data creation system 101, a pathological tissue image feature extraction system 103, and a diagnosis unit 105, which differ in configuration from the region-division image data creation system 1 shown in FIG. 1.
- FIG. 21 is a flowchart showing an algorithm when the ternarization used in the configuration of FIG. 20 is implemented by software.
- This embodiment differs from the embodiment of FIG. 1 in that the second binarized image data creation unit 114 distinguishes the background region from the other regions by principal component analysis of all pixel values of the pathological tissue image data.
- The other points are the same as in the embodiment of FIG. 1; accordingly, the components shown in FIG. 20 are given the same reference numerals as the corresponding components in FIG. 1, and their description is omitted.
- Specifically, the second binarized image data creation unit 114 used in the present embodiment projects all pixel data of the pathological tissue image data onto the first principal component axis obtained by principal component analysis of all pixel values, normalizes the projection, and binarizes it by the discriminant binarization method to generate the second binarized image data.
- Alternatively, the second binarized image data may be created by projecting and normalizing all pixel data of the pathological tissue image data onto another principal component axis instead of the first principal component axis and binarizing the result by the discriminant binarization method.
- All pixel data may also be projected and normalized onto a plurality of principal component axes, and the second binarized image data created by taking the logical product of the plurality of binarized images obtained by the discriminant binarization method.
- Besides the logical product, other operations such as the logical sum may also be used.
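- A sketch of this PCA-based second binarization (first principal axis, min-max normalization, Otsu); which side of the threshold is background depends on the eigenvector sign, so the final comparison is an assumption:

```python
import numpy as np
from skimage.filters import threshold_otsu

def pca_background_mask(rgb: np.ndarray) -> np.ndarray:
    pixels = rgb.reshape(-1, 3).astype(np.float64)
    centered = pixels - pixels.mean(axis=0)
    cov = centered.T @ centered / len(centered)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]    # first principal component axis
    proj = centered @ axis                   # project all pixel data
    proj = (proj - proj.min()) / (np.ptp(proj) + 1e-12)  # normalize to [0, 1]
    t = threshold_otsu(proj)                 # discriminant binarization
    # In practice the side containing the near-white pixels is background.
    return (proj > t).reshape(rgb.shape[:2])
```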
- The ternarized image data creation unit 115 discriminates the cytoplasm region by taking the NOR of the first binarized image data, obtained by binarizing the R component, and the second binarized image data, obtained using principal component analysis, and creates the ternary image data serving as the region-divided image data.
- Tests have shown that when the second binarized image data is obtained by principal component analysis of the pathological tissue image data, as in the present embodiment, the background and cytoplasm regions can be clarified better than when it is obtained from the YUV image data as in FIG. 1. This is because the distinction between the background and the rest depends largely on relative color density: in a weakly stained pathological tissue image, the distributions of background and cytoplasm pixels overlap when only the V component of the YUV image is used, whereas principal component analysis finds the direction that maximizes the variance, i.e., minimizes the overlap of the distributions, and can therefore separate the background from the rest well.
- FIG. 23 shows the processing of the present embodiment as an image.
- In the example of FIG. 23, the background region, the cytoplasm region, and the cell nucleus region are divided more clearly than in the region-divided image of FIG. 4 obtained in the embodiment of FIG. 1.
- By weighting the image divided into background, cytoplasm, and cell nucleus regions according to the degree of attention the doctor pays to each in pathological diagnosis, the image recognition accuracy based on HLAC features can be increased.
- For example, when the level values given to pixels in the background and cytoplasm regions are fixed at 0 and 2 respectively, setting the level value of pixels in the cell nucleus region to 14 gives the best recognition result.
- FIG. 24 is a block diagram showing the configuration of yet another pathological diagnosis apparatus comprising a pathological tissue image region-division image data creation system 201, a pathological tissue image feature extraction system 203, and a diagnosis unit 205, which differ in configuration from the region-division image data creation system 1 shown in FIG. 1.
- FIG. 25 is a flowchart showing an algorithm when the ternarization used in the configuration of FIG. 24 is implemented by software.
- In this embodiment, principal component analysis is used so that the pathological tissue image can be divided into background, cytoplasm, and cell nucleus regions without depending on the staining state of the specimen.
- Pathological specimens are stained with hematoxylin and eosin. Since the cell nucleus region is stained with the blue-violet pigment of hematoxylin, its B component is higher, relative to the other components of the RGB color space, than in the other regions, even if the staining density differs. This property of HE-stained pathological specimen images holds regardless of the staining condition.
- Therefore, the cell nucleus region can be extracted by creating a grayscale image that emphasizes the difference between the B component and the R component in RGB color space and binarizing it. In the present embodiment, the first binarized image data creation unit 212 accordingly performs redundant component removal (step ST222A), clipping (step ST222B), and binarization to extract the cell nucleus region. Further, this embodiment differs from the embodiment of FIG. 1 in that the second binarized image data creation unit 214 creates second binarized image data that distinguishes the background region from the other regions by principal component analysis of the CIE Luv image data output from the CIE Luv image data generation unit 213.
- In addition, the mask pixel value calculation unit 231 of the feature extraction system 203 for histopathological images scans with a plurality of local pattern masks and calculates the co-occurrence of the color index (for example, the color number) of a reference point and the color indices (for example, the color numbers) of the correlated points located in the displacement directions, using so-called CILAC; this also differs from the embodiment of FIG. 1. The other points are the same as in the embodiment of FIG. 1. Therefore, the components shown in FIG. 24 and the steps shown in FIG. 25 are denoted by reference numerals obtained by adding 200 to the reference numerals of the corresponding components and steps in FIGS. 1 and 3, and their description is omitted.
- First, the first binarized image data creation unit 212 performs redundant component removal for color information reduction in step ST222A of FIG. 25.
- As described above, the cell nucleus region is stained blue-violet by hematoxylin, so that when the components of the RGB color space are compared, the B component is higher than the other components even if the staining density differs. Therefore, in order to remove redundant components unrelated to the cell nucleus region, the pixel value is set to 0 for every pixel of the pathological tissue image in which subtracting the B component from the R component yields a value larger than 0 (that is, pixels in which the R component exceeds the B component). By performing such redundant component removal, pixel information that hinders extraction of the B-component-rich cell nucleus region can be removed.
- In step ST223, the value B′ obtained by the clipping for each pixel is regarded as the image data from which the first binarized image data is determined, and binarization is performed.
- Since clipping is performed, the influence of noise and of staining unevenness in the pathological specimen image can be significantly reduced.
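- A minimal sketch of steps ST222A through ST223, assuming the emphasized grayscale is the B−R difference, Otsu's method as the discriminant binarization, and a hypothetical clip value standing in for the "predetermined value" of the clipping:

```python
import numpy as np
from skimage.filters import threshold_otsu

def first_binarization(rgb, clip_value=60):
    """Sketch of steps ST222A (redundant component removal),
    ST222B (clipping), and ST223 (discriminant binarization).
    `clip_value` is a hypothetical stand-in for the predetermined value."""
    r = rgb[..., 0].astype(np.int32)
    b = rgb[..., 2].astype(np.int32)
    b_dash = b - r                           # emphasize B over R (nuclei are B-dominant)
    b_dash[b_dash < 0] = 0                   # ST222A: drop pixels where R exceeds B
    b_dash = np.minimum(b_dash, clip_value)  # ST222B: clip to suppress noise and uneven staining
    threshold = threshold_otsu(b_dash)       # ST223: Otsu's discriminant binarization
    return (b_dash > threshold).astype(np.uint8)  # 1 = cell nucleus candidate
```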
- FIG. 26 shows images used to confirm the effects of redundant component removal and clipping. From FIG. 26, it can be seen that the glandular cavity A, which is background, is erroneously extracted as a cell nucleus when neither redundant component removal nor clipping is performed, or when only clipping is performed. With redundant component removal alone, the glandular cavity that is background is no longer extracted, but the cell nucleus region shrinks. In contrast, when redundant component removal and clipping are used in combination, the glandular cavity A is not extracted and, at the same time, the cell nucleus region B is extracted more accurately.
- The second binarized image data creation unit 214 used in the present embodiment performs principal component analysis on the CIELuv image data of the pathological tissue image and, using the result for the second principal component axis, creates second binarized image data in which the background region can be distinguished from the other regions.
- The CIELuv image data is image data expressed in the CIELuv color system defined by the Commission Internationale de l'Éclairage (CIE). The CIELuv color system is a uniform color space designed so that distances in the color space approximate perceptual color differences as judged by humans.
- In the present embodiment, all pixel data of the pathological tissue image are converted into the CIELuv color system, all pixel data are projected onto the second principal component axis obtained by principal component analysis, and the result is binarized by the discriminant binarization method to create the second binarized image data. The reason why only the second principal component axis is used is that it was visually confirmed through experiments that the background is extracted most faithfully with this axis, compared to the cases where the other principal component axes are used.
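- A sketch of this procedure, assuming scikit-image's rgb2luv conversion and Otsu's method as the discriminant binarization; the polarity of the projection may need flipping depending on the data:

```python
import numpy as np
from skimage.color import rgb2luv
from skimage.filters import threshold_otsu

def second_binarization(rgb):
    """Sketch: project all CIELuv pixel data onto the 2nd principal
    component axis and binarize the scores with Otsu's method."""
    luv = rgb2luv(rgb).reshape(-1, 3)
    centered = luv - luv.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # rows of vt = principal axes
    scores = centered @ vt[1]                # 2nd principal component scores
    mask = scores > threshold_otsu(scores)   # sign may need flipping per image
    return mask.reshape(rgb.shape[:2]).astype(np.uint8)  # 1 = background candidate
```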
- As a modification, the second binarized image data may be created by projecting all pixel data of the pathological tissue image data onto another principal component axis instead of the second principal component axis, normalizing, and binarizing by the discriminant binarization method.
- Alternatively, all pixel data of the pathological tissue image data may be projected onto a plurality of principal component axes and normalized, and the second binarized image data may be created by performing a logical AND operation on the plurality of binarized image data obtained by binarizing each projection with the discriminant binarization method.
- Operations other than the logical AND, such as the logical OR, may also be used when generating the second binarized image data.
- Furthermore, binarized image data obtained by processing the V component in the YUV color space with the discriminant binarization method, as in the first embodiment, may also be used.
- In addition, the second binarized image data may be created by projecting all pixel data of the pathological tissue image data onto the first principal component axis obtained by principal component analysis of all pixel data in the RGB color space, normalizing, and binarizing the normalized data by the discriminant binarization method.
- Alternatively, the second binarized image data may be created by projecting all pixel data of the pathological tissue image data onto a principal component axis other than the first principal component axis, normalizing, and binarizing by the discriminant binarization method.
- Also in this case, all pixel data of the pathological tissue image data may be projected onto a plurality of principal component axes and normalized, and the second binarized image data may be created by performing a logical AND operation on the plurality of binarized image data obtained by binarization with the discriminant binarization method; operations other than the logical AND, such as the logical OR, may also be used.
- FIG. 27(A) shows the original grayscale image.
- FIG. 27(B) is the extracted image obtained in the embodiment of FIG. 1; part of the cytoplasm region is also extracted as white.
- FIG. 27(C) is the extracted image obtained in the embodiment of FIG. 20; most of the cytoplasm region is extracted as white, like the background.
- FIG. 27(D) is the extracted image obtained in the present embodiment; the background is extracted as white without including the cytoplasm region.
- Next, the ternarized image data creation unit 215 in FIG. 24 will be described.
- In the third image data, obtained by taking the negative OR of the first and second binarized image data, the cytoplasm region and a part of the cell nucleus region are distinguished from the other regions.
- In the first binarized image data, an arbitrary pixel value A is set for the pixels belonging to the cell nucleus region, and 0 is set for the other pixels.
- In the second binarized image data, an arbitrary pixel value B is set for the pixels belonging to the background region, and 0 is set for the other pixels.
- In the third image data, an arbitrary value C is set for the pixels corresponding to the cytoplasm region and a part of the cell nucleus region, and 0 is set for the other pixels.
- The three images are overlaid by the following procedure. First, for each pixel at a position where the pixel value B is set in the second binarized image data, the value B is written over the third image data.
- As a result, the third image data has the value B set for pixels belonging to the background region, the value C set for pixels corresponding to the cytoplasm region and a part of the cell nucleus region, and 0 set for pixels corresponding to the remaining part of the cell nucleus region.
- Next, for each pixel at a position where the pixel value A is set in the first binarized image data, the value A is written over the third image data.
- As a result, in the third image data, the value B is set for pixels belonging to the background region, the value C for pixels belonging to the cytoplasm region, and the value A for pixels belonging to the cell nucleus region.
- Finally, an appropriate level value (for example, 0 for the background region, 2 for the cytoplasm region, and 14 for the cell nucleus region) is set for the pixels in each region, thereby generating a ternarized image.
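- The overlay procedure can be sketched as follows, with the example level values substituted for the arbitrary values A, B, and C:

```python
import numpy as np

def ternarize(nucleus_mask, background_mask, levels=(0, 2, 14)):
    """Sketch of the overlay: the negative OR of the two masks marks the
    cytoplasm candidates (value C), then background (B) and nucleus (A)
    are written over; `levels` = (background, cytoplasm, nucleus)."""
    bg_level, cyto_level, nuc_level = levels
    nucleus = nucleus_mask.astype(bool)
    background = background_mask.astype(bool)
    # third image data: pixels in neither mask -> cytoplasm + missed nucleus parts
    third = np.where(~(nucleus | background), cyto_level, 0).astype(np.uint8)
    third[background] = bg_level   # overwrite with the background level
    third[nucleus] = nuc_level     # overwrite with the cell nucleus level
    return third
```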
- FIG. 28(A) shows the mask candidates (black blocks and shaded blocks) for creating the local pattern mask used in this embodiment, and FIG. 28(B) is a table showing the positions of the mask candidates.
- When an xy coordinate system is assumed with the coordinates of the center mask at (0, 0), the eight mask candidates in the mask range are defined as having the coordinates of the intersections of the following two equations:
x²/n² + y²/m² = 1
y = (m/n)x or y = −(m/n)x or y = 0 or x = 0
- In particular, when m and n are equal (m = n = 4 in this example), the eight mask candidates are located at the vertices of an octagon centered on the center mask, as shown in FIG. 28(A). FIG. 28(B) shows the coordinates of the mask candidates. The effect of a local pattern mask composed of these eight mask candidates and the center mask will be described by comparing FIG. 10(B) with FIG. 28(A). In the local pattern mask of FIG. 10(B), let ΔD1 be the absolute value of the difference between the Euclidean distance from a mask at a corner of the mask range to the center mask and the Euclidean distance from a mask at the middle of an inner edge of the mask range to the center mask. In the local pattern mask of FIG. 28(A), let ΔD2 be the absolute value of the difference between the Euclidean distance from a mask located in the upper-right (equally, upper-left, lower-right, or lower-left) direction of the center mask to the center mask and the Euclidean distance from a mask at the middle of an inner edge of the mask range to the center mask. It is clear from the figures that ΔD2 ≤ ΔD1. Considering the procedure that generates rotation-invariant element feature vectors by grouping local mask patterns that can be regarded as equivalent under 45° rotations, the local pattern mask based on FIG. 28(A) clearly has the higher invariance, and is expected to yield higher-quality feature vectors from the pathological tissue image.
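- The intersection coordinates can be computed directly; a sketch follows, rounding to the nearest grid cell, which for m = n = 4 yields the octagon vertices of FIG. 28(A):

```python
import numpy as np

def mask_candidates(m=4, n=4):
    """Sketch: intersections of the ellipse x²/n² + y²/m² = 1 with the
    lines y = ±(m/n)x, y = 0 and x = 0, rounded to grid cells."""
    points = [(n, 0), (-n, 0), (0, m), (0, -m)]   # intersections with y = 0 and x = 0
    x = n / np.sqrt(2.0)                          # substituting y = ±(m/n)x gives x = ±n/√2
    for sx, sy in ((1, 1), (-1, -1), (1, -1), (-1, 1)):
        points.append((sx * x, sy * (m / n) * x))
    return [(round(px), round(py)) for px, py in points]

# m = n = 4: vertices of a regular octagon around the center mask
print(mask_candidates())  # (4,0), (-4,0), (0,4), (0,-4), (3,3), (-3,-3), (3,-3), (-3,3)
```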
- the mask arrangement shown in FIG. 10B may be used for the local pattern mask, as was done in the first embodiment.
- Further, in the present embodiment, the mask pixel value calculation unit 231 uses a feature extraction method called CILAC (Color Index Local Auto-Correlation), which calculates the co-occurrence of the color index (for example, the color number) of a reference point and the color indices (for example, the color numbers) of the correlated points located in the displacement directions, as a result of scanning with the plurality of local pattern masks.
- Co-occurrence is the property that different events tend to appear at the same time.
- Thereby, the relationships between adjacent pixels specified by the local pattern mask are expressed as combinations of the three classes (cell nucleus, cytoplasm, background), and the occurrence frequency (or occurrence probability) of every combination can be extracted as a feature.
- The CILAC features are expressed as a vector obtained by concatenating the 0th-, 1st-, and 2nd-order autocorrelations, where the order of the higher-order correlation is 0, 1, or 2:
R0(i) = Σr fi(r)
R1(i, j, a) = Σr fi(r) fj(r + a)
R2(i, j, k, a, b) = Σr fi(r) fj(r + a) fk(r + b)
- Here, r is a reference pixel, a and b are displacement vectors from r, fi(x) is a function that takes the value 1 when pixel x has color label i and 0 otherwise, and i ∈ {1, ..., D}, j ∈ {1, ..., D}, k ∈ {1, ..., D} are color labels. In the present embodiment, D = 3, and the color labels 1, 2, and 3 are the label values given to pixels belonging to the cell nucleus region, the cytoplasm region, and the background region, respectively. The displacement vectors a and b are defined by the positions of the nine mask candidates included in the local pattern mask. The 0th-order correlation R0(i) is a vector of length 3, since i can take three values (color labels). The 1st-order correlation R1(i, j, a) is a vector of length 3 × 3 × 8, since i and j can each take three values (color labels) and the displacement vector can take eight directions. The 2nd-order correlation R2(i, j, k, a, b) is a vector of length 3 × 3 × 3 × 8 × 7, since i, j, and k can each take three values (color labels) and the two displacement vectors can take two of the eight directions. Concatenating everything up to the 2nd-order correlation therefore yields 1587 dimensions. Furthermore, in the present embodiment the CILAC features are also reconstructed into rotation/inversion invariant features in the same manner as in the foregoing embodiment, so that the CILAC features xi are reconstructed into 63-dimensional rotation/inversion invariant features yj (j = 1, ..., 63).
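- A sketch of the CILAC computation up to the 1st order, assuming the label values 1, 2, 3 as above; the eight displacement vectors are hypothetical stand-ins for the mask candidate positions:

```python
import numpy as np
from itertools import product

def cilac_features(labels, displacements):
    """Sketch of CILAC up to 1st order. `labels` is an HxW array with
    color labels 1 (nucleus), 2 (cytoplasm), 3 (background);
    `displacements` are eight (dy, dx) offsets standing in for the
    mask candidate positions."""
    D, (h, w) = 3, labels.shape
    r0 = [(labels == i).sum() for i in range(1, D + 1)]          # R0(i)
    r1 = []
    for (dy, dx), i, j in product(displacements, range(1, D + 1), range(1, D + 1)):
        ref = labels[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
        sft = labels[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
        r1.append(np.logical_and(ref == i, sft == j).sum())      # R1(i, j, a)
    return np.array(r0 + r1)                                     # length 3 + 8*3*3 = 75
```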
- In CILAC, among the local pattern masks composed of the three values of cell nucleus, cytoplasm, and background, only those of cell nuclei alone and those showing the relationship between the cell nucleus and the cytoplasm can be used.
- FIG. 29 shows an example of CILAC 3 × 3 local pattern masks up to the first order. In the basic scanning method, all of the local pattern masks of FIG. 29 are used. However, if the local pattern masks are limited to those of cell nuclei alone and those showing the relationship between the cell nucleus and the cytoplasm, only the colored local pattern masks in FIG. 29 are used. Considering masks up to the second order, the total number of local pattern mask types is 1587, and the number of invariant feature groups after reconstruction with rotation/inversion invariance is 63.
- When the local pattern masks are limited to those of cell nuclei alone and those showing the relationship between the cell nucleus and the cytoplasm, the number of local pattern masks is 153, and the number of invariant feature groups after reconstruction with rotation/inversion invariance is 15.
- Note that, instead of CILAC feature values, HLAC features may be extracted from the pathological tissue image that has been divided into regions and assigned level values, as in the first embodiment. It is of course not essential to reconstruct the feature values into the rotation/inversion invariant feature groups.
- The overdetection rate (number of overdetections / number of normal samples) in cross-validation on a sample experiment with cancer tissue was 15.7%. From this result, it was confirmed that, in feature extraction from pathological tissue images, the CILAC extraction method is superior to HLAC, and that overdetection can be suppressed by limiting the local pattern masks to those of cell nuclei alone and those showing the relationship between the cell nucleus and the cytoplasm.
- Since the present invention does not require detailed definition of cancer characteristics in advance, it is possible to detect abnormalities of unknown lesions that have not yet been discovered, by learning the characteristics of the normal tissue images collected so far.
- As described above, by taking the negative OR of the first binarized image data, in which the cell nucleus region can be distinguished from the other regions, and the second binarized image data, in which the background region can be distinguished from the other regions, the cytoplasm region is distinguished and ternarized image data serving as the region segmented image data is created; it is therefore possible to generate a region segmented image in which the background region, the cytoplasm region, and the cell nucleus region are clearer than before.
Description
When, as in the present invention, the negative OR of the first binarized image data, in which the cell nucleus region can be distinguished from the other regions, and the second binarized image data, in which the background region can be distinguished from the other regions, is taken to distinguish the cytoplasm region and create ternarized image data serving as the region segmented image data, a region segmented image in which the background region, the cytoplasm region, and the cell nucleus region are clear can be generated. This is because, when dividing an image into three types of regions, taking the negative OR of two binarized image data sets of different region types, each created for one of the two characteristic region types by a distinction method suited to its characteristics, makes clear the region whose distinguishing characteristics are unclear; combining this with the already divided, clear region parts allows all three region types to be output in a clear state.
The element feature vector generation unit obtains an element feature vector by concatenating the feature values, which are the sum-of-products values obtained by the higher-order local autocorrelation calculation unit for each of the plurality of local pattern masks.
Experiment 2: Verifying the effectiveness of the proposed rotation/inversion invariance
[Experimental data]
In the verification experiment, in order to confirm whether abnormalities in cancer pathological tissue images can be properly detected by learning from non-cancer pathological tissue images, an experiment was conducted using clearly non-cancerous data and clearly cancerous data diagnosed in advance by pathologists.
In the verification experiment, after generating a normal subspace from the training data, the degree of deviation of each training sample from the normal subspace was measured, and an evaluation method using the mean plus one standard deviation (σ) of these deviations as the threshold was employed. For test data, a deviation exceeding this threshold is detected as an abnormality indicating suspected cancer.
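A sketch of this evaluation scheme, assuming the normal subspace is obtained by principal component analysis of the training feature vectors and the deviation is measured as the reconstruction error (an assumption; the patent does not fix this metric here):

```python
import numpy as np

def fit_normal_subspace(X, c=0.9999):
    """Sketch: learn a normal subspace from training feature vectors X
    (n_samples x n_dims), keeping axes up to cumulative contribution c."""
    mu = X.mean(axis=0)
    _, s, vt = np.linalg.svd(X - mu, full_matrices=False)
    k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), c)) + 1
    return mu, vt[:k]

def deviation(x, mu, basis):
    """Deviation from the normal subspace as the reconstruction error."""
    r = x - mu
    return np.linalg.norm(r - basis.T @ (basis @ r))

def detection_threshold(X, mu, basis):
    """Mean + one standard deviation of the training deviations."""
    d = np.array([deviation(x, mu, basis) for x in X])
    return d.mean() + d.std()
```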
To verify the effectiveness of the ternarization method used in the present embodiment, comparative experiments were performed with the three methods shown in FIG. 16. In this verification experiment, reconstruction of the rotation/inversion invariant features was not performed, and for the cumulative contribution ratio condition C, the best result among the three conditions 0.999, 0.9999, and 0.99999 was compared for each method.
To verify the effectiveness of the proposed method taking rotation/inversion invariance into account, the abnormality detection results with the 35-dimensional HLAC features before reconstruction were compared with those with the 8-dimensional rotation/inversion invariant HLAC features after reconstruction. For the cumulative contribution ratio condition C, the best result among the three conditions 0.999, 0.9999, and 0.99999 was compared for each method.
3, 103 Feature extraction system for histopathological images
5, 105 Diagnosis unit
11, 111 RGB image data generation unit
12, 112 First binarized image data creation unit
13 YUV image data generation unit
14, 114 Second binarized image data creation unit
15, 115 Ternarized image data creation unit
31, 131 Higher-order local autocorrelation calculation unit
231 Mask pixel value calculation unit
32, 132 Element feature vector calculation unit
33, 133 Feature extraction unit
Claims (20)
- A feature extraction system for histopathological images, comprising:
a region segmented image data creation system for histopathological images that creates, from pathological tissue image data including a background, cytoplasm, and cell nuclei, the region segmented image data necessary to generate a region segmented image in which the background region, the cytoplasm region, and the cell nucleus region are made clear;
a higher-order local autocorrelation calculation unit that scans the pathological tissue image created by the region segmented image data creation system for histopathological images with each of a plurality of predetermined local pattern masks and calculates a higher-order local autocorrelation feature for each local pattern mask;
an element feature vector calculation unit that divides the plurality of local pattern masks into a plurality of invariant feature groups, each consisting of local pattern masks that can be regarded as equivalent when rotated in 45° steps and when inverted, regards the plurality of local pattern masks belonging to one invariant feature group as one feature value, and calculates the linear sum of the higher-order local autocorrelation features obtained by scanning with the local pattern masks belonging to each invariant feature group; and
a feature extraction unit that extracts features of the pathological tissue image on the basis of the linear sum values of the higher-order local autocorrelation features,
wherein the region segmented image data creation system for histopathological images comprises:
a first binarized image data creation unit that creates, from the pathological tissue image data, first binarized image data in which the cell nucleus region can be distinguished from the other regions;
a second binarized image data creation unit that creates, from the pathological tissue image data, second binarized image data in which the background region can be distinguished from the other regions; and
a ternarized image data creation unit that takes the negative OR of the first binarized image data and the second binarized image data to make the cytoplasm region clear and creates ternarized image data serving as the region segmented image data.
- A region segmented image data creation system for histopathological images that creates, from pathological tissue image data including a background, cytoplasm, and cell nuclei, the region segmented image data necessary to generate a region segmented image in which the background region, the cytoplasm region, and the cell nucleus region are made clear, the system comprising:
a first binarized image data creation unit that creates, from the pathological tissue image data, first binarized image data in which the cell nucleus region can be distinguished from the other regions;
a second binarized image data creation unit that creates, from the pathological tissue image data, second binarized image data in which the background region can be distinguished from the other regions; and
a ternarized image data creation unit that takes the negative OR of the first binarized image data and the second binarized image data to make the cytoplasm region clear and creates ternarized image data serving as the region segmented image data.
- The region segmented image data creation system for histopathological images according to claim 2, wherein the first binarized image data creation unit is configured to separate the R component from the RGB image data of the pathological tissue image, binarize the separated R component by a discriminant binarization method, and create first binarized image data in which the cell nucleus region can be distinguished from the other regions.
- The region segmented image data creation system for histopathological images according to claim 3, wherein the first binarized image data creation unit separates the R component by projecting all pixel data of the RGB image data onto the R axis in the RGB color space.
- The region segmented image data creation system for histopathological images according to claim 3 or 4, wherein the RGB image data is redundant-component-removed, information-reduced RGB image data obtained by subtracting the B component from the R component in the RGB color space for every pixel of the pathological tissue image and setting the pixel value to 0 when the subtraction result is smaller than 0.
- The region segmented image data creation system for histopathological images according to claim 5, wherein the RGB image data is clipped RGB image data obtained by clipping so that, for every pixel included in the redundant-component-removed, information-reduced RGB data, when the value obtained by subtracting the R component from the B component in the RGB color space is larger than a predetermined value, the B component is brought within a predetermined range such that the value obtained by subtracting the R component from the B component equals the predetermined value.
- The region segmented image data creation system for histopathological images according to claim 2, wherein the second binarized image data creation unit is configured to separate the V component from the YUV image data of the pathological tissue image, binarize the separated V component by the discriminant binarization method, and create second binarized image data in which the background region can be distinguished from the other regions.
- The region segmented image data creation system for histopathological images according to claim 7, wherein the second binarized image data creation unit separates the V component by projecting all pixel data of the YUV image data onto the V axis in the YUV color space.
- The region segmented image data creation system for histopathological images according to claim 2, wherein the second binarized image data creation unit obtains the second binarized image data by principal component analysis of the pathological tissue image data.
- The region segmented image data creation system for histopathological images according to claim 9, wherein the second binarized image data creation unit is configured to perform principal component analysis on all pixel data of the CIELuv image data of the pathological tissue image, binarize the second principal component scores of all pixel data by the discriminant binarization method, and create second binarized image data in which the background region can be distinguished from the other regions.
- The region segmented image data creation system for histopathological images according to claim 10, wherein the CIELuv image data is converted from the RGB image data of the pathological tissue image by converting the RGB image data into XYZ image data, calculating the value of L on the basis of the value of Y, and calculating u and v on the basis of the XYZ values and the value of L.
- The region segmented image data creation system for histopathological images according to claim 2, wherein the second binarized image data creation unit is configured to perform principal component analysis on all pixel data of the pathological tissue image data, binarize the first principal component scores of all pixel data by the discriminant binarization method, and create second binarized image data in which the background region can be distinguished from the other regions.
- A feature extraction system for histopathological images, comprising:
a higher-order local autocorrelation calculation unit that scans a pathological tissue image created by the region segmented image data creation system for histopathological images according to any one of claims 3 to 12 with each of a plurality of predetermined local pattern masks and calculates a higher-order local autocorrelation feature for each local pattern mask;
an element feature vector calculation unit that divides the plurality of local pattern masks into a plurality of invariant feature groups, each consisting of local pattern masks that can be regarded as equivalent when rotated in 45° steps and when inverted, regards the plurality of local pattern masks belonging to one invariant feature group as one feature value, and calculates the linear sum of the higher-order local autocorrelation features obtained by scanning with the local pattern masks belonging to each invariant feature group; and
a feature extraction unit that extracts features of the pathological tissue image on the basis of the linear sum values of the higher-order local autocorrelation features.
- The feature extraction system for histopathological images according to claim 13, wherein the plurality of local pattern masks are constructed by selecting, from among a plurality of mask candidates arranged in a grid within a mask range of (2m+1) × (2n+1) cells, where m and n are integers, the mask candidate located at the center of the mask range, and further selecting an arbitrary number (zero or more) of mask candidates from the mask range.
- The feature extraction system for histopathological images according to claim 14, wherein the mask candidates other than the center mask candidate are selected so that their distances to the center mask are equal.
- The feature extraction system for histopathological images according to claim 14, wherein, assuming an xy coordinate system with the coordinates of the center mask at (0, 0), the plurality of mask candidates other than the center mask candidate have the coordinates of the intersections of the following two equations:
x²/n² + y²/m² = 1
y = (m/n)x or y = −(m/n)x or y = 0 or x = 0
- The feature extraction system for histopathological images according to claim 16, wherein m and n are equal, and there are eight mask candidates usable as the local pattern mask other than the center mask candidate.
- The feature extraction system for histopathological images according to any one of claims 13 to 17, wherein only local pattern masks of cell nuclei alone and local pattern masks showing the relationship between the cell nucleus and the cytoplasm are used.
- A feature extraction method for histopathological images, comprising:
a step of scanning a pathological tissue image with each of a plurality of predetermined local pattern masks and calculating a higher-order local autocorrelation feature for each local pattern mask;
a step of dividing the plurality of mask patterns into a plurality of invariant feature groups, each consisting of mask patterns that can be regarded as equivalent when rotated in 45° steps and when inverted, regarding the plurality of mask patterns belonging to one invariant feature group as one feature value, and calculating the linear sum of the higher-order local autocorrelation features obtained by scanning with the local pattern masks belonging to each invariant feature group; and
a step of extracting features of the pathological tissue image on the basis of the linear sum values of the higher-order local autocorrelation features.
- The feature extraction method for histopathological images according to claim 19, wherein only local pattern masks of cell nuclei alone and local pattern masks showing the relationship between the cell nucleus and the cytoplasm are used.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012525453A JP5825641B2 (ja) | 2010-07-23 | 2011-07-22 | 病理組織画像の特徴抽出システム及び病理組織画像の特徴抽出方法 |
US13/807,135 US9031294B2 (en) | 2010-07-23 | 2011-07-22 | Region segmented image data creating system and feature extracting system for histopathological images |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010-166496 | 2010-07-23 | ||
JP2010166496 | 2010-07-23 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012011579A1 true WO2012011579A1 (ja) | 2012-01-26 |
Family
ID=45496996
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/066744 WO2012011579A1 (ja) | 2010-07-23 | 2011-07-22 | 病理組織画像の領域分割画像データ作成システム及び病理組織画像の特徴抽出システム |
Country Status (3)
Country | Link |
---|---|
US (1) | US9031294B2 (ja) |
JP (1) | JP5825641B2 (ja) |
WO (1) | WO2012011579A1 (ja) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013161155A1 (ja) * | 2012-04-23 | 2013-10-31 | 日本電気株式会社 | 画像計測装置、画像計測方法、および、画像計測用プログラム |
WO2013179581A1 (ja) * | 2012-05-30 | 2013-12-05 | パナソニック株式会社 | 画像計測装置、画像計測方法及び画像計測システム |
CN103902998A (zh) * | 2012-12-27 | 2014-07-02 | 核工业北京地质研究院 | 一种用于绿泥石信息提取的高光谱影像处理方法 |
JP2015052581A (ja) * | 2013-01-08 | 2015-03-19 | キヤノン株式会社 | 生体組織画像の再構成方法及び装置並びに該生体組織画像を用いた画像表示装置 |
JP5789786B2 (ja) * | 2012-11-27 | 2015-10-07 | パナソニックIpマネジメント株式会社 | 画像計測装置および画像計測方法 |
WO2016084315A1 (en) * | 2014-11-28 | 2016-06-02 | Canon Kabushiki Kaisha | Data processing apparatus, spectral information acquisition apparatus, data processing method, program, and storage medium |
KR20170082630A (ko) | 2014-12-01 | 2017-07-14 | 고쿠리츠켄큐카이하츠호진 상교기쥬츠 소고켄큐쇼 | 초음파 검사 시스템 및 초음파 검사 방법 |
US9721184B2 (en) | 2013-11-05 | 2017-08-01 | Fanuc Corporation | Apparatus and method for picking up article randomly piled using robot |
JP2018091685A (ja) * | 2016-12-01 | 2018-06-14 | 国立研究開発法人産業技術総合研究所 | 検査装置および検査方法 |
JP2018132309A (ja) * | 2017-02-13 | 2018-08-23 | 株式会社Ihi | 探索方法及び探索システム |
JP2018535471A (ja) * | 2015-09-23 | 2018-11-29 | コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. | 正規化及びアーチファクト補正のための画像処理方法及び画像処理装置 |
WO2019044579A1 (ja) | 2017-08-31 | 2019-03-07 | 国立大学法人大阪大学 | 病理診断装置、画像処理方法及びプログラム |
JP2019045514A (ja) * | 2013-05-30 | 2019-03-22 | キヤノン株式会社 | 分光画像データ処理装置および2次元分光装置 |
CN109859218A (zh) * | 2019-02-25 | 2019-06-07 | 北京邮电大学 | 病理图关键区域确定方法、装置、电子设备及存储介质 |
WO2020137222A1 (ja) * | 2018-12-28 | 2020-07-02 | オムロン株式会社 | 欠陥検査装置、欠陥検査方法、及びそのプログラム |
JP2021508373A (ja) * | 2017-11-27 | 2021-03-04 | デシフェックス | 正常モデルの分析による組織病理学検査用組織サンプルの自動スクリーニング |
CN114240958A (zh) * | 2021-12-23 | 2022-03-25 | 西安交通大学 | 一种应用于病理学组织分割的对比学习方法 |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9031294B2 (en) * | 2010-07-23 | 2015-05-12 | National Institute Of Advanced Industrial Science And Technology | Region segmented image data creating system and feature extracting system for histopathological images |
US9230161B2 (en) | 2013-12-06 | 2016-01-05 | Xerox Corporation | Multiple layer block matching method and system for image denoising |
US9558393B2 (en) * | 2014-03-27 | 2017-01-31 | Konica Minolta, Inc. | Image processing device and storage medium for image processing |
WO2016120442A1 (en) | 2015-01-30 | 2016-08-04 | Ventana Medical Systems, Inc. | Foreground segmentation and nucleus ranking for scoring dual ish images |
JP6626783B2 (ja) * | 2016-06-02 | 2019-12-25 | Hoya株式会社 | 画像処理装置および電子内視鏡システム |
US11195313B2 (en) * | 2016-10-14 | 2021-12-07 | International Business Machines Corporation | Cross-modality neural network transform for semi-automatic medical image annotation |
KR20180060257A (ko) * | 2016-11-28 | 2018-06-07 | 삼성전자주식회사 | 객체 인식 방법 및 장치 |
CN111095352B (zh) * | 2017-08-04 | 2023-12-29 | 文塔纳医疗系统公司 | 用于检测被染色样本图像中的细胞的自动化方法和系统 |
US11270163B2 (en) | 2017-12-14 | 2022-03-08 | Nec Corporation | Learning device, learning method, and storage medium |
JP7047849B2 (ja) | 2017-12-14 | 2022-04-05 | 日本電気株式会社 | 識別装置、識別方法、および識別プログラム |
JP6943295B2 (ja) | 2017-12-14 | 2021-09-29 | 日本電気株式会社 | 学習装置、学習方法、および学習プログラム |
CN110246567B (zh) * | 2018-03-07 | 2023-07-25 | 中山大学 | 一种医学图像预处理方法 |
EP3862913A4 (en) * | 2018-10-01 | 2022-07-06 | Hitachi Industrial Equipment Systems Co., Ltd. | PRINT INSPECTION DEVICE |
CN113222928B (zh) * | 2021-05-07 | 2023-09-19 | 北京大学第一医院 | 一种尿细胞学人工智能尿路上皮癌识别系统 |
CN114240859B (zh) * | 2021-12-06 | 2024-03-19 | 柳州福臻车体实业有限公司 | 一种基于图像处理的模具研合率检测方法 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS5068592A (ja) * | 1973-10-22 | 1975-06-07 | ||
JPS59134846A (ja) * | 1983-01-21 | 1984-08-02 | Fujitsu Ltd | 電子ビ−ム装置 |
JPS63144257A (ja) * | 1986-12-08 | 1988-06-16 | Nippon Koden Corp | 細胞分類装置 |
JP2004058737A (ja) * | 2002-07-25 | 2004-02-26 | National Institute Of Advanced Industrial & Technology | 駅ホームにおける安全監視装置 |
JP2004286666A (ja) * | 2003-03-24 | 2004-10-14 | Olympus Corp | 病理診断支援装置および病理診断支援プログラム |
JP2008216066A (ja) * | 2007-03-05 | 2008-09-18 | Kddi Corp | 類似画像検索装置 |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4045655A (en) | 1973-10-15 | 1977-08-30 | Hitachi, Ltd. | Automatic cyto-screening device |
JP4496943B2 (ja) * | 2004-11-30 | 2010-07-07 | 日本電気株式会社 | 病理診断支援装置、病理診断支援プログラム、病理診断支援装置の作動方法、及び病理診断支援システム |
US8280132B2 (en) * | 2006-08-01 | 2012-10-02 | Rutgers, The State University Of New Jersey | Malignancy diagnosis using content-based image retreival of tissue histopathology |
JP5154844B2 (ja) * | 2007-06-14 | 2013-02-27 | オリンパス株式会社 | 画像処理装置および画像処理プログラム |
JP4947589B2 (ja) | 2007-06-27 | 2012-06-06 | Kddi株式会社 | 類似画像検索装置 |
JP4558047B2 (ja) * | 2008-01-23 | 2010-10-06 | オリンパス株式会社 | 顕微鏡システム、画像生成方法、及びプログラム |
US8488863B2 (en) * | 2008-11-06 | 2013-07-16 | Los Alamos National Security, Llc | Combinational pixel-by-pixel and object-level classifying, segmenting, and agglomerating in performing quantitative image analysis that distinguishes between healthy non-cancerous and cancerous cell nuclei and delineates nuclear, cytoplasm, and stromal material objects from stained biological tissue materials |
US9031294B2 (en) * | 2010-07-23 | 2015-05-12 | National Institute Of Advanced Industrial Science And Technology | Region segmented image data creating system and feature extracting system for histopathological images |
US8699769B2 (en) * | 2011-07-12 | 2014-04-15 | Definiens Ag | Generating artificial hyperspectral images using correlated analysis of co-registered images |
-
2011
- 2011-07-22 US US13/807,135 patent/US9031294B2/en active Active
- 2011-07-22 JP JP2012525453A patent/JP5825641B2/ja not_active Expired - Fee Related
- 2011-07-22 WO PCT/JP2011/066744 patent/WO2012011579A1/ja active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS5068592A (ja) * | 1973-10-22 | 1975-06-07 | ||
JPS59134846A (ja) * | 1983-01-21 | 1984-08-02 | Fujitsu Ltd | 電子ビ−ム装置 |
JPS63144257A (ja) * | 1986-12-08 | 1988-06-16 | Nippon Koden Corp | 細胞分類装置 |
JP2004058737A (ja) * | 2002-07-25 | 2004-02-26 | National Institute Of Advanced Industrial & Technology | 駅ホームにおける安全監視装置 |
JP2004286666A (ja) * | 2003-03-24 | 2004-10-14 | Olympus Corp | 病理診断支援装置および病理診断支援プログラム |
JP2008216066A (ja) * | 2007-03-05 | 2008-09-18 | Kddi Corp | 類似画像検索装置 |
Non-Patent Citations (1)
Title |
---|
SHINJI UMEYAMA: "Rotation invariant features based on higher-order autocorrelations", DAI 45 KAI PROCEEDINGS OF THE NATIONAL MEETING OF INFORMATION PROCESSING SOCIETY OF JAPAN, vol. 45, no. 2, 28 September 1992 (1992-09-28), pages 323 - 324 * |
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2013161155A1 (ja) * | 2012-04-23 | 2015-12-21 | 日本電気株式会社 | 画像計測装置、画像計測方法、および、画像計測用プログラム |
WO2013161155A1 (ja) * | 2012-04-23 | 2013-10-31 | 日本電気株式会社 | 画像計測装置、画像計測方法、および、画像計測用プログラム |
US9390313B2 (en) | 2012-04-23 | 2016-07-12 | Nec Corporation | Image measurement apparatus and image measurment method measuring the cell neclei count |
WO2013179581A1 (ja) * | 2012-05-30 | 2013-12-05 | パナソニック株式会社 | 画像計測装置、画像計測方法及び画像計測システム |
JP5576993B2 (ja) * | 2012-05-30 | 2014-08-20 | パナソニック株式会社 | 画像計測装置、画像計測方法及び画像計測システム |
JPWO2013179581A1 (ja) * | 2012-05-30 | 2016-01-18 | パナソニック株式会社 | 画像計測装置、画像計測方法及び画像計測システム |
US9418414B2 (en) | 2012-05-30 | 2016-08-16 | Panasonic Intellectual Property Management Co., Ltd. | Image measurement apparatus, image measurement method and image measurement system |
US9558551B2 (en) | 2012-11-27 | 2017-01-31 | Panasonic Intellectual Property Management Co., Ltd. | Image measurement apparatus and image measurement method for determining a proportion of positive cell nuclei among cell nuclei included in a pathologic examination specimen |
JP5789786B2 (ja) * | 2012-11-27 | 2015-10-07 | パナソニックIpマネジメント株式会社 | 画像計測装置および画像計測方法 |
CN103902998A (zh) * | 2012-12-27 | 2014-07-02 | 核工业北京地质研究院 | 一种用于绿泥石信息提取的高光谱影像处理方法 |
JP2015052581A (ja) * | 2013-01-08 | 2015-03-19 | キヤノン株式会社 | 生体組織画像の再構成方法及び装置並びに該生体組織画像を用いた画像表示装置 |
JP2020101564A (ja) * | 2013-05-30 | 2020-07-02 | キヤノン株式会社 | 分光画像データ処理装置および2次元分光装置 |
JP2019045514A (ja) * | 2013-05-30 | 2019-03-22 | キヤノン株式会社 | 分光画像データ処理装置および2次元分光装置 |
US9721184B2 (en) | 2013-11-05 | 2017-08-01 | Fanuc Corporation | Apparatus and method for picking up article randomly piled using robot |
WO2016084315A1 (en) * | 2014-11-28 | 2016-06-02 | Canon Kabushiki Kaisha | Data processing apparatus, spectral information acquisition apparatus, data processing method, program, and storage medium |
US10786227B2 (en) | 2014-12-01 | 2020-09-29 | National Institute Of Advanced Industrial Science And Technology | System and method for ultrasound examination |
KR20170082630A (ko) | 2014-12-01 | 2017-07-14 | 고쿠리츠켄큐카이하츠호진 상교기쥬츠 소고켄큐쇼 | 초음파 검사 시스템 및 초음파 검사 방법 |
KR102014104B1 (ko) | 2014-12-01 | 2019-08-26 | 고쿠리츠켄큐카이하츠호진 상교기쥬츠 소고켄큐쇼 | 초음파 검사 시스템 및 초음파 검사 방법 |
JP2018535471A (ja) * | 2015-09-23 | 2018-11-29 | コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. | 正規化及びアーチファクト補正のための画像処理方法及び画像処理装置 |
JP2018091685A (ja) * | 2016-12-01 | 2018-06-14 | 国立研究開発法人産業技術総合研究所 | 検査装置および検査方法 |
JP2018132309A (ja) * | 2017-02-13 | 2018-08-23 | 株式会社Ihi | 探索方法及び探索システム |
WO2019044579A1 (ja) | 2017-08-31 | 2019-03-07 | 国立大学法人大阪大学 | 病理診断装置、画像処理方法及びプログラム |
JP7220017B2 (ja) | 2017-11-27 | 2023-02-09 | デシフェックス | 正常モデルの分析による組織病理学検査用組織サンプルの自動スクリーニング |
JP2021508373A (ja) * | 2017-11-27 | 2021-03-04 | デシフェックス | 正常モデルの分析による組織病理学検査用組織サンプルの自動スクリーニング |
WO2020137222A1 (ja) * | 2018-12-28 | 2020-07-02 | オムロン株式会社 | 欠陥検査装置、欠陥検査方法、及びそのプログラム |
JP2020106467A (ja) * | 2018-12-28 | 2020-07-09 | オムロン株式会社 | 欠陥検査装置、欠陥検査方法、及びそのプログラム |
US11830174B2 (en) | 2018-12-28 | 2023-11-28 | Omron Corporation | Defect inspecting device, defect inspecting method, and storage medium |
CN109859218A (zh) * | 2019-02-25 | 2019-06-07 | 北京邮电大学 | 病理图关键区域确定方法、装置、电子设备及存储介质 |
CN114240958A (zh) * | 2021-12-23 | 2022-03-25 | 西安交通大学 | 一种应用于病理学组织分割的对比学习方法 |
CN114240958B (zh) * | 2021-12-23 | 2024-04-05 | 西安交通大学 | 一种应用于病理学组织分割的对比学习方法 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2012011579A1 (ja) | 2013-09-09 |
US9031294B2 (en) | 2015-05-12 |
JP5825641B2 (ja) | 2015-12-02 |
US20130094733A1 (en) | 2013-04-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5825641B2 (ja) | 病理組織画像の特徴抽出システム及び病理組織画像の特徴抽出方法 | |
JP6710135B2 (ja) | 細胞画像の自動分析方法及びシステム | |
Chekkoury et al. | Automated malignancy detection in breast histopathological images | |
Schwartz et al. | Visual material traits: Recognizing per-pixel material context | |
Hervé et al. | Statistical color texture descriptors for histological images analysis | |
CN108109140A (zh) | 基于深度学习的低级别脑胶质瘤柠檬酸脱氢酶无损预测方法及系统 | |
Xu et al. | Computerized classification of prostate cancer gleason scores from whole slide images | |
CN105678788A (zh) | 一种基于hog和低秩分解的织物疵点检测方法 | |
CN114140445B (zh) | 基于重点关注区域提取的乳腺癌病理图像识别方法 | |
Akhtar et al. | Optical character recognition (OCR) using partial least square (PLS) based feature reduction: an application to artificial intelligence for biometric identification | |
CN112990214A (zh) | 一种医学图像特征识别预测模型 | |
Chatterjee et al. | A novel method for IDC prediction in breast cancer histopathology images using deep residual neural networks | |
Jonnalagedda et al. | [regular paper] mvpnets: Multi-viewing path deep learning neural networks for magnification invariant diagnosis in breast cancer | |
Di Leo et al. | Towards an automatic diagnosis system for skin lesions: estimation of blue-whitish veil and regression structures | |
CN113361407B (zh) | 基于PCANet的空谱特征联合高光谱海冰图像分类方法 | |
WO2014006421A1 (en) | Identification of mitotic cells within a tumor region | |
CN111507992A (zh) | 一种基于内外应力的低分化腺体分割方法 | |
Albashish et al. | Ensemble learning of tissue components for prostate histopathology image grading | |
Rathore et al. | A novel approach for ensemble clustering of colon biopsy images | |
Sertel et al. | Computer-aided prognosis of neuroblastoma: classification of stromal development on whole-slide images | |
Fetisov et al. | Unsupervised prostate cancer histopathology image segmentation via meta-learning | |
CN115018820A (zh) | 基于纹理加强的乳腺癌多分类方法 | |
CN111415350B (zh) | 一种用于检测宫颈病变的阴道镜图像识别方法 | |
CN114155399A (zh) | 基于多特征融合递进式判别的乳腺病理全切片分类方法 | |
Inamdar et al. | A Novel Attention based model for Semantic Segmentation of Prostate Glands using Histopathological Images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11809749 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13807135 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012525453 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 11809749 Country of ref document: EP Kind code of ref document: A1 |