WO2018131723A1 - Image processing device and image processing method - Google Patents

Image processing device and image processing method

Info

Publication number
WO2018131723A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
learning
processing
unit
data
Prior art date
Application number
PCT/JP2018/001080
Other languages
English (en)
Japanese (ja)
Inventor
佑基 島原
皓 菅原
亨 青池
夏麿 朽名
高寛 上坂
Original Assignee
エルピクセル株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by エルピクセル株式会社
Publication of WO2018131723A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis

Definitions

  • The present invention relates to an image processing device, an image processing method, an image processing program, an image classification device, and an image analysis support device used for image processing.
  • As a technology developed in view of such problems, there is, for example, the technology proposed in Patent Document 1.
  • In Patent Document 1, classification and analysis of cells in an image are performed using a neural network trained in advance.
  • However, since the cell classification apparatus described in Patent Document 1 uses a neural network trained in advance, it cannot cope with the rapid diversification of image analysis and image classification methods that accompanies recent technological advances. Moreover, with the conventional cell analysis apparatus, a user needs high individual ability, for example considerable knowledge or experience, to analyze and classify images.
  • The present invention has been made in view of such circumstances. Its object is to support people who lack such knowledge or experience in analyzing and classifying images, and to provide an image processing apparatus that can handle a wide variety of image processing and classification tasks by utilizing so-called machine learning.
  • An image processing apparatus includes an input unit that inputs an analysis target image.
  • The image processing unit performs processing on the analysis target image input to the input unit, and outputs an analysis result image on which the processing has been performed together with a processing process that is a combination of a plurality of the processes.
  • The analysis target image and the processing process are stored in a storage unit, and a plurality of processing target images are collectively processed by the processing process output from the storage unit.
  • The learning unit learns the analysis target image as learning image data and the processing process as learning processing process data, and provides the learning image data and the learning processing process data to the image processing unit.
  • As a result, an optimal processing process is output, which can cope with the rapid diversification of image analysis and image classification methods, and appropriate image processing can be performed regardless of the user's individual ability.
  • FIG. 1 is a block diagram showing functions of the image processing apparatus according to the first embodiment of the present invention.
  • The image processing apparatus 100 includes an input unit 101, an image processing unit 102, a storage unit 103, a batch processing unit 104, a learning unit 105, and a statistical analysis unit 106.
  • The image processing apparatus 100 may be mounted in a tissue/cell image acquisition apparatus such as a virtual slide, or in a server connected to the tissue/cell image acquisition apparatus via a network.
  • The image processing apparatus 100 is connected to an output device 107, such as a display or a printer.
  • The input unit 101, the image processing unit 102, the storage unit 103, the batch processing unit 104, and the learning unit 105 may be realized by a program or as modules.
  • In the following description, each processing unit is described as the subject of operation; however, this may be read as the CPU executing each processing unit as a program.
  • Each processing unit may be placed on a so-called cloud.
  • The analysis target image is input to the input unit 101.
  • The analysis target image is, for example, an image obtained by imaging a biological sample such as a cell, a subcellular structure, or a biological tissue using an optical microscope.
  • It is also possible to acquire still image data encoded in JPG, JPEG 2000, PNG, TIFF, or similar formats, captured at predetermined time intervals by an imaging unit such as a camera built into the microscope, and use those images as analysis target images.
  • The input unit 101 can also extract still image frames at predetermined intervals from moving image data in formats such as Motion JPEG, MPEG, H.264, or HD/SDI, and input those images as analysis target images. Furthermore, an image acquired by the imaging unit via a bus, a network, or the like may be input to the input unit 101 as an analysis target image.
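  • For illustration, a minimal sketch of the frame-extraction step using OpenCV follows; the function name, file handling, and sampling interval are assumptions for this example, not part of the patent.

```python
import cv2

def extract_frames(video_path: str, every_n_frames: int = 30) -> list:
    """Extract every n-th frame from a video as a still image (hypothetical helper)."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            frames.append(frame)  # BGR ndarray, usable as an analysis target image
        index += 1
    cap.release()
    return frames
```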
  • The image processing unit 102 extracts a plurality of feature amounts from the analysis target image input to the input unit 101 and extracts the corresponding optimal learning processing process data from the learning unit 105, described later. Furthermore, the image processing unit 102 performs processing on the analysis target image based on the optimal learning processing process data and causes the output device 107 to display analysis result images that are combinations of the processing. The image processing unit 102 also outputs the analysis result image and the processing process determined by the user's selection to the storage unit 103.
  • The storage unit 103 temporarily stores the analysis result image and the processing process output from the image processing unit 102 in a cache memory.
  • The batch processing unit 104 extracts the processing process temporarily stored in the storage unit 103, collectively processes a plurality of processing target images input to the input unit 101 based on that process, and causes the output device 107 to display the analysis results regarding the subject.
  • The learning unit 105 extracts the analysis target image and the processing process temporarily stored in the storage unit 103 and learns them. It then provides the learning image data and learning processing process data to the image processing unit 102.
  • The statistical analysis unit 106 performs statistical analysis on the analysis results regarding the subject output by the batch processing unit 104 and displays the statistical analysis results on the output device 107.
  • The output device 107 is, for example, a device such as a display or a printer, and displays the analysis result images and statistical analysis results.
  • In the image processing apparatus 100, using the learning image data and learning processing process data learned by the learning unit 105, a processing process that suits the user's purpose is output, and a plurality of processing target images can be collectively processed and statistically analyzed.
  • The input unit 101 outputs the input analysis target image to the image processing unit 102 and the input processing target images to the batch processing unit 104. Note that the images input to the input unit 101 as analysis target images differ from those input as processing target images.
  • The image processing unit 102 calculates a plurality of feature amounts from the analysis target image.
  • The feature amounts include shape feature amounts derived from the shape of the subject and texture feature amounts derived from the texture of the subject.
  • The texture feature amount in the present embodiment is obtained as follows: the luminance range from the lowest to the highest luminance is equally divided to obtain 16 threshold values, and the image is binarized at each threshold to yield a group of 16 monochrome images. For each of the 16 binarized images, white regions and black regions are extracted, morphological parameters such as the area, perimeter, number, and complexity of the imaged objects are measured, and for each morphological parameter the statistics (average, maximum, variance, median) over the entire group of 16 images are calculated. The resulting group of numerical values is the texture feature amount; a code sketch of this threshold sweep follows below.
  • The texture feature amount thus includes what can be called multiple shape feature amounts. Further, shape feature amounts are extracted not only from the grayscale image but also from the boundary image obtained by applying a Sobel filter as preprocessing.
  • The texture feature amount also includes values calculated from the luminance of the entire image: a group of numerical values focusing on the average luminance, maximum luminance, luminance distribution, luminance histogram, and luminance relationships (differences and products) between adjacent pixels, which can be called luminance feature amounts. Examples of the feature amounts extracted by the image processing unit 102 are listed in Table 1.
  • The shape feature amounts (morphological parameters) are listed in Table 2, and the texture feature amounts in Table 3.
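  • As a rough illustration of the multi-threshold sweep described above (a sketch, not the patent's implementation; the function names, the use of scikit-image, and the restriction to white regions are assumptions):

```python
import numpy as np
from skimage.measure import label, regionprops

def texture_features(gray: np.ndarray, n_thresholds: int = 16) -> dict:
    """Binarize at equally spaced luminance thresholds, measure regions,
    and aggregate morphological statistics over the whole threshold sweep."""
    lo, hi = float(gray.min()), float(gray.max())
    thresholds = np.linspace(lo, hi, n_thresholds + 2)[1:-1]  # interior levels only
    areas, perimeters, counts = [], [], []
    for t in thresholds:
        regions = regionprops(label(gray > t))  # white regions at this threshold
        counts.append(len(regions))
        areas.extend(r.area for r in regions)
        perimeters.extend(r.perimeter for r in regions)

    def stats(values):
        v = np.asarray(values, dtype=float)
        if v.size == 0:
            return (0.0, 0.0, 0.0, 0.0)
        return (v.mean(), v.max(), v.var(), float(np.median(v)))

    return {
        "area (mean, max, var, median)": stats(areas),
        "perimeter (mean, max, var, median)": stats(perimeters),
        "count (mean, max, var, median)": stats(counts),
    }
```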
  • The image processing unit 102 extracts the optimal learning processing process data from the learning unit 105 using the calculated feature amounts of the analysis target image. Based on the extracted learning processing process data, the image processing unit 102 combines the optimal processing and causes the output device 107 to display an analysis result image.
  • The processing includes, for example, grayscale conversion, black-and-white inversion, binarization, hole filling, opening, and contour extraction, but is not limited thereto. The image processing unit 102 also outputs the processing process determined by the user's selection and the analysis target image to the storage unit 103.
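  • A minimal sketch of one such chain of operations with OpenCV follows; the specific thresholding method, kernel size, and contour-drawing step are assumptions for illustration only.

```python
import cv2
import numpy as np

def example_process(image: np.ndarray) -> np.ndarray:
    """Grayscale -> binarize -> fill holes -> opening -> contour extraction."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Fill holes: flood-fill the background from a corner, invert, OR with the mask.
    flood = binary.copy()
    h, w = binary.shape
    mask = np.zeros((h + 2, w + 2), np.uint8)
    cv2.floodFill(flood, mask, (0, 0), 255)
    filled = binary | cv2.bitwise_not(flood)

    # Morphological opening to remove small specks.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    opened = cv2.morphologyEx(filled, cv2.MORPH_OPEN, kernel)

    # Contour extraction, drawn onto a copy of the input for display.
    contours, _ = cv2.findContours(opened, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    result = image.copy()
    cv2.drawContours(result, contours, -1, (0, 255, 0), 2)
    return result
```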
  • The storage unit 103 temporarily stores the processing process and the analysis target image output by the image processing unit 102 in a so-called cache memory.
  • The storage unit 103 outputs the processing process to the batch processing unit 104 and, at the same time, outputs the processing process and the analysis target image to the learning unit 105.
  • The batch processing unit 104 acquires a plurality of processing target images from the input unit 101 and the processing process from the storage unit 103.
  • The batch processing unit 104 processes the plurality of processing target images at once using that process.
  • The batch processing unit 104 then outputs analysis results regarding the subject from the batch-processed images and causes the output device 107 to display them.
  • The analysis result regarding the subject may be, for example, the total number of cells in all the processing target images, but is not limited thereto.
  • For example, it may include multiple numerical values, such as counts of several cell types, e.g., the number of cells with protrusions and the number of cells without protrusions.
  • The batch processing unit 104 can output the numerical analysis results regarding the subject to the output device 107 as a table or a graph.
  • When the learning unit 105 acquires the processing process and the analysis target image from the storage unit 103, it learns the processing process as learning processing process data and the analysis target image as learning image data.
  • The learning is performed using, for example, a conventional machine learning technique.
  • The learning image data includes the feature amounts.
  • The statistical analysis unit 106 acquires the analysis results regarding the subject output by the batch processing unit 104, performs statistical analysis on these data, and displays the statistical analysis results on the output device 107.
  • The statistical analysis includes, for example, calculation of an average value, calculation of a standard deviation, and significance testing, but is not limited thereto.
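  • As an illustrative sketch of such statistics, assuming two groups of cell counts produced by batch processing (the numbers are placeholders, and the choice of a two-sample t-test is an assumption):

```python
import numpy as np
from scipy import stats

# Hypothetical cell counts from two batches of processed images.
group_a = np.array([102, 98, 110, 95, 105])
group_b = np.array([120, 118, 125, 122, 119])

print("mean A:", group_a.mean(), "std A:", group_a.std(ddof=1))
print("mean B:", group_b.mean(), "std B:", group_b.std(ddof=1))

# Two-sample t-test as one possible significance test.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```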
  • FIG. 2 is a flowchart for explaining the operation of the image processing unit 102.
  • The image processing unit 102 calculates the plurality of feature amounts described above from the analysis target image and extracts the learning processing process data corresponding to those feature amounts from the learning unit 105.
  • Specifically, the image processing unit 102 selects the learning image data (described later) whose feature amounts approximate those of the analysis target image, and extracts the associated learning processing process data from the learning unit 105 as the optimal learning processing process data.
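  • One plausible reading of this selection step is a nearest-neighbor lookup over stored feature vectors; the following sketch assumes Euclidean distance and an in-memory store, neither of which is specified by the patent.

```python
import numpy as np

class ProcessStore:
    """Maps learning-image feature vectors to their recorded processing pipelines."""

    def __init__(self):
        self.features = []   # one 1-D feature vector per learned image
        self.pipelines = []  # matching pipeline descriptions (e.g., op-name lists)

    def learn(self, feature_vec, pipeline):
        self.features.append(np.asarray(feature_vec, dtype=float))
        self.pipelines.append(pipeline)

    def optimal_pipeline(self, query_vec):
        """Return the pipeline whose learning image best approximates the query."""
        q = np.asarray(query_vec, dtype=float)
        dists = [np.linalg.norm(f - q) for f in self.features]
        return self.pipelines[int(np.argmin(dists))]

# Usage sketch with invented feature values and pipeline labels:
store = ProcessStore()
store.learn([0.2, 1.5, 3.0], ["grayscale", "binarize", "opening", "contours"])
best = store.optimal_pipeline([0.25, 1.4, 2.9])
```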
  • The image processing unit 102 performs processing on the analysis target image input to the input unit 101.
  • The image processing unit 102 combines the optimal processing described above based on the learning processing process data and causes the output device 107 to display the analysis result images.
  • FIG. 3 is a diagram illustrating an example of the display of the analysis result image processed by the image processing unit 102.
  • Reference numeral 301 denotes an input analysis target image.
  • Reference numeral 302 denotes an image obtained by performing gray scale processing on the analysis target image.
  • Reference numeral 303 denotes an image group obtained by performing black-and-white reversal processing on the image 302: 3031 is the image obtained by reversing 302, and 3032 is the image without reversal.
  • The user selects either 3031 or 3032 according to the purpose of the analysis; the figure shows that the user has selected 3032.
  • Reference numeral 304 denotes an image group obtained by performing binarization processing on the image 3032.
  • Reference numerals 3041 to 3044 denote images binarized with different threshold values.
  • Reference numeral 305 denotes an image group obtained by performing hole-filling processing on the image 3042; reference numerals 3051 to 3053 denote images filled with different settings.
  • Reference numeral 306 denotes an image group obtained by performing opening processing on the image 3052; reference numerals 3061 to 3063 denote images opened with different settings.
  • Reference numeral 307 denotes the image obtained by performing contour extraction processing on the image 3062. The user likewise selects the images suited to the purpose in steps 304 to 307.
  • The analysis result images are displayed in the optimal order, for example in order of how closely the feature amounts of the underlying learning processing process data approximate those of the analysis target image, but other orders are possible.
  • For example, the image processing unit 102 displays the analysis result image of the optimal process on the left side at each step.
  • the image processing unit 102 outputs the analysis result image and the processing process to the storage unit 103.
  • the storage unit 103 temporarily stores the analysis result image output from the image processing unit 102 and the processing process in the cache memory.
  • FIG. 4 is a flowchart for explaining the operation of the batch processing unit 104.
  • The user inputs a plurality of processing target images to be batch processed to the input unit 101.
  • The plurality of images may number more than 1000.
  • The batch processing unit 104 extracts and sets the processing process temporarily stored in the storage unit 103.
  • The batch processing unit 104 batch-processes the plurality of processing target images based on the set processing process and outputs the analysis results regarding the subject, as sketched below.
  • Numerical values, tables, graphs, and the like for the analysis results regarding the subject can be displayed on the output device 107.
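  • A schematic of the batch step, applying one fixed pipeline to a directory of images (the directory layout, file glob, and the contour-counting stand-in for cell counting are all assumptions):

```python
from pathlib import Path
import cv2

def batch_process(image_dir: str, process, analyze) -> list:
    """Apply the same processing process to every image in a directory and
    collect per-image analysis results (e.g., cell counts)."""
    results = []
    for path in sorted(Path(image_dir).glob("*.png")):
        image = cv2.imread(str(path))
        mask = process(image)          # the pipeline fixed by the user's selections
        results.append(analyze(mask))  # e.g., count regions as cells
    return results

def count_cells(binary_mask) -> int:
    """Count external contours in a single-channel binary mask as cells."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return len(contours)
```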
  • FIG. 7 is an example of an analysis result relating to the subject, and is a diagram illustrating a distribution table of the number of cells in the plurality of processing target images that are collectively processed.
  • FIG. 5 is a flowchart for explaining the operation of the learning unit 105.
  • The learning unit 105 extracts the analysis target image and the processing process temporarily stored in the storage unit 103, and learns the analysis target image as learning image data and the processing process as learning processing process data.
  • The learning unit performs this learning using, for example, a conventional machine learning technique.
  • The learning unit 105 outputs the optimal learning image data and learning processing process data to the image processing unit 102.
  • The selection of the optimal learning image data and learning processing process data is performed based on the feature amounts of the analysis target image, as described above.
  • FIG. 6 is a flowchart for explaining the operation of the statistical analysis unit 106.
  • The statistical analysis unit 106 performs statistical analysis on the analysis results regarding the subject and causes the output device 107 to display the statistical analysis results as numerical values, tables, graphs, and the like.
  • The statistical analysis unit 106 takes as input the analysis results regarding the subject output by the batch processing unit 104.
  • The statistical analysis unit 106 performs statistical analysis, such as calculation of an average value, calculation of a standard deviation, and significant-difference testing, on the multiple numerical values included in the analysis results regarding the subject.
  • Statistical analysis is not limited to these; the user can arbitrarily select what kind of statistical analysis is performed.
  • FIG. 8 is an example of the statistical analysis result, and is a diagram displaying the average value and error of the number of cells subjected to batch processing.
  • FIG. 9 is a block diagram showing functions of the image processing apparatus 900 according to the second embodiment of the present invention.
  • The image processing apparatus 900 includes an input unit 901, a feature extraction unit 902, a classification condition output unit 903, a classification unit 904, and a learning unit 905.
  • The image processing apparatus 900 is connected to an output device 906, such as a display or a printer.
  • The input unit 901 receives a plurality of data sets including analysis target images.
  • The analysis target images may be the same kind of images as in the first embodiment.
  • The user sorts the plurality of analysis target images into arbitrary groups according to the purpose of classification and inputs them to the input unit 901 as data sets. For example, to classify a plurality of images into cell images with protrusions and cell images without protrusions, the user inputs to the input unit 901 one data set containing only cell images with protrusions and another containing only cell images without protrusions.
  • The feature extraction unit 902 extracts feature amounts from the data sets, outputs the feature amount data to the classification condition output unit 903, and outputs the analysis target images to the learning unit 905.
  • The classification condition output unit 903 acquires the feature amounts of the data sets from the feature extraction unit 902 and extracts the optimal learning classification condition data from the learning unit 905 based on those feature amounts.
  • The classification condition output unit 903 outputs classification conditions using the optimal learning classification condition data.
  • The classification unit 904 classifies the classification target images acquired from the input unit 901 based on the classification conditions acquired from the classification condition output unit 903, and displays, for example, the classification tags and analysis results as illustrated in FIG. 11. Further, the classification unit 904 outputs the classification conditions to the learning unit 905.
  • The learning unit 905 learns the analysis target images acquired from the feature extraction unit 902 as learning image data and the classification conditions acquired from the classification condition output unit 903 as learning classification condition data, and outputs the optimal learning classification condition data to the classification condition output unit 903.
  • The input unit 901 outputs the plurality of data sets including the input analysis target images to the feature extraction unit 902 and outputs the input classification target images to the classification unit 904. Note that the images input to the input unit 901 as analysis target images differ from those input as classification target images.
  • The feature extraction unit 902 extracts feature amounts from the data sets.
  • The feature extraction unit 902 calculates a feature amount from each analysis target image.
  • When the analysis target image is a time-series image, the feature extraction unit 902 decomposes it into separate images for each time point and extracts a feature amount from each decomposed image, so that a feature group for each time point is output to the classification condition output unit 903.
  • When the analysis target image is a multiband image captured at a plurality of fluorescence wavelengths (for example, the cell nucleus and microtubules are fluorescently labeled red and green, respectively, and both are captured), it is decomposed into its bands to generate a group of grayscale images; a feature amount is extracted from each grayscale image and output to the classification condition output unit 903.
  • The feature extraction unit 902 outputs to the learning unit 905 the analysis target images of the data sets for which feature amount extraction has been completed. The data on the analysis target images is assumed to include the feature amount information.
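  • A small sketch of the band-decomposition step, assuming a standard H x W x C RGB fluorescence composite (the channel-to-structure assignments and the simple luminance statistics are illustrative assumptions):

```python
import numpy as np

def decompose_bands(multiband: np.ndarray) -> dict:
    """Split an H x W x C fluorescence composite into per-band grayscale images."""
    band_names = ["red_nucleus", "green_microtubule", "blue"][: multiband.shape[2]]
    return {name: multiband[:, :, i] for i, name in enumerate(band_names)}

def per_band_features(multiband: np.ndarray) -> dict:
    """Extract a simple luminance feature vector for each decomposed band."""
    features = {}
    for name, gray in decompose_bands(multiband).items():
        g = gray.astype(float)
        features[name] = [g.mean(), g.max(), g.var(), float(np.median(g))]
    return features
```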
  • The classification condition output unit 903 outputs classification conditions based on the data sets and their feature amounts. Specifically, the classification condition output unit 903 acquires the feature amounts of the data sets from the feature extraction unit 902 and extracts the optimal learning classification condition data from the learning unit 905 using those feature amounts. The classification condition output unit 903 selects the learning classification condition data that approximates the feature amounts of the data sets and outputs classification conditions to the classification unit 904 based on that data and the feature amounts of the data sets.
  • The classification unit 904 acquires the classification target images from the input unit 901 and the classification conditions from the classification condition output unit 903.
  • The classification unit 904 classifies the classification target images into the appropriate groups using the classification conditions and displays the classification tag 1103 and the analysis results as illustrated in FIG. 11. Further, the classification unit 904 provides the classification conditions to the learning unit 905.
  • The learning unit 905 learns the analysis target images acquired from the feature extraction unit 902 as learning image data and the classification conditions acquired from the classification condition output unit 903 as learning classification condition data, and outputs the optimal learning classification condition data to the classification condition output unit 903.
  • The learning unit 905 performs this learning using, for example, a conventional machine learning technique.
  • FIG. 10 is a flowchart for explaining the operation of the classification condition output unit 903.
  • The classification condition output unit 903 extracts the feature amounts of the data sets from the feature extraction unit 902.
  • The classification condition output unit 903 compares the feature amounts of the respective data sets and repeatedly executes this processing until a combination pattern suitable for image classification is selected from the many possible combination patterns of feature amounts. By repeating this process, each data set is clustered.
  • Here, clustering refers to automatic classification without external criteria, so-called "unsupervised classification."
  • A measure indicating the relationship between data points is defined over the set of data to be classified, and the data set is divided into several clusters so that relevance within a cluster is high and relevance between different clusters is low. Based on the clustering result, image data are therefore grouped so that highly related images belong to the same cluster, without requiring preprocessing to set an external reference.
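  • As one unsupervised method matching this description (the patent does not name a clustering algorithm; k-means, the feature values, and the cluster count here are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-image feature vectors from the two input data sets.
features = np.array([
    [12.0, 0.8], [11.5, 0.9], [12.3, 0.7],   # e.g., cells without protrusions
    [25.1, 3.2], [24.8, 3.5], [26.0, 3.0],   # e.g., cells with protrusions
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print(kmeans.labels_)  # cluster assignment per image, e.g., [0 0 0 1 1 1]
```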
  • The classification condition output unit 903 extracts the optimal learning classification condition data from the learning unit 905 based on the clustering result.
  • The classification condition output unit 903 outputs classification conditions based on the optimal learning classification condition data and the clustering result.
  • The classification condition output unit 903 also provides the classification conditions to the learning unit 905 as learning classification condition data to be learned.
  • The learning classification condition data may include the analysis target images and their feature amounts.
  • The learned classification condition data is output back to the classification condition output unit 903 when a classification condition is to be output.
  • FIG. 11 shows an example of the screen on which the classification target images classified by the classification unit 904 are displayed on the output device 906.
  • Reference numeral 1101 denotes a group of classification target images classified as cell images having no protrusions.
  • Reference numeral 1102 denotes a group of classification target images classified as cell images having protrusions.
  • DESCRIPTION OF SYMBOLS: 100 ... image processing apparatus, 101 ... input unit, 102 ... image processing unit, 103 ... storage unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an image processing device for processing and classifying various types of images, without depending on the individual ability of a user, by utilizing so-called machine learning. An image processing device (100) according to one aspect of the present invention includes an input unit (101) for inputting an analysis target image, processes the analysis target image input to the input unit (101) in an image processing unit (102), and outputs an analysis result image on which the processing has been performed together with a processing process that is a combination of a plurality of the processes. The analysis target image and the processing process are stored in a storage unit (103), and a plurality of processing target images are batch-processed by the processing process output from the storage unit (103). Furthermore, in a learning unit (105), the analysis target image is learned as learning image data and the processing process as learning processing process data. The learning image data and the learning processing process data are provided to the image processing unit (102).
PCT/JP2018/001080 2017-01-16 2018-01-10 Image processing device and image processing method WO2018131723A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017005444A JP6329651B1 (ja) 2017-01-16 2017-01-16 Image processing device and image processing method
JP2017-005444 2017-01-16

Publications (1)

Publication Number Publication Date
WO2018131723A1 true WO2018131723A1 (fr) 2018-07-19

Family

ID=62186717

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/001080 WO2018131723A1 (fr) 2017-01-16 2018-01-10 Image processing device and image processing method

Country Status (2)

Country Link
JP (1) JP6329651B1 (fr)
WO (1) WO2018131723A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230334832A1 (en) * 2020-09-29 2023-10-19 Shimadzu Corporation Image analyzing device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008129881A1 * 2007-04-18 2008-10-30 The University Of Tokyo Feature value selection method, feature value selection device, image classification method, image classification device, computer program, and recording medium
JP2015137857A * 2014-01-20 2015-07-30 富士ゼロックス株式会社 Detection control device, program, and detection system
JP2015201819A * 2014-04-10 2015-11-12 株式会社東芝 Image quality improvement system, image quality improvement method, and program

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CARTA, vol. 56, 1 July 2013 (2013-07-01) *
vol. 18, no. 14, 24 February 1994 (1994-02-24), pages 23 - 28 *

Also Published As

Publication number Publication date
JP6329651B1 (ja) 2018-05-23
JP2018116376A (ja) 2018-07-26

Similar Documents

Publication Publication Date Title
Veta et al. Assessment of algorithms for mitosis detection in breast cancer histopathology images
US8600143B1 (en) Method and system for hierarchical tissue analysis and classification
CN111524137B (zh) Cell recognition and counting method, device, and computer equipment based on image recognition
Zhang et al. Automated semantic segmentation of red blood cells for sickle cell disease
Huang et al. Time-efficient sparse analysis of histopathological whole slide images
WO2014021175A1 (fr) Device and method for detecting a necrotic cell region, and storage medium storing a computer program for detecting a necrotic cell region
JP2022551683A (ja) Method and system for performing non-invasive genetic testing using an artificial intelligence (AI) model
US10769432B2 (en) Automated parameterization image pattern recognition method
CN113658174B (zh) Micronucleus image detection method based on deep learning and image processing algorithms
Pourakpour et al. Automated mitosis detection based on combination of effective textural and morphological features from breast cancer histology slide images
Shihabuddin et al. Multi CNN based automatic detection of mitotic nuclei in breast histopathological images
JP2018125019A (ja) Image processing device and image processing method
Huang et al. HEp-2 cell images classification based on textural and statistic features using self-organizing map
WO2018131723A1 (fr) Image processing device and image processing method
CN107590806A (zh) Detection method and system based on brain medical imaging
JP4609322B2 (ja) Chromosome state evaluation method and evaluation system
Sertel et al. Computer-aided prognosis of neuroblastoma: classification of stromal development on whole-slide images
Yancey Deep Feature Fusion for Mitosis Counting
CN113177602B (zh) Image classification method, apparatus, electronic device, and storage medium
Amitha et al. Developement of computer aided system for detection and classification of mitosis using SVM
Bai et al. A convolutional neural network combined with color deconvolution for mitosis detection
Zanotelli et al. A flexible image segmentation pipeline for heterogeneous multiplexed tissue images based on pixel classification
Li et al. HEp-2 cells staining patterns classification via wavelet scattering network and random forest
Zhang et al. Extraction of karyocytes and their components from microscopic bone marrow images based on regional color features
Alkrimi et al. Isolation and classification of red blood cells in anemic microscopic images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18739129

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18739129

Country of ref document: EP

Kind code of ref document: A1