WO2021102844A1 - Method, Apparatus and System for Processing Images

Method, Apparatus and System for Processing Images

Info

Publication number
WO2021102844A1
WO2021102844A1 PCT/CN2019/121731 CN2019121731W WO2021102844A1 WO 2021102844 A1 WO2021102844 A1 WO 2021102844A1 CN 2019121731 W CN2019121731 W CN 2019121731W WO 2021102844 A1 WO2021102844 A1 WO 2021102844A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
analysis
analysis result
image block
training
Prior art date
Application number
PCT/CN2019/121731
Other languages
English (en)
French (fr)
Inventor
李瑶鑫
张长征
陈晓仕
涂丹丹
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to CN201980075397.1A (published as CN113261012B)
Priority to EP19954469.3A (published as EP3971762A4)
Priority to PCT/CN2019/121731 (published as WO2021102844A1)
Publication of WO2021102844A1
Priority to US17/590,005 (published as US20220156931A1)

Classifications

    • G06V 10/809 Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature-extraction or classification level, of classification results, e.g. where the classifiers operate on the same input data
    • G06T 11/00 2D [two-dimensional] image generation
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/11 Region-based segmentation
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis
    • G06V 10/764 Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82 Image or video recognition or understanding using neural networks
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images, e.g. editing
    • G16H 50/20 ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G06T 2207/20021 Dividing image into blocks, subimages or windows
    • G06T 2207/20081 Training; learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30024 Cell structures in vitro; tissue sections in vitro
    • G06T 2207/30096 Tumor; lesion
    • G06T 2207/30168 Image quality inspection
    • G06V 2201/03 Recognition of patterns in medical or anatomical images

Definitions

  • This application relates to the field of artificial intelligence (AI), and more specifically, to methods, devices, and systems for processing images.
  • The embodiments of this application provide a method, device, and system for processing images, which can analyze pathological images and reduce judgment errors caused by human subjective factors, and which can also help doctors with little or no relevant experience obtain analysis results of pathological images.
  • The analysis results help doctors diagnose the patient's condition.
  • In a first aspect, an embodiment of the present application provides a method for processing pathological images, including: acquiring a plurality of image blocks obtained by segmenting the pathological image to be analyzed; and inputting the plurality of image blocks into a first analysis model to obtain a first analysis result, where the first analysis model classifies each of the image blocks according to the number or area of suspicious lesion components it contains, and the first analysis result indicates whether the type of each image block is the first type or the second type.
  • The first type indicates that the number or area of suspicious lesion components in the image block is greater than or equal to a preset threshold; the second type indicates that the number or area of suspicious lesion components in the image block is less than the preset threshold.
  • At least one second-type image block in the first analysis result is input into a second analysis model to obtain a second analysis result, and the final analysis result of the pathological image is obtained by combining the first analysis result and the second analysis result.
  • the above-mentioned first analysis model may also be referred to as an image block classification model.
  • the above-mentioned second analysis model may also be referred to as a suspicious lesion component detection model.
  • the image block whose type is the first type may also be referred to as the first type image block.
  • the image block of the second type may also be referred to as the second type image block.
  • The above technical solution uses the first analysis model and the second analysis model to process the pathological image and obtain its analysis result. This reduces judgment errors caused by human subjective factors and avoids making the interpretation of pathological images depend too heavily on experienced doctors. It can therefore help doctors with little or no relevant experience obtain the analysis result of a pathological image, which in turn helps them diagnose the patient's condition.
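As an illustrative sketch of the two-stage flow just described, the snippet below splits image blocks into the two types by a count threshold and routes second-type blocks toward detection. The function names and the fixed threshold are assumptions for illustration, not the patent's trained models:

```python
# Minimal sketch of the two-stage analysis. The fixed count threshold and
# the function names are illustrative assumptions, not the patent's
# actual (trained) first and second analysis models.

def classify_block(suspicious_count: int, threshold: int) -> str:
    """First analysis model stand-in: label an image block by its number
    of suspicious lesion components."""
    return "first" if suspicious_count >= threshold else "second"

def split_blocks(block_counts, threshold=5):
    """Return indices of first-type and second-type blocks; the
    second-type blocks would then be passed to the second analysis model
    to locate suspicious lesion components."""
    first, second = [], []
    for i, count in enumerate(block_counts):
        (first if classify_block(count, threshold) == "first" else second).append(i)
    return first, second
```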
  • In a possible implementation, the method further includes: inputting each image block into a third analysis model to obtain a third analysis result, where the third analysis model predicts the image quality of each image block. In this case, obtaining the final analysis result of the pathological image includes synthesizing the first analysis result, the second analysis result, and the third analysis result.
  • Using the quality information of the pathological image can assist in determining the analysis result of the pathological image.
  • This third analysis model may also be referred to as an image quality prediction model.
  • In a possible implementation, acquiring the multiple image blocks includes: acquiring multiple initial image blocks formed after the pathological image to be analyzed is segmented; and inputting each of the initial image blocks into a style transfer model to obtain the multiple image blocks, where the style transfer model converts the style of each initial image block.
  • Unifying the styles of image blocks through style transfer can improve the accuracy of the classification results of the image blocks and the detection results of suspicious lesion components, thereby improving the accuracy of the analysis results of the pathological image.
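One simple illustration of unifying styles is per-channel statistics matching (a Reinhard-style normalization). This is only a stand-in for the learned style transfer model, which the text does not specify:

```python
from statistics import mean, pstdev

def match_channel(values, reference):
    """Shift one color channel of an image block so its mean and standard
    deviation match those of a reference block (Reinhard-style
    normalization; an illustrative stand-in for the learned style
    transfer model, not the model itself)."""
    v_mean, v_std = mean(values), pstdev(values)
    r_mean, r_std = mean(reference), pstdev(reference)
    if v_std == 0:
        return [float(r_mean)] * len(values)
    # Standardize against the block's own statistics, then rescale to the
    # reference statistics, clamping to the valid 8-bit range.
    return [min(255.0, max(0.0, (v - v_mean) / v_std * r_std + r_mean))
            for v in values]
```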
  • In a possible implementation, the method further includes: training an initial first analysis model according to a first training data set to obtain the first analysis model, where the initial first analysis model is an artificial intelligence (AI) model, the first training data set includes a plurality of first training images, and the label of each first training image is the first type or the second type.
  • the first training data set is also called training data set 2.
  • the first analysis model may also be referred to as an image block classification model.
  • The image block classification model may be obtained by training on training images labeled by multiple experienced doctors or pathologists. The image block classification model therefore classifies image blocks with higher accuracy.
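As a toy illustration of fitting a classifier from expert-labeled blocks, the sketch below learns only a count threshold. The real first analysis model is a trained AI model, so this is purely a stand-in:

```python
def fit_threshold(labeled_counts):
    """Pick the count threshold that best reproduces expert labels.
    `labeled_counts` is a list of (suspicious_component_count,
    is_first_type) pairs. A deliberately simple stand-in for training the
    AI-based image block classification model described above."""
    counts = sorted({c for c, _ in labeled_counts})
    candidates = counts + [counts[-1] + 1]
    best_t, best_acc = candidates[0], -1.0
    for t in candidates:
        # Accuracy of the rule "first type iff count >= t" on the labels.
        acc = sum((c >= t) == label for c, label in labeled_counts) / len(labeled_counts)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t
```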
  • In a possible implementation, the method further includes: training an initial second analysis model according to a second training data set to obtain the second analysis model, where the initial second analysis model is an AI model, the second training data set includes multiple second training images containing suspicious lesion components, and the label of each second training image is the location information of the suspicious lesion components in that training image.
  • the second training data set is also called training data set 3.
  • This second analysis model may also be referred to as a suspicious lesion component detection model.
  • The suspicious lesion component detection model can be obtained by training on training images labeled by multiple experienced doctors or pathologists, so its detection of suspicious lesion components in image blocks has a high accuracy rate.
  • In a possible implementation, the method further includes: training an initial third analysis model according to a third training data set to obtain the third analysis model, where the initial third analysis model is an AI model, the third training data set includes a plurality of third training images, and the label of each third training image is its image quality type.
  • the third training data set is also called training data set 1.
  • In a possible implementation, synthesizing the first analysis result and the second analysis result to obtain the final analysis result of the pathological image includes: inputting the first analysis result and the second analysis result into a decision model to obtain the final analysis result of the pathological image.
  • In a possible implementation, the pathological image is a pathological image of cervical cells, and the suspicious lesion component is a positive cervical cell.
  • In a second aspect, an embodiment of the present application provides a data processing device, including: an acquisition unit, configured to acquire a plurality of image blocks obtained by segmenting the pathological image to be analyzed; and an image analysis unit, configured to input the plurality of image blocks into a first analysis model to obtain a first analysis result, where the first analysis model classifies each of the plurality of image blocks according to the number or area of suspicious lesion components, and the first analysis result indicates whether the type of each image block is the first type or the second type. The first type indicates that the number or area of suspicious lesion components in the image block is greater than or equal to a preset threshold, and the second type indicates that the number or area of suspicious lesion components in the image block is less than the preset threshold.
  • The image analysis unit is further configured to input at least one second-type image block in the first analysis result into a second analysis model to obtain a second analysis result, where the second analysis model analyzes the positions of the suspicious lesion components in each input second-type image block. The device further includes a decision analysis unit, configured to synthesize the first analysis result and the second analysis result to obtain the final analysis result of the pathological image.
  • In a possible implementation, the device further includes an image quality detection unit, configured to input each image block into a third analysis model to obtain a third analysis result, where the third analysis model predicts the image quality of each image block. The decision analysis unit is then specifically configured to synthesize the first analysis result, the second analysis result, and the third analysis result to obtain the final analysis result of the pathological image.
  • In a possible implementation, the acquisition unit is specifically configured to acquire multiple initial image blocks formed after the pathological image to be analyzed is segmented, and to input each of the initial image blocks into a style transfer model to obtain the multiple image blocks, where the style transfer model converts the style of each initial image block.
  • In a possible implementation, the device further includes a first training unit, configured to train an initial first analysis model according to a first training data set to obtain the first analysis model, where the initial first analysis model is an AI model, the first training data set includes a plurality of first training images, and the label of each first training image is the first type or the second type.
  • In a possible implementation, the device further includes a second training unit, configured to train an initial second analysis model according to a second training data set to obtain the second analysis model, where the initial second analysis model is an AI model, the second training data set includes multiple second training images containing suspicious lesion components, and the label of each second training image is the location information of the suspicious lesion components in that training image.
  • In a possible implementation, the device further includes a third training unit, configured to train an initial third analysis model according to a third training data set to obtain the third analysis model, where the initial third analysis model is an AI model, the third training data set includes a plurality of third training images, and the label of each third training image is its image quality type.
  • the decision analysis unit is specifically configured to input the first analysis result and the second analysis result to the decision model to obtain the final analysis result of the pathological image.
  • In a third aspect, the present application provides a computing device system, including at least one memory and at least one processor, where the at least one memory stores computer instructions. When the at least one processor executes the computer instructions, the computing device system performs the method provided by the first aspect or any possible implementation of the first aspect.
  • In a fourth aspect, the present application provides a non-transitory readable storage medium storing a program. When a computing device executes the program, the computing device performs the method provided by the foregoing first aspect or any possible implementation of the first aspect.
  • The storage medium includes but is not limited to volatile memory, such as random access memory, and non-volatile memory, such as flash memory, hard disk drives (HDD), and solid state drives (SSD).
  • In a fifth aspect, the present application provides a computer program product comprising computer instructions. When a computing device executes the instructions, it performs the method of the foregoing first aspect or any possible implementation of the first aspect.
  • The computer program product may be a software installation package, which can be downloaded and executed on a computing device.
  • Fig. 1 is a schematic structural block diagram of a pathological image processing system according to an embodiment of the present application.
  • Fig. 2 is a schematic structural block diagram of another pathological image processing system provided according to an embodiment of the present application.
  • Fig. 3 is a schematic diagram of deployment of a pathology image processing system provided by an embodiment of the present application.
  • Fig. 4 is a schematic structural block diagram of a computing device according to an embodiment of the present application.
  • Fig. 5 is a schematic structural block diagram of a training system provided by an embodiment of the present application.
  • Fig. 6 is a schematic flowchart of a method for processing an image provided according to an embodiment of the present application.
  • Fig. 7 shows an image block before style transfer and the same image block after style transfer, according to an embodiment of the present application.
  • Fig. 8 is a schematic structural block diagram of a data processing device according to an embodiment of the present application.
  • Fig. 9 is a schematic structural block diagram of a computing device system according to an embodiment of the present application.
  • References in this specification to "one embodiment" or "some embodiments" mean that one or more embodiments of the present application include a specific feature, structure, or characteristic described in connection with that embodiment. Thus, the phrases "in one embodiment", "in some embodiments", and "in some other embodiments" appearing in different places in this specification do not necessarily all refer to the same embodiment; rather, they mean "one or more but not all embodiments", unless specifically emphasized otherwise.
  • The terms "comprising", "including", "having" and their variants all mean "including but not limited to", unless otherwise specifically emphasized.
  • "At least one" refers to one or more, and "multiple" refers to two or more.
  • "And/or" describes an association relationship between associated objects and indicates that three relationships are possible; for example, "A and/or B" can mean: A exists alone, A and B both exist, or B exists alone, where A and B may be singular or plural.
  • The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
  • "At least one of the following items" or similar expressions refer to any combination of those items, including a single item or any combination of multiple items.
  • For example, "at least one of a, b, or c" can mean: a; b; c; a and b; a and c; b and c; or a, b, and c, where each of a, b, and c may be singular or plural.
  • Pathological examination is a method of examining pathological changes in the body's organs, tissues, or cells, and can provide a basis for doctors to diagnose disease.
  • Morphological examination is one of the common pathological examination methods and can be divided into exfoliative cytology and biopsy. Exfoliative cytology collects cells that have been shed from human organs; after the collected cells are stained, the stained cells are observed for pathological examination. Common exfoliative cytology tests include sputum smears for lung cancer and urine smears for urinary tract tumors. In a biopsy, a small piece of tissue is taken from the diseased part of the patient's body to make a pathological slide, and pathological examination is performed by observing changes in the morphology and structure of the cells and/or tissues in the slide.
  • The subject of a pathological examination can include any component usable for such examination, such as cells, cell fragments, cell nuclei, intracellular substances (for example, hemosiderin deposited in macrophages in alveolar cells, or other intracellular deposits), and other tissues or substances observable under a microscope (such as fibrin or protein-like substances).
  • The objects of pathological examination are referred to as components in the embodiments of the present application.
  • For different diseases, the objects of pathological examination differ. For example, cervical cells are often used as the components, and pathological slides made from them are used for pathological examination.
  • Components can be divided into two categories: normal components, and components different from normal components (for example, diseased components or suspicious diseased components).
  • For cells, one category is normal cells; the other is abnormal cells, which may be, for example, diseased cells or suspicious diseased cells, cell debris, or cells containing specific substances (for example, hemosiderin or other deposits).
  • The pathological slides referred to in the embodiments of this application are pathological specimens made of components such as exfoliated cells or living tissue.
  • In some pathological examinations, the components to be examined need to be smeared onto the slide, so the resulting pathological slide can also be called a smear.
  • the pathological image referred to in the embodiments of the present application is an image obtained by performing digital processing (for example, scanning, shooting) on a pathological slide.
  • Currently, pathological examination usually involves a professional doctor observing the pathological slide under a microscope, or viewing the pathological image on a computer, and then giving an examination result for the pathological image.
  • The examination result of the pathological image is usually used to diagnose the disease. Relying only on doctors for pathological examination, on the one hand, greatly increases their workload; on the other hand, because some doctors may need to undertake a large number of pathological examination tasks, or may have limited professional skill, erroneous examination results are easily given.
  • the pathological image processing system can analyze the collected pathological images to obtain an output result, and the output result is the inspection result corresponding to the pathological image. Doctors can use the output results of the pathological image processing system to judge the patient's condition, perform auxiliary diagnosis, and perform preoperative analysis.
  • In some embodiments, the pathological image to be processed is a pathological image of cervical cells obtained using the ThinPrep cytologic test (TCT) technique.
  • the pathological examination of the pathological images of cervical cells is often used for cervical cancer preventive examinations and confirmation of cervical cancer conditions.
  • The components in a pathological image of cervical cells are the cervical cells themselves.
  • Fig. 1 is a schematic structural block diagram of a pathological image processing system according to an embodiment of the present application.
  • the pathological image processing system 100 shown in FIG. 1 includes: an image acquisition component 101, an image preprocessing component 102, an image analysis component 103, and a decision analysis component 104.
  • the image acquisition component 101 is used to acquire a pathological image and segment the pathological image to obtain N image blocks, where N is a positive integer greater than or equal to 2.
  • the embodiment of the present application does not limit the specific implementation manner for the image acquisition component 101 to collect pathological images.
  • the image acquisition component 101 can scan a pathological slide to obtain the pathological image.
  • the image acquisition component 101 can take pictures of a pathological slide to obtain the pathological image.
  • the image acquisition component 101 may receive pathological images from other equipment or devices.
  • the value of N may be a preset or manually set fixed value.
  • the fixed value can be 1000, 1500, 2000, etc.
  • Optionally, the value of N may be related to the resolution of the pathological image. For example, if the resolution of the pathological image is less than 500 × 500, N can be 500; if the resolution is greater than 500 × 500 and less than or equal to 1500 × 1500, N can be 1000; if the resolution is greater than 1500 × 1500, N can be 2000.
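The resolution-to-N mapping in the example above can be written directly. The function name and the behavior at the exact 500 × 500 boundary (which the text does not specify) are assumptions:

```python
def blocks_for_resolution(width: int, height: int) -> int:
    """Return N, the number of image blocks, from the pathological
    image's resolution, following the example cutoffs above (a sketch of
    one way the mapping could be implemented)."""
    if width <= 0 or height <= 0:
        raise ValueError("resolution must be positive")
    if width < 500 or height < 500:
        return 500
    if width <= 1500 and height <= 1500:
        return 1000
    return 2000
```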
  • the size of the image block may be a fixed value preset or manually set.
  • the size of each image block may be 50 ⁇ 50.
  • the value of N can be determined according to the size of the pathological image and the size of a preset or manually set image block.
  • Each image block can include at least multiple cells.
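A minimal way to enumerate fixed-size blocks, using the 50 × 50 example above. Dropping partial tiles at the right and bottom edges is an assumption, since the text does not specify edge handling:

```python
def tile_coordinates(width: int, height: int, block: int = 50):
    """Top-left coordinates of non-overlapping block-by-block tiles; the
    default size matches the 50 x 50 example above. Partial edge tiles
    are dropped in this sketch."""
    return [(x, y)
            for y in range(0, height - block + 1, block)
            for x in range(0, width - block + 1, block)]
```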
  • the image preprocessing component 102 may be used to determine the image quality of each of the N image blocks.
  • the image preprocessing component 102 can also be used to perform style transfer on the image block to obtain the image block after the style transfer has been performed.
  • Style transfer, also called image style transfer, refers to fusing a target image with the style of a reference image, so that the style of the output image obtained after the fusion is the same as or similar to that of the reference image.
  • the style of an image refers to the color and contrast of the image.
  • Pathological images obtained under different conditions may show different styles.
  • Using style transfer technology, a pathological image of one style can be converted into a pathological image whose style is the same as or similar to that of a reference pathological image. A pathological image after style transfer gives more accurate results when used in subsequent image analysis.
  • the image analysis component 103 may be used to obtain the image blocks processed by the image preprocessing component 102, and classify the obtained image blocks according to the trained AI model, and determine the type of the obtained image blocks.
  • For example, the image analysis component 103 may determine whether an acquired image block is a first-type image block or a second-type image block, where the number of suspicious diseased cells in a first-type image block is greater than or equal to a first preset threshold, and the number of suspicious diseased cells in a second-type image block is less than the first preset threshold.
  • Alternatively, the image analysis component 103 may determine whether an acquired image block is a first-type, second-type, or third-type image block, where the number of suspicious diseased cells in a first-type image block is greater than or equal to the first preset threshold, the number in a second-type image block is greater than or equal to a second preset threshold and less than the first preset threshold, and the number in a third-type image block is less than the second preset threshold.
  • the image analysis component 103 is also used to further detect the second type of image blocks obtained by classification, and detect suspicious diseased cells in the second type of image blocks.
  • the decision analysis component 104 may be used to obtain the first type image block determined by the image analysis component 103 and the second type image block containing suspicious diseased cells.
  • the decision analysis component 104 may determine the target output result corresponding to the pathological image according to the first type image block and the second type image block containing suspicious diseased cells.
  • the decision analysis component 104 may also obtain the image quality information output by the image preprocessing component 102. In this case, the decision analysis component may determine the target output result corresponding to the pathological image based on the first type of image block, the second type of image block containing suspicious diseased cells, and the image quality information.
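  • As a minimal illustration of the kind of rule the decision analysis component might apply, the sketch below combines the first type image blocks, the second type image blocks containing suspicious diseased cells, and the image quality information into one slide-level result. All function names, dictionary keys, thresholds, and result labels are hypothetical, not taken from the patent:

```python
# Hypothetical decision-analysis sketch: combine classification results
# and image quality information into a slide-level output. Names,
# thresholds, and result labels are illustrative assumptions only.

def decide(first_type_blocks, second_type_suspicious, quality_info,
           min_valid_fraction=0.5):
    """Return a slide-level result dict for one pathological image."""
    total = quality_info["total_blocks"]
    valid = quality_info["normal_blocks"]
    # If too few blocks have acceptable quality, request a rescan
    # instead of risking an unreliable result.
    if total == 0 or valid / total < min_valid_fraction:
        return {"result": "rescan", "reason": "low image quality"}
    # Any first type block (many suspicious cells) dominates the decision.
    if first_type_blocks:
        return {"result": "positive", "evidence": len(first_type_blocks)}
    # Second type blocks that still contain suspicious cells are flagged
    # for manual review rather than called positive outright.
    if second_type_suspicious:
        return {"result": "review", "evidence": len(second_type_suspicious)}
    return {"result": "negative", "evidence": 0}
```

  • In this sketch the quality gate runs first, mirroring the text's point that the decision may depend on the quality information as well as on the two kinds of image blocks.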
  • a pathology image processing system 200 is also provided. As shown in FIG. 2, the pathology image processing system 200 includes: an image acquisition component 201, an image analysis component 202, and a decision analysis component 203.
  • the function of the image capture component 201 is the same as the function of the image capture component 101.
  • the function of the image analysis component 202 is the same as the function of the image analysis component 103.
  • the decision analysis component 203 may be used to obtain the first type image blocks determined by the image analysis component 202 and the second type image blocks containing suspicious diseased cells.
  • the decision analysis component 203 can determine the target output result corresponding to the pathological image according to the first type image block and the second type image block containing suspicious diseased cells.
  • Fig. 3 is a schematic diagram of deployment of a pathology image processing system provided by an embodiment of the present application. All components in the pathology image processing system can be deployed in a cloud environment, which is an entity that uses basic resources to provide cloud services to users in a cloud computing mode.
  • the cloud environment includes a cloud data center and a cloud service platform.
  • the cloud data center includes a large number of basic resources (including computing resources, storage resources, and network resources) owned by a cloud service provider.
  • the computing resources included in the cloud data center can be a large number of computing devices (for example, servers).
  • the pathology image processing system can be implemented by a server in the cloud data center; the pathology image processing system can also be implemented by a virtual machine created in the cloud data center.
  • the pathological image processing system may also be a software device independently deployed on a server or virtual machine in a cloud data center, and the software device is used to realize the functions of the pathological image processing system.
  • the software device may also be distributedly deployed on multiple servers, or distributedly deployed on multiple virtual machines, or distributedly deployed on virtual machines and servers.
  • the pathology image processing system can be abstracted by the cloud service provider into a cloud service on the cloud service platform and provided to users. After a user purchases the cloud service on the cloud service platform, the cloud environment uses the pathology image processing system to provide the user with the cloud service of pathological image detection.
  • Users can upload pathological images to be processed to the cloud environment through an application program interface (API) or through the web interface provided by the cloud service platform. The pathology image processing system receives the pathological images to be processed, detects them, and returns the detection result to the terminal where the user is located, or stores the detection result in the cloud environment, for example, presenting it on the web interface of the cloud service platform for the user to view.
  • When the pathology image processing system is a software device, several parts of the pathology image processing system can also be deployed in different environments or devices.
  • a part of the pathology image processing system is deployed on a terminal computing device (such as a terminal server, smartphone, laptop, tablet, personal desktop computer, or smart camera), and the other part is deployed in a data center (specifically, on a server or virtual machine in the data center).
  • the data center can be a cloud data center or an edge data center, where an edge data center is a collection of edge computing devices deployed closer to the terminal computing device.
  • the scanning device is deployed with the image acquisition component of the pathology image processing system; the scanning device can scan pathological slides to obtain pathological images, segment the pathological images, and send the segmented image blocks to the data center through the network.
  • the data center is deployed with the image preprocessing component, the image analysis component, and the decision analysis component, which further process the segmented image blocks and finally obtain the analysis result.
  • the data center sends the analysis result to the computer, so that the doctor can obtain the analysis result of the pathological image.
  • this application does not restrict which parts of the pathology image processing system are deployed in the terminal computing device and which parts are deployed in the data center. In actual applications, the deployment can be adapted according to the computing capabilities of the terminal computing device or specific application requirements.
  • the scanning device can scan the pathological slide to obtain the pathological image, and upload the pathological image to the data center.
  • the data center can perform segmentation processing and subsequent detection of the pathological image.
  • the pathology image processing system can also be deployed in three parts, where one part is deployed in the terminal computing device, one part is deployed in an edge data center, and the remaining part is deployed in a cloud data center.
  • the pathological image processing system can also be separately deployed on a computing device in any environment (for example, separately deployed on a terminal computing device, or separately deployed on a computing device in a data center).
  • the computing device 400 includes a bus 401, a processor 402, a communication interface 403, and a memory 404.
  • the processor 402, the memory 404, and the communication interface 403 communicate through a bus 401.
  • the processor 402 may be a central processing unit (CPU).
  • the memory 404 may include a volatile memory, such as a random access memory (RAM).
  • the memory 404 may also include a non-volatile memory (NVM), such as a read-only memory (ROM), flash memory, HDD, or SSD.
  • the memory 404 stores executable codes included in the pathological image processing system, and the processor 402 reads the executable codes in the memory 404 to execute the pathological image processing method.
  • the memory 404 may also include an operating system and other software modules required for running processes.
  • the operating system can be LINUX™, UNIX™, WINDOWS™, etc.
  • the method for performing pathological image processing requires the use of a pre-trained artificial intelligence (AI) model.
  • the AI model is essentially an algorithm that includes a large number of parameters and calculation formulas (or calculation rules). The AI model can be trained, and the trained AI model can learn the rules and features in the training data.
  • a variety of AI models with different functions after training can be used.
  • the trained AI model used to predict the quality of the image block is called the image quality prediction model;
  • the trained AI model of style transfer is called the style transfer model;
  • the trained AI model used to classify image blocks is called the image block classification model;
  • the trained AI model used to detect suspicious lesion components is called the suspicious lesion component detection model;
  • the trained AI model used to determine the analysis result of the pathological image is called the decision model.
  • the above five models can be trained by a training system, which uses different training sets to train the image quality prediction model, style transfer model, image block classification model, suspicious lesion component detection model, and decision model.
  • the image quality prediction model, style transfer model, image block classification model, suspicious lesion component detection model, and decision model trained by the training system are deployed in the pathology image processing system, and the pathology image processing system is used for pathology image detection.
  • the pathology image processing system may only use some of the above five models.
  • the training system can only train the models that the pathological image processing system needs to use.
  • Fig. 5 is a schematic structural block diagram of a training system provided by an embodiment of the present application.
  • the training system 500 shown in FIG. 5 includes a collection component 501 and a training component 502.
  • the acquisition component 501 can acquire a training data set for training the image quality prediction model (hereinafter referred to as training data set 1), a small sample image block set for training the style transfer model, a training data set for training the image block classification model (hereinafter referred to as training data set 2), a training data set for training the suspicious lesion component detection model (hereinafter referred to as training data set 3), and a training data set for training the decision model (hereinafter referred to as training data set 4).
  • the training component 502 can use the training data sets obtained by the collection component 501 to train the AI models and obtain the corresponding trained AI models. For example, the training component 502 may first initialize the parameters of each layer in the image quality prediction model (that is, assign an initial value to each parameter), and then use the training images in training data set 1 to train the image quality prediction model until the loss function in the image quality prediction model converges or all training images in training data set 1 have been used for training.
  • the deployment mode and deployment location of the training system may refer to the deployment mode and deployment location of the aforementioned pathological image processing system.
  • the training system can also be deployed in the same environment or equipment as the pathological image processing system, or in a different environment or equipment from the pathological image processing system.
  • the training system and the pathological image processing system can also form a system together.
  • Fig. 6 is a schematic flowchart of a method for processing pathological images according to an embodiment of the present application.
  • N is a positive integer greater than or equal to 2.
  • the image quality of the image block can be one of a variety of image qualities.
  • the image quality may include normal and abnormal.
  • the image quality of each of the N image blocks may be normal or abnormal.
  • the image quality of the image block may include: normal, air bubbles, faded, and out of focus.
  • the image quality of each of the N image blocks may be any one of normal, bubble, faded, or out of focus.
  • the image quality of the image block may include multiple levels. For example, the multiple levels can be expressed as excellent, good, medium, and poor.
  • the multiple levels may be expressed as scores, such as 5, 4, 3, 2, 1, where the image block with a score of 5 has the best image quality, and the image block with a score of 1 has the worst image quality.
  • the image quality of each of the N image blocks is one of the multiple levels.
  • the image quality of the image block may be determined using the image quality prediction model trained by the training system 500 as shown in FIG. 5.
  • each of the N image blocks can be input into the image quality prediction model, which is a trained AI model, and the image quality of each of the N image blocks can be determined according to the output result of the image quality prediction model.
  • the training system 500 may train the initial image quality prediction model in a supervised learning manner, and obtain the image quality prediction model.
  • the collection component 501 can collect multiple image blocks for training, and the multiple image blocks can be obtained after segmentation of one or more pathological images.
  • the collected image blocks are processed and annotated manually or by the collection component 501 to form a training data set 1.
  • training data set 1 may include multiple training images, and each of the multiple training images may include image block data and label information, where the image block data is the data of an image block collected by the collection component 501 or the processed data of such an image block, and the label information is the actual image quality of the image block.
  • the actual image quality of the image block can be manually judged and marked in advance.
  • the training component 502 can use the training data in training data set 1 to train the initial image quality prediction model to obtain the image quality prediction model. For example, the training component 502 first initializes the parameters of each layer in the initial image quality prediction model (that is, assigns an initial value to each parameter), and then uses the training data in training data set 1 to train the initial image quality prediction model until the loss function in the initial image quality prediction model converges or all the training data in training data set 1 has been used for training; the training is then complete, and an image quality prediction model that can be used in this solution is obtained.
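  • The training procedure described above (initialize each parameter, then train until the loss converges or training data set 1 is exhausted) can be sketched framework-agnostically as follows. The one-parameter "model", loss, and update step are toy stand-ins, not the patent's actual network:

```python
# Schematic training loop following the procedure described above:
# initialize parameters, then iterate until the loss function converges
# or all training data has been used. The model here is a toy
# one-parameter predictor, purely for illustration.

def train(model_params, training_set, loss_fn, update_fn, eps=1e-4):
    prev_loss = float("inf")
    for sample, label in training_set:      # one pass over data set 1
        loss = loss_fn(model_params, sample, label)
        if abs(prev_loss - loss) < eps:     # loss has converged: stop early
            break
        model_params = update_fn(model_params, sample, label)
        prev_loss = loss
    return model_params

# Toy "model": predict a quality score as w * feature, trained by
# gradient descent on a squared-error loss.
loss_fn = lambda w, x, y: (w * x - y) ** 2
update_fn = lambda w, x, y: w - 0.1 * 2 * (w * x - y) * x

w = 0.0                                     # "assign an initial value"
w = train(w, [(1.0, 1.0)] * 50, loss_fn, update_fn)
```

  • Real training would use a deep learning framework and mini-batches; the sketch only shows the two stopping conditions named in the text.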
  • the initial image quality prediction model can use machine learning or deep learning models available in the industry that can be used for classification, such as decision trees (DT), random forests (RF), logistic regression (LR), support vector machines (SVM), convolutional neural networks (CNN), recurrent neural networks (RNN), etc.
  • the image quality of each image block of the N image blocks may not be determined in a manner based on an AI model.
  • artificial intelligence technology may not be used in the process of determining the image quality of each image block.
  • the Laplacian operator, Brenner gradient function, Tenengrad gradient function, etc. can be used to determine the sharpness of each image block. If the sharpness of an image block satisfies a preset condition, it can be determined that the image quality of the image block is normal; otherwise, the image quality of the image block is determined to be abnormal. For another example, whether an image block is out of focus can be determined according to the degree of association between a pixel in the image block and the pixels around that pixel.
  • If a pixel in the image block has a high degree of association with the pixels around it (for example, greater than a preset association degree threshold), it can be determined that the image quality of the image block is out of focus. If the degree of association between a pixel in the image block and the pixels around it is low (for example, less than the preset association degree threshold), it can be determined that the image quality of the image block is normal.
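  • A minimal sketch of the non-AI sharpness check, assuming the common variance-of-Laplacian measure: a block whose Laplacian response variance falls below a chosen threshold is treated as abnormal (e.g. out of focus). The threshold value and the tiny test blocks are illustrative only:

```python
# Sharpness check without an AI model: variance of the 4-neighbour
# Laplacian response over the interior pixels of a grayscale block.
# A low variance means few sharp transitions, i.e. a blurred block.

def laplacian_variance(block):
    h, w = len(block), len(block[0])
    responses = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            # 4-neighbour Laplacian kernel: up + down + left + right - 4*center
            lap = (block[i-1][j] + block[i+1][j] + block[i][j-1]
                   + block[i][j+1] - 4 * block[i][j])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def quality_is_normal(block, threshold=1.0):
    # Threshold is an illustrative preset condition on sharpness.
    return laplacian_variance(block) >= threshold

sharp = [[0, 0, 0, 0], [0, 9, 0, 0], [0, 0, 9, 0], [0, 0, 0, 0]]
flat = [[5] * 4 for _ in range(4)]
```

  • In practice this would run on real pixel arrays (e.g. via an image library); the Brenner and Tenengrad measures mentioned above differ only in the gradient operator used.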
  • the quality information of the pathological image may include the number of image blocks, among the N image blocks into which the pathological image is segmented, whose image quality meets a preset standard; the quality information of the pathological image is the analysis result obtained after quality prediction is performed on each image block.
  • the quality information of the pathological image may include the number of image blocks whose image quality is normal.
  • the image quality that meets the preset standard is an image quality greater than or equal to a preset level.
  • the preset level may be good.
  • the quality information of the pathological image may include the total number of image blocks with excellent or good image quality, or the quality information of the pathological image may include the number of image blocks with excellent image quality and the number of image blocks with good image quality.
  • the quality information of the pathological image may also include the total number of image blocks.
  • the number of image blocks whose image quality does not meet the preset standard can be determined according to the total number of image blocks and the number of image blocks whose image quality meets the preset standard.
  • the quality information of the pathological image may include the number of image blocks of each image quality.
  • the quality information of the pathological image may include the number of image blocks whose image quality is normal and the number of image blocks whose image quality is abnormal.
  • the quality information of the pathological image may include the number of image blocks with normal image quality, the image quality is the number of image blocks with bubbles, and the image quality is The number of faded image blocks and the number of image blocks whose image quality is out of focus.
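  • A minimal sketch of assembling this count-based quality information, assuming the quality prediction step has already produced a quality label per block (the category names follow the examples in the text; the dictionary keys are illustrative):

```python
# Illustrative sketch: summarize per-block quality labels into the
# quality information of the pathological image - the total number of
# blocks, the number of normal blocks, and the number per category.

from collections import Counter

def quality_information(block_qualities):
    counts = Counter(block_qualities)   # number of blocks per quality category
    return {
        "total_blocks": len(block_qualities),
        "normal_blocks": counts.get("normal", 0),
        "per_quality": dict(counts),
    }

info = quality_information(
    ["normal", "normal", "bubble", "out_of_focus", "normal", "faded"])
```
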
  • the quality information of the pathological image may include identity information of image blocks whose image quality meets a preset standard.
  • the pathological image is divided into P ⁇ Q image blocks, P and Q are positive integers greater than or equal to 1, and P ⁇ Q is equal to N.
  • (p, q) can be used as the identity information of the image block in the p-th row and q-th column of the P × Q image blocks, where p is a positive integer greater than or equal to 1 and less than or equal to P, and q is a positive integer greater than or equal to 1 and less than or equal to Q.
  • Suppose the image quality of an image block is classified as either normal or abnormal, and P × Q is equal to 3 × 3.
  • If the quality information of the pathological image includes (1,1), (1,2), (2,1), (3,1), and (3,2), it means that among the 3 × 3 image blocks, the image quality of the image block in the first row and first column, the image block in the first row and second column, the image block in the second row and first column, the image block in the third row and first column, and the image block in the third row and second column is normal.
  • the quality information of the pathological image may include identity information of each image quality.
  • If the quality information of the pathological image includes [(1,1), (1,2), (2,1), (3,1), (3,2)]; [(1,3), (2,2), (2,3), (3,3)], it means that among the 3 × 3 image blocks, the image quality of the image block in the first row and first column, the image block in the first row and second column, the image block in the second row and first column, the image block in the third row and first column, and the image block in the third row and second column is normal, while the image quality of the image block in the first row and third column, the image block in the second row and second column, the image block in the second row and third column, and the image block in the third row and third column is abnormal.
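  • The identity-information representation can be sketched as follows; the example grid reproduces the 3 × 3 case above, and the function name is illustrative:

```python
# Sketch of the identity-information representation: a P x Q grid of
# per-block quality labels is summarized as the (p, q) coordinates
# (1-indexed, row-major) of the blocks in each quality category.

def identities_by_quality(grid):
    ids = {}
    for p, row in enumerate(grid, start=1):
        for q, quality in enumerate(row, start=1):
            ids.setdefault(quality, []).append((p, q))
    return ids

# The 3 x 3 example from the text: normal blocks at (1,1), (1,2),
# (2,1), (3,1), (3,2); abnormal blocks at the remaining positions.
grid = [["normal",   "normal",   "abnormal"],
        ["normal",   "abnormal", "abnormal"],
        ["normal",   "normal",   "abnormal"]]
ids = identities_by_quality(grid)
```
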
  • 604 Perform style transfer on the image block to obtain an image block after the style transfer.
  • the image block before the style transfer may also be called the initial image block, and the image block after the style transfer is called the transferred image block.
  • style transfer may be performed only on image blocks whose image quality meets a preset standard. For example, suppose that the image quality of only N1 image blocks among the N image blocks meets the preset standard, where N1 is a positive integer greater than or equal to 1 and less than or equal to N. In this case, style transfer can be performed on the N1 image blocks to obtain N1 transferred image blocks.
  • style transfer may be performed on the N image blocks obtained after segmentation, to obtain N transferred image blocks, and then select from the N transferred image blocks that the image quality conforms to the preset Standard image blocks for subsequent processing.
  • style transfer may be directly performed on the collected pathological image to obtain the transferred pathological image, and then the transferred pathological image may be segmented to obtain N transferred image blocks.
  • the film preparation reagents used in producing the pathological image and the scanning machine (or camera) used in digitization will affect the final pathological image.
  • the model used when processing the image blocks (such as the image block classification model and/or the suspicious lesion component detection model) may be trained with image blocks segmented from pathological images that were produced using one or several film preparation reagents and digitized by one or several scanning machines (or cameras); the image blocks obtained after segmenting such pathological images are used as the training data.
  • the model used by the image analysis component may be trained based on image blocks of one style or image blocks with similar styles. For ease of description, the style of the image block used to train the model used by the image analysis component is referred to as the target style below.
  • the styles of the segmented N image blocks obtained in step 601 may be different from the target style.
  • If style transfer is not performed on the image blocks, it will have a certain impact on the accuracy of the subsequent image block classification and suspicious diseased cell detection, thereby affecting the final analysis result.
  • By performing style transfer, the accuracy of the classification result of the image blocks and the detection result of suspicious diseased cells can be improved, and thus the accuracy of the analysis result of the finally obtained pathological image can be improved.
  • the style transfer may be performed using the style transfer model trained by the training system 500 as shown in FIG. 5.
  • each of the N1 image blocks can be input into the style transfer model, which is a trained AI model, and N1 transferred image blocks can be obtained.
  • the training system 500 can train the style transfer model in an unsupervised learning manner.
  • the style transfer model can be an AI model.
  • the embodiments of the present application do not limit the structure adopted by the style transfer model.
  • it can be any model under the framework of generative adversarial networks (GAN), such as a deep convolutional generative adversarial network (DCGAN), or another model that can generate image blocks based on a small sample image block set.
  • the image blocks in the small sample image block set are collected by the collecting component 501.
  • the style of the image blocks in the small sample image block set is the target style of style transfer.
  • A GAN includes a generator (G) and a discriminator (D), where the generator G is used to generate a candidate image block set based on the input of the GAN, and the discriminator D is connected to the output of the generator G and is used to determine whether the candidate image block set output by the generator G is a real image block set.
  • The training of the generator G and the discriminator D is a process of alternating adversarial training.
  • An arbitrary set of image blocks is used as the input of the generator G, and the generator G outputs a candidate image block set. The discriminator D takes the candidate image block set generated by the generator G and the small sample image block set as inputs, compares the characteristics of the two sets, and outputs the probability that the candidate image block set and the small sample image block set belong to the same type of image block set (a candidate image block set of the same type as the small sample image block set is also called a real image block set; the image blocks in a real image block set and in the small sample image block set have the same or similar characteristics). The parameters in the generator G are optimized according to the output probability that the candidate image block set is a real image block set (the parameters in the discriminator D remain unchanged at this time), until the candidate image block set output by the generator G is discriminated by the discriminator D as a real image block set (that is, the probability that the candidate image block set is a real image block set is greater than a threshold). Then the parameters of each internal network layer of the discriminator D are optimized according to the probability output by the discriminator D (the parameters in the generator G remain unchanged at this time), so that the discriminator D can again discriminate that the candidate image block set output by the generator G and the small sample image block set are not of the same type.
  • The parameters in the generator G and the discriminator D are alternately optimized in this way until the generator G generates candidate image block sets that the discriminator D cannot discriminate as real or not. From the above training process, it can be seen that the alternating training of the generator G and the discriminator D is a process in which the two play a game against each other: when the candidate image block set generated by the generator G has the same or similar characteristics as the small sample image block set, that is, the candidate image block set is close to a real image block set, the discriminator D cannot accurately determine whether the input image block set is a real image block set, and the GAN training is complete.
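  • The alternating optimization can be caricatured in a few lines. The sketch below is deliberately not a working GAN: each "image" is reduced to a single style statistic (mean intensity), the generator is one parameter, and the discriminator is one threshold; real DCGANs use neural networks trained by gradient descent. All names and numbers are illustrative:

```python
# Toy illustration of the alternating G/D optimization described above.
# G's single parameter g is the style statistic of its generated blocks;
# D's single parameter is a decision threshold. Each loop iteration is
# one D step (G fixed) followed by one G step (D fixed).

REAL_STYLE = 0.8   # style statistic of the small sample (target) style

def train_gan(g=0.2, steps=20, lr=0.5):
    for _ in range(steps):
        # D step (G fixed): place the decision threshold halfway between
        # the real style statistic and the currently generated one.
        d_threshold = (REAL_STYLE + g) / 2
        # G step (D fixed): move the generated style toward the region
        # the discriminator currently accepts as real.
        g = g + lr * (d_threshold - g)
    return g

g_final = train_gan()   # ends near REAL_STYLE: D can no longer separate them
```

  • When g has converged near the real style statistic, the threshold can no longer separate generated from real, which is the toy analogue of the discriminator being unable to tell the candidate set from the real set.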
  • step 604 may not need to be performed.
  • the style of the pathological image acquired in step 601 is the same or similar to the target style.
  • the style transfer of the image block may not be performed.
  • the model used for classifying image blocks and detecting suspicious diseased cells can be adapted to image blocks of various styles. Therefore, it is not necessary to perform style transfer on the image block.
  • step 602 to step 603 may not need to be performed.
  • the N image blocks obtained after the pathological image segmentation can be directly subjected to style transfer (that is, step 604 is executed) or classification and subsequent steps (that is, step 605 to step 607) are directly executed.
  • FIG. 7 shows the image blocks before the style transfer and the image blocks after the style transfer. The color depth and contrast of the image blocks after the style transfer have changed.
  • 605 Classify multiple image blocks according to the image block classification model to obtain an image block classification result.
  • the image block classification result indicates that the type of each image block in the plurality of image blocks is the first type or the second type; an image block of the first type is called a first type image block, and an image block of the second type is called a second type image block.
  • the number of suspicious diseased cells in the first type image block is greater than or equal to a first preset number, and the number of suspicious diseased cells in the second type image block is less than the first preset number.
  • the area of the suspicious diseased cells in the first type image block is greater than or equal to a first preset area threshold.
  • the area of the suspicious diseased cells in the second type image block is smaller than the first preset area threshold.
  • the number or area of suspicious diseased cells in the first type image block is greater than a preset threshold, and the number or area of suspicious diseased cells in the second type image block is less than a preset threshold.
  • the preset threshold may include a first preset number and a first preset area.
  • In other words, an image block in which the number of suspicious diseased cells is greater than or equal to the first preset number and the area of the suspicious diseased cells is greater than or equal to the first preset area is a first type image block, and an image block in which the number of suspicious diseased cells is less than the first preset number and the area of the suspicious diseased cells is less than the first preset area is a second type image block.
  • If step 604 is performed, the multiple image blocks in step 605 refer to the image blocks obtained after performing style transfer.
  • the image blocks are classified into two types of image blocks of the first type and the second type according to the number of suspicious diseased cells and/or the area of the suspicious diseased cells.
  • the image blocks may be classified into three types: first type image blocks, second type image blocks, and third type image blocks according to the number of suspicious diseased cells and/or the area of suspicious diseased cells.
  • two preset numbers may be set: a first preset number and a second preset number, where the first preset number is greater than the second preset number. If the number of suspicious diseased cells in an image block is greater than or equal to the first preset number, the image block is a first type image block; if the number of suspicious diseased cells in an image block is less than the first preset number and greater than or equal to the second preset number, the image block is a second type image block; if the number of suspicious diseased cells in an image block is less than the second preset number, the image block is a third type image block.
  • only one preset number may be set, such as the first preset number mentioned above.
  • If the number of suspicious diseased cells in an image block is greater than or equal to the first preset number, the image block is a first-type image block; if an image block contains no suspicious diseased cells, the image block is a second-type image block; if an image block contains suspicious diseased cells but their number is less than the first preset number, the image block is a third-type image block.
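The threshold-based splits described above can be sketched as a simple rule. The function name and return convention below are illustrative assumptions, not part of the embodiment:

```python
def classify_image_block(num_suspicious_cells, first_preset_number,
                         second_preset_number=None):
    """Classify an image block by its count of suspicious diseased cells.

    With two preset numbers (first > second), blocks fall into three types.
    With only the first preset number, blocks with zero suspicious cells
    form the second type and blocks below the first threshold the third.
    Returns 1, 2, or 3 for first-, second-, or third-type image blocks.
    """
    if second_preset_number is not None:
        if num_suspicious_cells >= first_preset_number:
            return 1  # first-type image block
        if num_suspicious_cells >= second_preset_number:
            return 2  # second-type image block
        return 3      # third-type image block
    # Single-threshold variant.
    if num_suspicious_cells >= first_preset_number:
        return 1
    if num_suspicious_cells == 0:
        return 2
    return 3
```

The same rule extends naturally to area thresholds by adding an analogous comparison on the suspicious-cell area.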
  • The image block classification may be determined using the image block classification model trained by the training system 500 shown in FIG. 5.
  • The multiple image blocks may be input to the image block classification model, where the image block classification model is a trained AI model, and the first-type image blocks and the at least one second-type image block are obtained according to the output of the image block classification model.
  • The following describes how the training system 500 trains the image block classification model. For ease of description, it is assumed below that the first-type and second-type image blocks are divided according to the number of suspicious diseased cells.
  • the training system 500 can train the image block classification model in a supervised learning manner.
  • the collection component 501 can collect multiple image blocks, and the multiple image blocks can be obtained after segmentation of one or more pathological images.
  • the collected image blocks form a training data set (hereinafter referred to as training data set 2) after being annotated by the doctor.
  • Training data set 2 may include multiple pieces of training data, each of which may include image block data and label information, where the image block data is the data of an image block collected by the collection component 501 (or the data of an image block after processing), and the label information is the actual type of the image block (that is, first-type image block or second-type image block).
  • the captured image block may be marked by a doctor or pathologist.
  • the doctor can determine whether the image block is the first type image block or the second type image block according to the number of suspicious diseased cells in the image block, and use the determination result as the label information of the image block.
  • The collected image blocks may be independently marked by two or more doctors or pathologists, and their marking results are integrated to obtain the label information of the image blocks. For example, in some embodiments, if two of three doctors determine that an image block is a first-type image block and the other doctor determines that it is a second-type image block, the label information of the image block may be determined to be first-type image block. For another example, in other embodiments, if two of three doctors determine that an image block is a first-type image block and the other doctor determines that it is a second-type image block, a fourth doctor determines the type of the image block, and the fourth doctor's determination is used as the label information of the image block. In this way, the accuracy of the training data can be improved.
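The label integration described above can be sketched as a majority vote with an optional escalation to a further annotator. The function and label names below are illustrative assumptions:

```python
from collections import Counter

def merge_annotations(labels, fallback_annotator=None):
    """Merge independent per-doctor labels for one image block.

    A clear majority decides the label; on a tie, a fallback annotator
    (e.g. a fourth, more experienced doctor) is consulted if provided.
    `labels` is a list such as ["type1", "type1", "type2"].
    """
    counts = Counter(labels).most_common()
    if len(counts) == 1 or counts[0][1] > counts[1][1]:
        return counts[0][0]           # clear majority wins
    if fallback_annotator is not None:
        return fallback_annotator()   # escalate the tie
    return None                       # undecidable without escalation
```

In practice the fallback would query an additional expert rather than a callable, but the control flow is the same.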
  • the image blocks in the training data set 2 are image blocks with the same style or similar styles.
  • The training component 502 can use the training data in training data set 2 to train the initial image block classification model to obtain the image block classification model. For example, the training component 502 first initializes the parameters of each layer in the initial image block classification model (that is, assigns an initial value to each parameter), and then uses the training data in training data set 2 to train the initial image block classification model until the loss function in the initial image block classification model converges or all the training data in training data set 2 has been used for training. The training is then considered complete, and the trained model is called the image block classification model.
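The stopping rule above (train until the loss converges or the training data is exhausted) can be sketched with a toy model. The logistic-regression stand-in, the tolerance, and the epoch budget below are illustrative assumptions; the embodiment trains a deep image classifier:

```python
import math
import random

def train_until_converged(data, epochs=100, lr=0.1, tol=1e-6):
    """Minimal supervised training loop: initialize parameters, then
    iterate over the training data until the loss converges or the
    epoch budget is exhausted."""
    w, b = random.uniform(-0.1, 0.1), 0.0   # parameter initialization
    prev_loss = float("inf")
    loss = prev_loss
    for _ in range(epochs):
        loss = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))      # prediction
            loss += -(y * math.log(p + 1e-12)
                      + (1 - y) * math.log(1 - p + 1e-12))  # log loss
            w -= lr * (p - y) * x                          # gradient step
            b -= lr * (p - y)
        loss /= len(data)
        if abs(prev_loss - loss) < tol:                    # loss converged
            break
        prev_loss = loss
    return w, b, loss

random.seed(0)
# Toy separable data: label 1 when the feature is at least 0.5.
data = [(x / 10.0, 1 if x >= 5 else 0) for x in range(10)]
w, b, final_loss = train_until_converged(data)
```

The loop structure, initialization, and the converge-or-exhaust stopping condition mirror the description above.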
  • The initial image block classification model can be an existing machine learning or deep learning model used in the industry for image classification, such as Residual Networks (ResNets), the Visual Geometry Group (VGG) network, GoogLeNet, or an Inception network.
  • the image classification model can be composed of an input module, an Inception module and an output module.
  • The input module performs the following processing on the input data (that is, the image block to be classified): 7×7 convolution, 3×3 pooling, local response normalization (LRN), 1×1 convolution, 3×3 convolution, and LRN, to obtain the processed data.
  • the processed data can pass through multiple (for example, 9) Inception modules.
  • the output module can perform average pooling (AP), fully connected (FC), and Softmax activation on the data processed by the Inception module to obtain the output result.
  • the output result obtained after processing by the output module is the type of the image block to be classified.
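The output module's computation (average pooling, a fully connected layer, then Softmax) can be sketched in plain Python. The shapes, weights, and two-class setup below are illustrative assumptions:

```python
import math

def average_pool(feature_maps):
    """Global average pooling: one scalar per channel."""
    return [sum(fm) / len(fm) for fm in feature_maps]

def fully_connected(x, weights, biases):
    """Dense layer: one logit per output class."""
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b
            for row, b in zip(weights, biases)]

def softmax(logits):
    m = max(logits)                        # subtract max for stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Two channels of flattened Inception features, two block types.
feats = [[0.2, 0.4, 0.6], [1.0, 1.0, 1.0]]
pooled = average_pool(feats)
logits = fully_connected(pooled,
                         weights=[[2.0, -1.0], [-2.0, 1.0]],
                         biases=[0.0, 0.0])
probs = softmax(logits)
predicted_type = probs.index(max(probs)) + 1   # 1 = first type, 2 = second type
```

The Softmax output is a probability over block types, and the most probable type is the classification result.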
  • the detection of suspicious diseased cells may be determined using a suspicious disease component detection model trained by the training system 500 as shown in FIG. 5.
  • the at least one second type image block can be input to the suspicious lesion component detection model.
  • The suspicious lesion component detection model detects suspicious diseased cells in each second-type image block and determines the positions of the suspicious diseased cells within that image block.
  • A detection result is obtained, which includes the position information of each detected suspicious diseased cell in the corresponding second-type image block.
  • The suspicious lesion component detection model is a trained AI model that can output the location information of suspicious diseased cells; according to this location information, the image blocks containing suspicious diseased cells can be determined from the at least one second-type image block.
  • the training system 500 trains the suspicious lesion component detection model.
  • the training system 500 can train the suspicious lesion component detection model in a supervised learning manner.
  • the collection component 501 can collect multiple image blocks, and the multiple image blocks can be obtained after segmentation of one or more pathological images.
  • the collected image blocks form a training data set (ie, training data set 3) after being annotated by the doctor.
  • Training data set 3 may include multiple pieces of training data, each of which may include image block data and label information, where the image block data is the data of an image block collected by the collection component 501 (or the data of an image block after processing), and the label information is the locations of the suspicious diseased cells in the image block.
  • the captured image block may be marked by a doctor or pathologist.
  • the doctor can identify and judge whether suspicious diseased cells are included in the image block, and if it includes suspicious diseased cells, mark the location of the suspicious diseased cells to obtain the label information of the image block.
  • the collected image blocks may be independently marked by two or more doctors or pathologists. Integrate the marking results of two or more doctors or pathologists to obtain the label information of the image block.
  • For example, if the intersection over union (IoU, a detection evaluation function measuring overlap) of the bounding boxes marked by different doctors is greater than a preset threshold (for example, 0.3), and the lesion types marked by those doctors are consistent, the bounding boxes are merged by averaging. That is, a bounding box marked by only one doctor can be skipped to maintain high quality, and the merged bounding box, together with its lesion type, is the final mark on the cell.
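The IoU-based merging rule can be sketched as follows; the (x1, y1, x2, y2) box format is an illustrative assumption, and the 0.3 threshold follows the example above:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_marks(marks, threshold=0.3):
    """Merge doctors' (box, lesion_type) marks: boxes from different
    doctors that overlap above the IoU threshold with a consistent
    lesion type are averaged; boxes marked by only one doctor are
    skipped to keep the labels high quality."""
    merged, used = [], set()
    for i, (box_i, type_i) in enumerate(marks):
        if i in used:
            continue
        group = [box_i]
        for j in range(i + 1, len(marks)):
            box_j, type_j = marks[j]
            if j not in used and type_j == type_i and iou(box_i, box_j) > threshold:
                group.append(box_j)
                used.add(j)
        if len(group) > 1:  # confirmed by at least two doctors
            avg = tuple(sum(c) / len(group) for c in zip(*group))
            merged.append((avg, type_i))
    return merged
```

Each merged box plus its lesion type then serves as the final mark on the cell in the training data.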
  • Alternatively, another doctor with more experience may mark the locations of the suspicious diseased cells in the image block, and that doctor's marking result is used as the label information of the image block.
  • The suspicious lesion component detection model can be an existing machine learning or deep learning model used in the industry for object detection, such as a convolutional neural network (CNN), a region-based convolutional neural network (R-CNN), Fast R-CNN, Faster R-CNN, or a Single Shot MultiBox Detector (SSD).
  • The training component 502 can use the training data in training data set 3 to train and obtain the suspicious lesion component detection model.
  • the following takes Faster-RCNN as an example to briefly introduce the training and use process of the suspicious lesion component detection model.
  • During training, a region proposal network (RPN) is first trained, and the trained RPN is used to obtain multiple proposals.
  • The multiple proposals are then used to train Fast R-CNN (which can be considered the initial suspicious lesion component detection model).
  • The trained Fast R-CNN is the suspicious lesion component detection model.
  • the second type of image block to be detected is input into the suspicious lesion component detection model.
  • The suspicious lesion component detection model may include four modules, namely a convolution module, an RPN module, a region of interest (ROI) pooling module, and a classification module.
  • The convolution module is used to extract the feature map of the second-type image block.
  • The RPN module is used to generate proposal regions.
  • The ROI pooling module is used to collect the features extracted by the convolution module and the proposal regions generated by the RPN module, and uses the collected information to extract proposal feature maps.
  • The classification module uses the proposal feature maps extracted by the ROI pooling module to compute the category of each proposal, and performs bounding box regression to obtain the locations of the suspicious diseased cells in the second-type image block.
  • Determining the analysis result of the pathological image according to the first-type image blocks and the second-type image blocks containing suspicious diseased cells may include: determining the analysis result of the pathological image according to the first-type image blocks, the second-type image blocks containing suspicious lesion components, and the quality information of the pathological image.
  • the image quality of the image block is divided into four types: normal, bubble, faded, and out of focus, and it is assumed that the image block that meets the preset standard is an image block with a normal image quality.
  • the proportion of image blocks with normal image quality in all image blocks can be determined according to the image quality information.
  • the letter R is used below to indicate the ratio of the number of image blocks whose image quality is normal to the total number of image blocks.
  • R can be used to determine whether a pathological image is available.
  • If R is less than a preset ratio, it can be determined that the pathological image is not available, and any final result obtained from the pathological image would be unreliable.
  • If R is greater than the preset ratio, it can be determined that the pathological image is available, and the analysis result of the pathological image is determined based on the image block classification result and the suspicious diseased cell detection result.
  • If the pathological image is not available, there is no need to continue processing the segmented image blocks, and the analysis result "the pathological image is not available" is output directly.
  • If the pathological image is available, the image blocks with normal image quality can be classified and suspicious diseased cells detected, and the analysis result of the pathological image is determined according to the image block classification results and the suspicious diseased cell detection results.
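The availability check based on R can be sketched as follows; the function name and the preset ratio of 0.5 are illustrative assumptions:

```python
def check_availability(quality_labels, preset_ratio=0.5):
    """Compute R (the share of image blocks whose quality is 'normal')
    and decide whether the pathological image is available for further
    classification and detection."""
    if not quality_labels:
        return 0.0, False
    r = sum(1 for q in quality_labels if q == "normal") / len(quality_labels)
    return r, r > preset_ratio

# Quality labels for the segmented image blocks of one pathological image.
labels = ["normal", "normal", "bubble", "out_of_focus", "normal", "normal"]
r, available = check_availability(labels)
```

When `available` is false, the pipeline would output "the pathological image is not available" without classifying the blocks.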
  • the reliability of the analysis result of the pathological image may be determined according to R, or according to R, the first type image block, and the second type image block containing suspicious diseased cells.
  • the analysis result of the pathological image may be determined based on the first type image block and the second type image block containing suspicious diseased cells.
  • the first type image block and the second type image block containing suspicious diseased cells are collectively referred to as image block classification result information below.
  • Based on the correspondence between the image block classification result information and the decision results, the analysis result of the pathological image is the decision result, among multiple decision results, that corresponds to the image block classification result information.
  • Table 1 shows the correspondence between the classification result information of an image block and the decision result.
  • T11, T12, T13, T21, T22, T31, and T32 in Table 1 respectively represent different preset thresholds.
  • Num1 represents the number of image blocks of the first type
  • Num2 represents the number of image blocks of the second type
  • Num3 represents the total number of suspicious diseased cells in the second-type image blocks.
  • For example, when the image block classification result information satisfies the conditions corresponding to decision result 1 in Table 1, the analysis result of the pathological image is decision result 1.
  • According to the correspondence among R, the image block classification result information, and the decision results, the analysis result of the pathological image can be determined as the decision result, among the multiple decision results, that corresponds to the image block classification result information and R.
  • For example, Table 2 shows a correspondence among R, the image block classification result information, and the decision results.
  • T11, T12, T13, T21, T22, T31, T32, T41, T42, and T43 in Table 2 respectively represent different preset thresholds
  • Num1 represents the number of image blocks of the first type
  • Num2 represents the number of image blocks of the second type
  • Num3 represents the total number of suspicious diseased cells in the second-type image blocks
  • R represents the ratio of the number of image blocks labeled as normal to the total number of image blocks.
  • For example, when the image block classification result information and R satisfy the conditions corresponding to decision result 1 in Table 2, the analysis result of the pathological image is decision result 1.
  • Table 1 and Table 2 merely illustrate the correspondence between the image block classification result information and the decision results, and the correspondence among R, the image block classification result information, and the decision results.
  • The embodiment of the present application does not limit the correspondence between the image block classification result information and the decision results, or the correspondence among R, the image block classification result information, and the decision results.
  • The correspondence between the image block classification results and the decision results may also involve the average number of suspicious diseased cells per second-type image block, the ratio of the number of first-type image blocks to the total number of image blocks meeting the quality requirement (that is, the sum of the numbers of first-type and second-type image blocks), the ratio of the number of second-type image blocks to the total number of image blocks meeting the quality requirement, and so on.
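The correspondence between (Num1, Num2, Num3, R) and a decision result can be sketched as an ordered rule table; the thresholds and decision names below are illustrative assumptions, not the actual values of Table 1 or Table 2:

```python
def decide(num1, num2, num3, r, rules):
    """Return the decision result of the first rule whose predicate
    matches the image block classification result information and R."""
    for predicate, result in rules:
        if predicate(num1, num2, num3, r):
            return result
    return "undetermined"

# Hypothetical thresholds standing in for T11..T43 and the preset ratio.
rules = [
    (lambda n1, n2, n3, r: r < 0.5, "pathological image not available"),
    (lambda n1, n2, n3, r: n1 >= 20 or n3 >= 100, "decision result 1"),
    (lambda n1, n2, n3, r: n1 >= 5 or n3 >= 30, "decision result 2"),
    (lambda n1, n2, n3, r: True, "decision result 3"),
]
```

Evaluating the rules in order reproduces a row-by-row lookup against a decision table like Table 2.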
  • the analysis result of the pathological image may be determined by using the decision model trained by the training system 500 as shown in FIG. 5.
  • the training system 500 can train the decision model in a supervised learning manner.
  • the collection component 501 can collect multiple pathological images.
  • the collected pathological images are annotated by the doctor to form a training data set (that is, the training data set 4).
  • Training data set 4 may include multiple pieces of training data, each of which may include pathological image data and label information, where the pathological image data is derived from a pathological image collected by the collection component 501 and then segmented.
  • the label information is a judgment result corresponding to the pathological image determined by the doctor or pathologist according to the pathological image.
  • a doctor or pathologist may determine the judgment result of the pathological image for the collected pathological image, and use the judgment result as the label information.
  • Multiple doctors or pathologists may independently determine the judgment result of each collected pathological image, and the multiple judgment results are combined to determine the final judgment result of the pathological image, which is used as the label information.
  • the multiple pathological images collected by the collection component 501 for training the decision model have the same style or similar styles.
  • the pathological image data included in the training data may include image block classification result information of multiple image blocks that meet the quality requirements of the pathological image.
  • the pathological image data included in the training data may include image block classification result information of a plurality of image blocks meeting the quality requirements of the pathological image and quality information of the pathological image.
  • the image block classification result information included in the training data and/or the quality information of the pathological image may be determined by using a trained AI model.
  • the image block classification result information included in the training data and/or the quality information of the pathological image may be manually determined.
  • The training component 502 can use the training data in training data set 4 to obtain the decision model. For example, the training component 502 first initializes the parameters of each layer in the initial decision model (that is, assigns an initial value to each parameter), and then uses the training data in training data set 4 to train the initial decision model until the loss function of the initial decision model converges or all the training data in training data set 4 has been used for training. The trained initial decision model is then called the decision model.
  • The initial decision model can be an existing machine learning or deep learning model used in the industry for classification, such as a decision tree (DT), random forest (RF), logistic regression (LR), support vector machine (SVM), convolutional neural network (CNN), recurrent neural network (RNN), Faster R-CNN, or Single Shot MultiBox Detector (SSD).
  • The analysis result of the pathological image may include one or both of a squamous epithelial lesion analysis result and a glandular epithelial cell analysis result, or may indicate that the pathological image is not available.
  • The squamous epithelial lesion analysis results include: atypical squamous cells of undetermined significance; atypical squamous cells in which a high-grade intraepithelial lesion cannot be excluded; low-grade intraepithelial lesion; high-grade intraepithelial lesion; or squamous cell carcinoma is not excluded.
  • The glandular epithelial cell analysis results include: atypical glandular epithelial cells, not otherwise specified; atypical glandular epithelial cells, favor neoplastic; or adenocarcinoma.
  • This application also provides a data processing device. It should be understood that the functions included in the data processing device may be the same as those included in the aforementioned pathological image processing system, or the data processing device may include part of the functions of the aforementioned pathological image processing system; alternatively, the data processing device may include part or all of the functions of the aforementioned pathological image processing system together with part or all of the functions of the aforementioned training system.
  • Fig. 8 is a schematic structural block diagram of a data processing device provided by an embodiment of the present application.
  • the data processing device 900 shown in FIG. 8 includes an acquisition unit 901, an image analysis unit 902, and a decision analysis unit 903.
  • the acquiring unit 901 is configured to acquire multiple image blocks, which are obtained by segmentation of the pathological image to be analyzed.
  • The image analysis unit 902 is configured to input the multiple image blocks into a first analysis model to obtain a first analysis result, where the first analysis model classifies each of the multiple image blocks according to the number or area of suspicious lesion components; the first analysis result indicates whether the type of each image block is the first type or the second type, where the first type indicates that the number or area of suspicious lesion components in the image block is greater than or equal to a preset threshold, and the second type indicates that the number or area of suspicious lesion components in the image block is less than the preset threshold.
  • the first analysis model may be the image block classification model in the foregoing method embodiment, and the first analysis result is the image block classification result in the foregoing method embodiment.
  • The image analysis unit 902 is further configured to input at least one second-type image block in the first analysis result into a second analysis model to obtain a second analysis result, where the second analysis model analyzes the locations of the suspicious lesion components in each input second-type image block.
  • the second analysis model may be the suspicious lesion component detection model in the foregoing method embodiment, and the second analysis result is the detection result in the foregoing method embodiment.
  • the decision analysis unit 903 is configured to synthesize the first analysis result and the second analysis result to obtain the final analysis result of the pathological image.
  • The device may further include an image quality detection unit 904, configured to input each image block into a third analysis model to obtain a third analysis result, where the third analysis model predicts the image quality of each image block.
  • In this case, the decision analysis unit 903 is specifically configured to synthesize the first analysis result, the second analysis result, and the third analysis result to obtain the final analysis result of the pathological image.
  • The third analysis model may be the image quality prediction model in the foregoing method embodiment, and the third analysis result is the quality information in the foregoing method embodiment.
  • the acquiring unit 901 is specifically configured to acquire multiple initial image blocks formed after the pathological image to be analyzed is segmented; input each initial image block of the multiple initial image blocks to The style transfer model obtains the multiple image blocks, wherein the style transfer model converts the style of each initial image block.
  • The device may further include a first training unit 905, configured to train an initial first analysis model according to a first training data set to obtain the first analysis model, where the initial first analysis model is an artificial intelligence (AI) model, the first training data set includes multiple first training images, and the label of each first training image is the first type or the second type.
  • The device may further include a second training unit 906, configured to train an initial second analysis model according to a second training data set to obtain the second analysis model, where the initial second analysis model is an AI model, the second training data set includes multiple second training images containing suspicious lesion components, and the label of each second training image is the location information of the suspicious lesion components in that training image.
  • The device may further include a third training unit 907, configured to train an initial third analysis model according to a third training data set to obtain the third analysis model, where the initial third analysis model is an AI model, the third training data set includes multiple third training images, and the label of each third training image is the image quality type of that training image.
  • the decision analysis unit 903 is specifically configured to input the first analysis result and the second analysis result to the decision model to obtain the final analysis result of the pathological image.
  • For details of the acquisition unit 901, the image analysis unit 902, the decision analysis unit 903, the image quality detection unit 904, the first training unit 905, the second training unit 906, and the third training unit 907, reference may be made to the descriptions in the foregoing method embodiments.
  • the image analysis unit 902 can perform the above step 605 and step 606;
  • the decision analysis unit 903 can perform the above step 607;
  • the image quality detection unit 904 can perform the above step 602 and step 603.
  • The image analysis unit 902 can be described as an image analysis component, and the decision analysis unit 903 can be described as a decision analysis component.
  • the acquiring unit 901 can be further divided into an acquisition unit and a style transfer unit.
  • the acquisition unit can be described as an image acquisition component.
  • the style transfer unit and the image quality detection unit 904 can be collectively described as an image preprocessing component.
  • the first training unit 905, the second training unit 906, and the third training unit 907 may be collectively described as a training system.
  • the units in the data processing apparatus 900 shown in FIG. 8 may be implemented by different devices.
  • the first training unit 905, the second training unit 906, and the third training unit 907 can be implemented by an independent training device.
  • the training device may send the trained model to the data processing device 900.
  • the present application also provides a computing device 400 as shown in FIG. 4.
  • the processor 402 in the computing device 400 reads the executable code stored in the memory 404 to execute the aforementioned method for processing pathological images.
  • Because each unit in the data processing apparatus 900 of the present application can be separately deployed on multiple computing devices, the present application also provides a computing device system as shown in FIG. 9. The computing device system includes multiple computing devices 1000.
  • Each computing device 1000 includes a bus 1001, a processor 1002, a communication interface 1003, and a memory 1004.
  • the processor 1002, the memory 1004, and the communication interface 1003 communicate with each other through the bus 1001.
  • the processor 1002 may be a CPU.
  • the memory 1004 may include a volatile memory (English: volatile memory), such as RAM.
  • the memory 1004 may also include non-volatile memory, such as ROM, flash memory, HDD or SSD.
  • Executable code is stored in the memory 1004, and the processor 1002 executes the executable code to execute part of the method for processing an image.
  • the memory 1004 may also include an operating system and other software modules required for running processes.
  • The operating system can be LINUX™, UNIX™, WINDOWS™, etc.
  • Each computing device 1000 establishes a communication path through a communication network.
  • Any one or more of the acquisition unit 901, the image analysis unit 902, the decision analysis unit 903, the image quality detection unit 904, the first training unit 905, the second training unit 906, and the third training unit 907 may run on each computing device 1000.
  • Any computing device 1000 may be a computing device in a cloud data center, or a computing device in an edge data center, or a terminal computing device.
  • The above embodiments may be implemented, in whole or in part, by software, hardware, firmware, or any combination thereof.
  • When implemented by software, they may be implemented in whole or in part in the form of a computer program product.
  • The computer program product includes one or more computer instructions; when these computer instructions are loaded and executed on a computer, the processes or functions described in FIG. 6 according to the embodiments of the present application are produced in whole or in part.
  • the present application also provides a computer program product, the computer program product includes: computer program code, when the computer program code runs on a computer, the computer executes the implementation shown in FIG. 6 The method of any one of the examples in the example.
  • the present application also provides a non-transitory readable storage medium, the non-transitory readable storage medium stores program code, and when the program code runs on a computer, the The computer executes the method of any one of the embodiments shown in FIG. 6.
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • The division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • the technical solution of this application essentially or the part that contributes to the existing technology or the part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, including Several instructions are used to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: U disk, mobile hard disk, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disks or optical disks and other media that can store program codes. .

Abstract

本申请涉及人工智能领域,提供了一种处理图像的方法、装置及系统,该方法包括:获取多个图像块,该多个图像块由待分析的病理图像分割得到;将多个图像块输入至第一分析模型,获得第一分析结果,其中,第一分析模型根据可疑病变成分的数目或面积对所述多个图像块中的每个图像块进行分类;将第一分析结果中的至少一个第二类型图像块输入至第二分析模型,第二分析模型分析输入的每个第二类型图像块的可疑病变成分的位置,获得第二分析结果;最后,综合第一分析结果和第二分析结果获得该病理图像的最终分析结果。上述方法通过两个分析模型对病理图像进行分析,可以减少因人为主观因素造成的对病理图像分析结果判断错误的问题。

Description

处理图像的方法、装置及系统 技术领域
本申请涉及人工智能(artificial intelligence,AI)领域,更具体地,涉及处理图像的方法、装置及系统。
背景技术
随着图像数字化和数字存储技术的快速发展,当前通常将病理玻片扫描成数字化的病理图像,医生可以不通过显微镜而直接在计算机上阅读病理图像。但是,即使是有经验的医生面对大量的数字化病理图像,在进行诊断时,也会发生误诊的情况。因此,如何更好地处理数字化的病理图像,以使得根据病理图像进行分析或者诊断的结果更为准确,成为了一个亟待解决的问题。
发明内容
本申请实施例提供一种处理图像的方法、装置及系统,可以对病理图像进行判断,减少因人为主观因素造成的判断错误,也可以帮助没有相关经验或者相关经验较少的医生获得该病理图像的分析结果,有助于医生对病人的病情进行诊断。
第一方面,本申请实施例提供一种处理病理图像的方法,包括:获取多个图像块,该多个图像块由待分析的病理图像分割得到;将该多个图像块输入至第一分析模型,获得第一分析结果,其中,该第一分析模型根据可疑病变成分的数目或面积对该多个图像块中的每个图像块进行分类,该第一分析结果指示该每个图像块的类型为第一类型或第二类型,该第一类型表示图像块中的可疑病变成分的数目或面积大于或等于预设阈值,该第二类型表示图像块中的可疑病变成分的数目或面积小于该预设阈值;将该第一分析结果中的至少一个第二类型图像块输入至第二分析模型,获得第二分析结果,其中,该第二分析模型分析输入的每个第二类型图像块的可疑病变成分的位置;综合该第一分析结果和该第二分析结果获得该病理图像的最终分析结果。上述第一分析模型也可以称为图像块分类模型。上述第二分析模型也可以称为可疑病变成分检测模型。类型为第一类型的图像块也可以称为第一类型图像块。类型为第二类型的图像块也可以称为第二类型图像块。
上述技术方案可以利用第一分析模型和第二分析模型对病理图像进行处理,得到该病理图像的分析结果。这样可以减少因人为主观因素造成的判断错误。另外,也可以使得对病理图像的解读过于依赖于有经验的医生。从而可以帮助没有相关经验或者相关经验较少的医生也能获得该病理图像的分析结果,有助于医生对病人的病情进行诊断。
结合第一方面,在第一方面的一种可能的实现方式中,该方法还包括:输入每个图像块至第三分析模型,获得第三分析结果,其中,该第三分析模型预测每个图像块的图像质量;综合该第一分析结果和该第二分析结果获得该病理图像的最终分析结果,包括:综合该第一分析结果、该第二分析结果和该第三分析结果获得该病理图像的最终分析结果。利用病理图像的质量信息可以辅助确定该病理图像的分析结果。
该第三分析模型也可以称为图像质量预测模型。
结合第一方面,在第一方面的一种可能的实现方式中,获取多个图像块,包括:获取待分析的病理图像被分割后形成的多个初始图像块;输入该多个初始图像块中的每个初始图像块至风格迁移模型,获得多个图像块,其中,该风格迁移模型对每个初始图像块的风格进行转换。通过风格迁移将图像块的风格进行统一,可以提高图像块的分类结果和可疑病变成分检测结果的准确性,从而可以提高该病理图像的分析结果的准确性。
结合第一方面,在第一方面的一种可能的实现方式中,该方法还包括:根据第一训练数据集对初始第一分析模型进行训练,获得第一分析模型,其中,该初始第一分析模型为人工智能AI模型中的一种,该第一训练数据集包括多个第一训练图像,每个第一训练图像的标签为第一类型或第二类型。第一训练数据集也称为训练数据集2。
该第一分析模型也可以称为图像块分类模型。该图像块分类模型可以是利用多个有丰富经验的医生或者病理学家进行标记后得到的训练图像进行训练得到的。因此该图像块分类模型对图像块进行分类的结果准确性较高。
结合第一方面,在第一方面的一种可能的实现方式中,该方法还包括:根据第二训练数据集对初始第二分析模型进行训练,获得第二分析模型,其中,该初始第二分析模型为人工智能AI模型中的一种,该第二训练数据集包括多个包含可疑病变成分的第二训练图像,每个第二训练图像的标签为可疑病变成分在训练图像中的位置信息。第二训练数据集也称为训练数据集3。
该第二分析模型也可以称为可疑病变成分检测模型。可疑病变成分检测模型可以是利用多个有丰富经验的医生或者病理学家进行标记后得到的训练图像进行训练得到的。因此该可疑病变成分检测模型对图像块中的可疑病变成分进行检测的检测结果的准确率较高。
结合第一方面,在第一方面的一种可能的实现方式中,该方法还包括:根据第三训练数据集对初始第三分析模型进行训练,获得第三分析模型,其中,该初始第三分析模型为人工智能AI模型中的一种,该第三训练数据集包括多个第三训练图像,每个第三训练图像的标签为该每个第三训练图像的图像质量类型。第三训练数据集也称为训练数据集1。
结合第一方面,在第一方面的一种可能的实现方式中,该综合该第一分析结果和该第二分析结果获得该病理图像的最终分析结果,包括:输入该第一分析结果和该第二分析结果至判决模型,获得该病理图像的最终分析结果。
结合第一方面,在第一方面的一种可能的实现方式中,该病理图像为宫颈细胞的病理图像,该可疑病变成分为阳性宫颈细胞。
第二方面,本申请实施例提供一种数据处理装置,包括:获取单元,用于获取多个图像块,该多个图像块由待分析的病理图像分割得到;图像分析单元,用于将该多个图像块输入至第一分析模型,获得第一分析结果,其中,该第一分析模型根据可疑病变成分的数目或面积对该多个图像块中的每个图像块进行分类,该第一分析结果指示该每个图像块的类型为第一类型或第二类型,该第一类型表示图像块中的可疑病变成分的数目或面积大于或等于预设阈值,该第二类型表示图像块中的可疑病变成分的数目或面积小于该预设阈值;该图像分析单元,还用于将该第一分析结果中的至少一个第二类型图像块输入至第二分析模型,获得第二分析结果,其中,该第二分析模型分析输入的每个第二类型图像块的可疑病变成分的位置;决策分析单元,用于综合该第一分析结果和该第二分析结果获得该病理图像的最终分析结果。
结合第二方面,在第二方面的一种可能的实现方式中,该装置还包括图像质量检测单元,用于输入该每个图像块至第三分析模型,获得第三分析结果,其中,该第三分析模型预测该每个图像块的图像质量;该决策分析单元,具体用于综合该第一分析结果、该第二分析结果和该第三分析结果获得该病理图像的最终分析结果。
结合第二方面,在第二方面的一种可能的实现方式中,该获取单元,具体用于获取该待分析的病理图像被分割后形成的多个初始图像块;输入该多个初始图像块中的每个初始图像块至风格迁移模型,获得该多个图像块,其中,该风格迁移模型对该每个初始图像块的风格进行转换。
结合第二方面,在第二方面的一种可能的实现方式中,该装置还包括第一训练单元,用于根据第一训练数据集对初始第一分析模型进行训练,获得该第一分析模型,其中,该初始第一分析模型为人工智能AI模型中的一种,该第一训练数据集包括多个第一训练图像,每个第一训练图像的标签为第一类型或第二类型。
结合第二方面,在第二方面的一种可能的实现方式中,该装置还包括第二训练单元,用于根据该第二训练数据集对初始第二分析模型进行训练,获得该第二分析模型,其中,该初始第二分析模型为人工智能AI模型中的一种,该第二训练数据集包括多个包含可疑病变成分的第二训练图像,每个第二训练图像的标签为该可疑病变成分在训练图像中的位置信息。
结合第二方面,在第二方面的一种可能的实现方式中,该装置还包括第三训练单元,用于根据该第三训练数据集对初始第三分析模型进行训练,获得该第三分析模型,其中,该初始第三分析模型为人工智能AI模型中的一种,该第三训练数据集包括多个第三训练图像,每个第三训练图像的标签为该每个第三训练图像的图像质量类型。
结合第二方面,在第二方面的一种可能的实现方式中,该决策分析单元具体用于输入该第一分析结果和该第二分析结果至判决模型,获得该病理图像的最终分析结果。
第三方面,本申请提供一种计算设备系统,包括至少一个存储器和至少一个处理器,所述至少一个存储器,用于存储计算机指令;当所述至少一个处理器执行所述计算机指令时,所述计算设备系统执行第一方面或第一方面的任意一种可能的实现方式提供的方法。
第四方面,本申请提供一种非瞬态的可读存储介质,所述非瞬态的可读存储介质被计算设备执行时,所述计算设备执行前述第一方面或第一方面的任意一种可能的实现方式中提供的方法。该存储介质中存储了程序。该存储介质包括但不限于易失性存储器,例如随机访问存储器,非易失性存储器,例如快闪存储器、硬盘(英文:hard disk drive,缩写:HDD)、固态硬盘(英文:solid state drive,缩写:SSD)。
第五方面,本申请提供一种计算机程序产品,所述计算机程序产品包括计算机指令,在被计算设备执行时,所述计算设备执行前述第一方面或第一方面的任意可能的实现方式中提供的方法。该计算机程序产品可以为一个软件安装包,在需要使用前述第一方面或第一方面的任意可能的实现方式中提供的方法的情况下,可以下载该计算机程序产品并在计算设备上执行该计算机程序产品。
附图说明
图1是根据本申请实施例提供的病理图像处理系统的示意性结构框图。
图2是根据本申请实施例提供的另一种病理图像处理系统的示意性结构框图。
图3是本申请实施例提供的一种病理图像处理系统的部署示意图。
图4是根据本申请实施例提供的一种计算设备的示意性结构框图。
图5是本申请实施例提供的一种训练系统的示意性结构框图。
图6是根据本申请实施例提供的一种处理图像的方法的示意性流程图。
图7是根据本申请实施例提供的风格迁移前的图像块和风格迁移后的图像块。
图8是根据本申请实施例提供的一种数据处理装置的示意性结构框图。
图9是根据本申请实施例提供的一种计算设备系统的示意性结构框图。
具体实施方式
下面将结合附图,对本申请中的技术方案进行描述。
本申请将围绕可包括多个设备、组件、模块等的系统来呈现各个方面、实施例或特征。应当理解和明白的是,各个系统可以包括另外的设备、组件、模块等,并且/或者可以并不包括结合附图讨论的所有设备、组件、模块等。此外,还可以使用这些方案的组合。
另外,在本申请实施例中,“示例的”、“例如”等词用于表示作例子、例证或说明。本申请中被描述为“示例”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用示例的一词旨在以具体方式呈现概念。
本申请实施例描述的网络架构以及业务场景是为了更加清楚地说明本申请实施例的技术方案,并不构成对于本申请实施例提供的技术方案的限定,本领域普通技术人员可知,随着网络架构的演变和新业务场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。
在本说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
本申请中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中a,b,c可以是单个,也可以是多个。
病理检查是用于检查机体器官、组织或细胞中的病理改变的病理形态学检查方法。病理检查可以为医生对疾病的诊断提供依据。
形态学检查是常见的病理检查方法之一。形态学检查可以分为脱落细胞学检查和活体组织检查。脱落细胞学检查是采集人体器官脱落的细胞,将采集到的细胞经过染色处理后,通过观察这些染色后的细胞进行病理检查。常见的脱落细胞学检查包括通过痰涂片检查肺癌,尿液离心后涂片检查泌尿道肿瘤等。活体组织检查是从患者身体的病变部位取出小块组织制成病理玻片,通过观察病理玻片内的细胞和/或组织的形态结构变化进行病理检查。
病理检查的对象可以包括任何能够用于病理检查的成分,例如,细胞、细胞碎片、细胞核、细胞内的物质(例如肺泡内巨噬细胞含铁血黄素沉着、细胞内物质沉积)、其他可以通过显微镜观察到的组织或物质(例如纤维蛋白、蛋白样物质)等。为了便于描述,本申请实施例中将病理检查的对象称为成分。
应理解,对于不同的病理检查,病理检查的对象,即成分,是不同的。例如:对于宫颈癌的病理检查,常采用宫颈细胞作为成分,对其制成的病理玻片进行病理检测。可选的,在病理检查时,从结果划分的角度,成分可以分为两类,一类为正常成分;另一类为与正常成分不同的成分(例如:病变成分、可疑病变成分)。以细胞为例,一类可以是正常的细胞;另一类可以是不正常的细胞,例如,可以是发生了病变的细胞或者可疑病变细胞,又如,可以是细胞碎片,又如,可以是含有特定物质(例如铁血黄素沉着、物质沉积)的细胞。
本申请实施例中所称的病理玻片是由脱落细胞或活体组织等的成分制作而成的病理标本,对于一些病理检查,需将待检查的成分涂抹在玻片上,因此,形成的病理玻片也可以称为涂片。本申请实施例中所称的病理图像是对病理玻片进行数字化处理(例如:扫描、拍摄)得到的图像。
当前的病理检查通常是专业的医生利用显微镜对病理玻片进行观察或者通过计算机对病理图像进行观察,进而给出该病理图像的检查结果。该病理图像的检查结果通常会被用于进行疾病的诊断。仅依靠医生进行病理检查的方法,一方面大大增加了医生的工作量;另一方面,由于一些医生可能需要承接大量的病理检查任务或者一些医生业务水平不高等原因,还易出现医生给出的检查结果为错误的情况。
随着人工智能(artificial intelligent,AI)技术的进步,AI技术在医学领域的应用越来越深入。本申请将AI技术应用于对病理图像的处理,可以根据病理图像获得病理图像的检查结果,以用于辅助医学诊断。
本申请提供一种病理图像处理系统。该病理图像处理系统可以对采集到的病理图像进行分析,获得输出结果,该输出结果即为该病理图像对应的检查结果。医生可以利用该病理图像处理系统的输出结果判断病人的病情,进行辅助诊断,进行术前分析等。
为了便于描述,以下实施例假设需要处理的病理图像是利用新柏氏液基细胞学检测(Thinprep cytologic test,TCT)技术得到的宫颈细胞的病理图像。对宫颈细胞的病理图像进行病理检查,常用于宫颈癌预防检查、宫颈癌病情确认等。按照宫颈细胞的正常与否进行划分,宫颈细胞的病理图像中的成分(即宫颈细胞)可分为两类,一类为可疑病变细胞(通常称为阳性细胞),另一类可以是正常细胞(通常称为阴性细胞)。
图1是根据本申请实施例提供的病理图像处理系统的示意性结构框图。如图1所示的病理图像处理系统100包括:图像采集组件101,图像预处理组件102、图像分析组件103和决策分析组件104。
图像采集组件101,用于获取病理图像,并将该病理图像进行分割,得到N个图像块,N为大于或等于2的正整数。
本申请实施例对图像采集组件101采集病理图像的具体实现方式并不限定。例如,图像采集组件101可以对病理玻片进行扫描,得到该病理图像。又如,图像采集组件101可以对病理玻片进行拍照,得到该病理图像。又如,图像采集组件101可以从其他设备或装置中接收病理图像。
本申请实施例对图像采集组件101分割该病理图像的方式并不限定。可选的,在一些实现方式中,N的取值可以是一个预设或者人工设定的固定值。例如,该固定值可以为1000,1500,2000等。可选的,在另一些实现方式中,N的取值可以与病理图像的分辨率相关。例如,如果病理图像的分辨率小于500×500,则N的取值可以为500;如果病理图像的分辨率大于500×500且小于或等于1500×1500,则N的取值可以为1000;如果病理图像的分辨率大于1500×1500,则N的取值可以为2000。可选的,在另一些实现方式中,图像块的大小可以是预设或者人工设定的固定值。例如,每个图像块的大小可以是50×50。N的取值可以根据病理图像的大小和预设或者人工设定的图像块的大小确定。每个图像块中至少能够包括多个细胞。
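上述"按预设边长把病理图像切分为N个图像块"的做法,可以用如下代码勾勒(一个最小示意:假设使用numpy数组表示图像,函数名`split_image`为示例命名,边缘不足一块的部分直接丢弃,并非本申请限定的实现方式):

```python
import numpy as np

def split_image(image: np.ndarray, block: int = 50):
    """将病理图像按固定边长切分为图像块;边缘不足一块的部分直接丢弃(仅作示意)。"""
    h, w = image.shape[:2]
    rows, cols = h // block, w // block
    patches = [
        image[r * block:(r + 1) * block, c * block:(c + 1) * block]
        for r in range(rows) for c in range(cols)
    ]
    # 返回图像块列表以及 (行数, 列数),后者可用于给出每个图像块的 (p, q) 身份信息
    return patches, (rows, cols)

# 示例:一张 500x500 的模拟"病理图像"按 50x50 切分,N = 10x10 = 100
fake_slide = np.zeros((500, 500, 3), dtype=np.uint8)
patches, grid = split_image(fake_slide, block=50)
```

其中图像块在网格中的行列坐标即可充当后文所述的 (p, q) 形式的图像块身份信息。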
图像预处理组件102,可以用于确定该N个图像块中的每个图像块的图像质量。图像预处理组件102,还可以用于对图像块进行风格迁移,得到进行了风格迁移后的图像块。
风格迁移(style transfer)也可以称为图像风格迁移,是指将一个目标图像的风格与参考图像的风格进行融合,融合后得到的输出图像的风格与参考图像的风格相同或相似。图像的风格是指图像呈现的颜色、对比度等的面貌。在医学领域,由于不同的病理图像可能是由不同的扫描仪器扫描获得,或者不同的病理玻片在制作时采用的染色材料不同,因此获得的病理图像可能呈现出不同的风格。利用风格迁移技术,可以使得一种风格的病理图像转换成与参考病理图像的风格相同或相似的病理图像。经过风格迁移后的病理图像在用于后续的图像分析时获得的结果会更为准确。
图像分析组件103,可以用于获取经过图像预处理组件102处理后的图像块,并根据已训练完成的AI模型对获取的图像块进行分类,确定获取到的图像块的类型。
可选的,在一些实施例中,图像分析组件103可以确定获取的图像块为第一类型图像块或第二类型图像块,其中该第一类型图像块中可疑病变细胞的数目大于或等于第一预设阈值,该第二类型图像块中的该可疑病变细胞的数目小于该第一预设阈值。
可选的,在另一些实施例中,图像分析组件103可以确定获取的图像块为第一类型图像块、第二类型图像块或第三类型图像块,其中该第一类型图像块中的可疑病变细胞的数目大于或等于第一预设阈值,第二类型图像块中的可疑病变细胞的数目大于或等于第二预设阈值且小于第一预设阈值,第三类型图像块中的可疑病变细胞的数目小于该第二预设阈值。
图像分析组件103,还用于对分类获得的第二类型图像块进行进一步检测,检测第二类型图像块中的可疑病变细胞。
决策分析组件104,可以用于获取图像分析组件103确定的第一类型图像块以及包含可疑病变细胞的第二类型图像块。决策分析组件104可以根据第一类型图像块和包含可疑病变细胞的第二类型图像块,确定该病理图像对应的目标输出结果。可选的,决策分析组件104还可以获取图像预处理组件102输出的图像质量信息。在此情况下,决策分析组件104可以根据第一类型图像块、包含可疑病变细胞的第二类型图像块以及该图像质量信息,确定与该病理图像对应的目标输出结果。
可选的,在另一种实施例中还提供了一种病理图像处理系统200,如图2所示,病理图像处理系统200包括:图像采集组件201,图像分析组件202和决策分析组件203。
图像采集组件201的功能与图像采集组件101的功能相同。图像分析组件202的功能和图像分析组件103的功能相同。决策分析组件203,可以用于获取图像分析组件202确定的第一类型图像块以及包含可疑病变细胞的第二类型图像块。决策分析组件203可以根据第一类型图像块和包含可疑病变细胞的第二类型图像块,确定该病理图像对应的目标输出结果。
应理解,上述对病理图像处理系统的组件的划分仅仅是根据功能进行示例性地划分,本申请对病理图像采集系统的内部组件或模块的具体划分方式不作任何限定。
图3是本申请实施例提供的一种病理图像处理系统的部署示意图。该病理图像处理系统中的全部组件可以部署在云环境中,云环境是云计算模式下利用基础资源向用户提供云服务的实体。云环境包括云数据中心和云服务平台,所述云数据中心包括云服务提供商拥有的大量基础资源(包括计算资源、存储资源和网络资源),云数据中心包括的计算资源可以是大量的计算设备(例如服务器)。病理图像处理系统可以由云数据中心中的服务器实现;病理图像处理系统也可以由创建在云数据中心中的虚拟机实现。病理图像处理系统还可以是独立地部署在云数据中心中的服务器或者虚拟机上的软件装置,该软件装置用于实现病理图像处理系统的功能。该软件装置还可以分布式地部署在多个服务器上、或者分布式地部署在多个虚拟机上、或者分布式地部署在虚拟机和服务器上。
如图3所示,病理图像处理系统可以由云服务提供商在云服务平台抽象成一种云服务提供给用户,用户在云服务平台购买该云服务后,云环境利用病理图像处理系统向用户提供病理图像检测的云服务,用户可以通过应用程序接口(application program interface,API)或者通过云服务平台提供的网页界面上传待处理的病理图像至云环境,由病理图像处理系统接收待处理的病理图像,对待处理的病理图像进行检测,检测结果返回至用户所在的终端,或者检测结果存储在云环境,例如:呈现在云服务平台的网页界面上供用户查看。
当病理图像处理系统为软件装置时,病理图像处理系统的几个部分还可以分别部署在不同的环境或设备中,例如:病理图像处理系统中的一部分部署在终端计算设备(如:终端服务器、智能手机、笔记本电脑、平板电脑、个人台式电脑、智能摄像机),另一部分部署在数据中心(具体部署在数据中心中的服务器或虚拟机上),数据中心可以是云数据中心,数据中心也可以是边缘数据中心,边缘数据中心是部署在距离终端计算设备较近的边缘计算设备的集合。
部署在不同环境或设备的病理图像处理系统的各个部分之间协同实现病理图像处理的功能,例如,在一种场景下,扫描设备中部署有病理图像处理系统中的图像采集组件,扫描设备可以对病理玻片进行扫描获得病理图像,并对病理图像进行分割,将分割后得到的图像块通过网络发送至数据中心,数据中心上部署有图像预处理组件、图像分析组件和决策分析组件,这些组件进一步地对分割后的图像块进行处理,最终获得分析结果,数据中心将分析结果发送至计算机,由此,医生可以获得病理图像的分析结果。应理解,本申请不对病理图像处理系统的哪些部分部署在终端计算设备和哪些部分部署在数据中心进行限制性的划分,实际应用时可根据终端计算设备的计算能力或具体应用需求进行适应性的部署。例如,在另一种实现方式中,扫描设备可以对病理玻片进行扫描,得到病理图像,并将病理图像上传至数据中心。数据中心可以对该病理图像进行分割处理以及后续的检测。值得注意的是,在一种实施例中,病理图像处理系统还可以分三部分部署,其中,一部分部署在终端计算设备,一部分部署在边缘数据中心,一部分部署在云数据中心。
当病理图像处理系统为软件装置时,病理图像处理系统也可以单独部署在任意环境的一个计算设备上(例如:单独部署在一个终端计算设备上或者单独部署在数据中心中的一个计算设备上),如图4所示,计算设备400包括总线401、处理器402、通信接口403和存储器404。处理器402、存储器404和通信接口403之间通过总线401通信。其中,处理器402可以为中央处理器(英文:central processing unit,缩写:CPU)。存储器404可以包括易失性存储器(英文:volatile memory),例如随机存取存储器(英文:random access memory,缩写:RAM)。存储器404还可以包括非易失性存储器(英文:non-volatile memory,缩写:NVM),例如只读存储器(英文:read-only memory,缩写:ROM),快闪存储器,HDD或SSD。存储器404中存储有病理图像处理系统所包括的可执行代码,处理器402读取存储器404中的该可执行代码以执行病理图像处理的方法。存储器404中还可以包括操作系统等其他运行进程所需的软件模块。操作系统可以为LINUX™、UNIX™、WINDOWS™等。
在本申请的一个实施例中,执行病理图像处理的方法需要使用预先训练完成的人工智能(artificial intelligence,AI)模型,AI模型本质是一种算法,其包括大量的参数和计算公式(或计算规则),AI模型可以被训练,训练后的AI模型可以学习到训练数据中的规律和特征。本申请实施例中可使用多种经过训练后具有不同功能的AI模型,例如:用于对图像块的质量进行预测的训练完成的AI模型,称为图像质量预测模型;用于对图像块进行风格迁移的训练完成的AI模型,称为风格迁移模型;用于对图像块进行分类的训练完成的AI模型,称为图像块分类模型;用于检测可疑病变成分的训练完成的AI模型,称为可疑病变成分检测模型;用于确定病理图像的分析结果的训练完成的AI模型,称为判决模型。上述五个模型可由训练系统进行训练,训练系统分别采用不同的训练集对图像质量预测模型、风格迁移模型、图像块分类模型、可疑病变成分检测模型和判决模型进行训练。经训练系统训练完成的图像质量预测模型、风格迁移模型、图像块分类模型、可疑病变成分检测模型和判决模型被部署于病理图像处理系统,由病理图像处理系统用于对病理图像进行检测。
可以理解的是,在一些实施例中,病理图像处理系统可以仅使用上述五个模型中的部分模型。在此情况下,训练系统可以只训练病理图像处理系统需要使用的模型即可。
图5是本申请实施例提供的一种训练系统的示意性结构框图。如图5所示的训练系统500包括采集组件501和训练组件502。
采集组件501可以获取用于训练图像质量预测模型的训练数据集(以下简称训练数据集1),用于训练风格迁移模型的小样本图像块集,用于训练图像块分类模型的训练数据集(以下简称训练数据集2),用于训练可疑病变成分检测模型的训练数据集(以下简称训练数据集3),用于训练判决模型的训练数据集(以下简称训练数据集4)。
训练组件502可以利用采集组件501获取到的训练数据集对AI模型进行训练,得到相应的AI模型。例如,训练组件502可以先对图像质量预测模型中的每层参数进行初始化(即,为每个参数赋予一个初始值),进而利用训练数据集1中的训练图像对图像质量预测模型进行训练,直到图像质量预测模型中的损失函数收敛或者训练数据集1中所有的训练图像都被用于训练。
应理解,训练系统的部署方式和部署位置可以参照前述病理图像处理系统的部署方式和部署位置。训练系统还可以与病理图像处理系统部署在相同的环境或设备中,也可以与病理图像处理系统部署在不同的环境或设备中。在另一种实施例中,训练系统和病理图像处理系统还可以共同构成一个系统。
下面结合图6具体描述本申请提供的处理图像的方法,该方法可由前述病理图像处理系统执行。
图6是根据本申请实施例提供的一种处理病理图像的方法的示意性流程图。
601,获取病理图像,并将该病理图像进行分割,得到N个图像块,N为大于或等于2的正整数。
602,确定该N个图像块中的每个图像块的图像质量。
图像块的图像质量可以是多种图像质量中的一种。例如,在一些实施例中,图像质量可以包括正常和不正常。在此情况下,该N个图像块中的每个图像块的图像质量可以是正常或者不正常。又如,在另一些实施例中,图像块的图像质量可以包括:正常、气泡、褪色、失焦。在此情况下,该N个图像块中的每个图像块的图像质量可以为正常、气泡、褪色或失焦中的任一个。又如,在另一些实施例中,图像块的图像质量可以包括多个等级。例如,该多个等级可以用优、良、中、差表示。又如,该多个等级可以表示为得分,例如5、4、3、2、1,其中得分为5的图像块的图像质量最好,得分为1的图像块的图像质量最差。在此情况下,该N个图像块中的每个图像块的图像质量为该多个等级中的一个。
可选的,在一些实施例中,图像块的图像质量可以利用如图5所示的训练系统500训练的图像质量预测模型确定。例如,可以将N个图像块中的每个图像块输入至图像质量预测模型,该图像质量预测模型是经过训练的AI模型,根据该图像质量预测模型的输出结果,可以确定该N个图像块中的每个图像块的图像质量。
下面将对训练系统500如何训练得到该图像质量预测模型进行简单介绍。训练系统500可以采用监督学习(supervised learning)的方式训练初始图像质量预测模型,获得该图像质量预测模型。采集组件501可以采集多个用于训练的图像块,该多个图像块可以是一个或多个病理图像分割后得到的。采集到的图像块经过人工或采集组件501进行处理和标注后构成一个训练数据集1。该训练数据集1中可以包括多个训练图像,该多个训练图像中的每个训练图像可以包括一个图像块数据以及标签信息,其中该图像块数据是采集组件501采集到的一个图像块的数据或者经过处理后的一个图像块的数据,标签信息是该图像块的实际的图像质量。图像块的实际的图像质量可以由人工预先进行判断并标记。训练组件502可以利用该训练数据集1中的训练数据对初始图像质量预测模型进行训练,得到该图像质量预测模型。例如,训练组件502首先对初始图像质量预测模型中的每层的参数进行初始化(即,为每个参数赋予一个初始值),进而利用训练数据集1中的训练数据对该初始图像质量预测模型进行训练,直到该初始图像质量预测模型中的损失函数收敛或者训练数据集1中所有的训练数据被用于训练,则训练完成,获得可用于本方案的图像质量预测模型。
初始图像质量预测模型可采用业界现有的一些可用于分类的机器学习模型或者深度学习模型,例如:决策树(decision tree,DT)、随机森林(random forest,RF)、逻辑回归(logistic regression,LR)、支持向量机(support vector machine,SVM)、卷积神经网络(convolutional neural network,CNN)、循环神经网络(recurrent neural network,RNN)等中的任一个。
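上述"初始化参数、迭代训练、直到损失函数收敛或训练数据用尽即停止"的监督训练流程,可以用一个极简的逻辑回归玩具示例勾勒(数据、学习率与收敛阈值均为假设值,仅示意训练机制,并非文中任一具体模型的实现):

```python
import numpy as np

rng = np.random.default_rng(0)

# 玩具训练数据:一维特征,标签 0/1(仅作示意,并非真实图像质量数据)
x = rng.normal(0.0, 1.0, 200)
y = (x > 0).astype(float)

w, b = 0.0, 0.0             # 初始化参数(即"为每个参数赋予一个初始值")
lr, prev_loss = 0.1, np.inf
for epoch in range(500):     # 反复使用训练数据
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))      # 前向预测
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    if abs(prev_loss - loss) < 1e-6:            # 损失函数收敛则停止训练
        break
    prev_loss = loss
    w -= lr * np.mean((p - y) * x)              # 梯度下降更新参数
    b -= lr * np.mean(p - y)
```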
可选的,在另一些实施例中,该N个图像块中的每个图像块的图像质量也可以不利用基于AI模型的方式来确定。换句话说,确定该每个图像块的图像质量的过程中也可以不利用人工智能技术。例如,可以利用拉普拉斯算子、Brenner梯度函数、Tenengrad梯度函数等确定每个图像块的清晰度。若一个图像块的清晰度满足预设条件,则可以确定该图像块的图像质量为正常,否则确定该图像块的图像质量为不正常。又如,可以根据图像块中的一个像素和该像素周围的像素的关联度来确定该图像块是否是失焦。如果图像块中的一个像素和该像素周围的像素的关联度较高(例如大于一个预设的关联度阈值)则可以确定该图像块的图像质量为失焦。如果图像块中的一个像素和该像素周围的像素的关联度较低(例如小于一个预设的关联度阈值)则可以确定该图像块的图像质量为正常。
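上文提到的基于拉普拉斯算子的清晰度判断,可以示意如下(手工实现 3x3 拉普拉斯滤波并以响应方差衡量清晰度,数据为构造的模拟图像,阈值需按实际数据另行确定):

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def laplacian_variance(gray: np.ndarray) -> float:
    """对灰度图做 3x3 拉普拉斯滤波并返回响应的方差;方差越大通常图像越清晰。"""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):                 # 用平移累加实现 3x3 滤波
        for j in range(3):
            out += LAPLACIAN[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(out.var())

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))           # 高频内容多,模拟清晰图像
blurred = np.ones((64, 64)) * 0.5      # 几乎无纹理,模拟失焦/模糊图像
```

实际使用时可将方差与一个预设阈值比较,满足阈值则判定图像质量为正常。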
603,根据每个图像块的图像质量,确定该病理图像的质量信息。
可选的,在一些实施例中,该病理图像的质量信息可以包括该病理图像被分割的N个图像块中图像块的图像质量符合预设标准的图像块数目,病理图像的质量信息即为各个图像块进行质量预测后获得的分析结果。
例如,若图像块的图像质量分为正常和不正常,则符合预设标准的图像质量为正常。在此情况下,该病理图像的质量信息可以包括图像质量为正常的图像块数目。
又如,若图像块的图像质量分为正常、气泡、褪色、失焦,则符合预设标准的图像质量为正常。在此情况下,该病理图像的质量信息可以包括图像质量为正常的图像块数目。
又如,若图像块的图像质量分为多个等级(例如优、良、中、差),则符合预设标准的图像质量为大于或等于一个预设等级的图像质量。例如,该预设等级可以是良。在此情况下,该病理图像的质量信息可以包括图像质量为优和良的图像块总数,或者,该病理图像的质量信息可以包括图像质量为优的图像块数目和图像质量为良的图像块数目。
可选的,在一些实施例中,该病理图像的质量信息中除了图像质量符合预设标准的图像块数目外,还可以包括图像块的总数目。这样,可以根据图像块的总数和图像质量符合预设标准的图像块数目,确定图像质量不符合该预设标准的图像块数目。
可选的,在另一些实施例中,该病理图像的质量信息中可以包括每个图像质量的图像块数目。
例如,若图像块的图像质量分为正常和不正常,则该病理图像的质量信息可以包括图像质量为正常的图像块数目和图像质量为不正常的图像块数目。
又如,若图像块的图像质量分为正常、气泡、褪色、失焦,则该病理图像的质量信息可以包括图像质量为正常的图像块数目,图像质量为气泡的图像块数目,图像质量为褪色的图像块数目以及图像质量为失焦的图像块数目。
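上述"统计每种图像质量的图像块数目"的质量信息,可以用简单的计数实现示意(质量预测结果为假设数据):

```python
from collections import Counter

# 假设的各图像块质量预测结果(仅作示意)
block_quality = ["正常", "正常", "气泡", "失焦", "正常", "褪色", "正常"]

quality_info = Counter(block_quality)   # 每种图像质量的图像块数目
normal_count = quality_info["正常"]     # 符合预设标准(此处假定标准为"正常")的图像块数目
total = len(block_quality)              # 图像块总数
```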
可选的,在另一些实施例中,该病理图像的质量信息中可以包括图像质量符合预设标准的图像块的身份信息。
例如,该病理图像被分割为P×Q个图像块,P和Q为大于或等于1的正整数且P×Q等于N。那么(p,q)可以作为表示P×Q个图像块中的第p行第q列的图像块的身份信息,p为大于或等于1且小于或等于P的正整数,q为大于或等于1且小于或等于Q的正整数。
假设图像块的图像质量分为正常和不正常且P×Q等于3×3。若该病理图像的质量信息中包括(1,1),(1,2),(2,1),(3,1)和(3,2),则表示3×3个图像块中的第一行第一列的图像块,第一行第二列的图像块,第二行第一列的图像块,第三行第一列的图像块和第三行第二列的图像块的图像质量为正常。
可选的,在另一些实施例中,该病理图像的质量信息中可以包括每个图像质量的身份信息。
例如,假设图像块的图像质量分为正常和不正常且P×Q等于3×3。若该病理图像的质量信息中包括[(1,1),(1,2),(2,1),(3,1)和(3,2)];[(1,3),(2,2),(2,3),(3,3)],则表示3×3个图像块中的第一行第一列的图像块,第一行第二列的图像块,第二行第一列的图像块,第三行第一列的图像块和第三行第二列的图像块的图像质量为正常;第一行第三列的图像块,第二行第二列的图像块,第二行第三列的图像块和第三行第三列的图像块的图像质量为不正常。
604,对图像块进行风格迁移,得到风格迁移后的图像块。在另一种实施例中,进行风格迁移前的图像块也可以称为初始图像块,进行风格迁移后的图像块称为图像块。
可选的,在一些实施例中,可以仅对图像质量符合预设标准的图像块进行风格迁移。例如,假设N个图像块中只有N1个图像块的图像质量符合预设标准,N1为大于或等于1且小于或等于N的正整数。在此情况下,可以对该N1个图像块进行风格迁移,得到N1个迁移后图像块。
可选的,在另一些实施例中,可以对分割后得到的N个图像块进行风格迁移,得到N个迁移后图像块,然后再从N个迁移后图像块中选择出图像质量符合预设标准的图像块进行后续处理。
可选的,在另一些实施例中,可以直接对采集到的病理图像进行风格迁移,得到迁移后的病理图像,然后再对迁移后的病理图像进行分割,获得N个迁移后图像块。
病理图像在制片时使用的制片试剂以及在数字化处理时使用的扫描机器(或照相机)都会对最终得到的病理图像造成影响。在对图像块进行处理时使用的模型(例如图像块分类模型和/或可疑病变成分检测模型)可能是使用一种或者几种制片试剂以及一种或者几种扫描机器(或照相机)得到的病理图像分割后得到的图像块作为训练数据确定的。图像分析组件使用的模型可能是基于一种风格的图像块或者几种风格近似的图像块训练得到的。为了便于描述,以下将用于训练图像分析组件使用的模型的图像块的风格称为目标风格。步骤601中获取到的分割后的N个图像块的风格与目标风格可能并不相同。因此,如果不对图像块进行风格迁移,则会对后续的图像块分类以及检测可疑病变细胞的准确性造成一定影响,从而影响最终的分析结果。通过将图像块的风格转换为该目标风格可以提高图像块分类结果和可疑病变细胞检测结果的准确性,进而提高最终得到的病理图像的分析结果的准确性。
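本申请中的风格迁移由训练好的风格迁移模型完成;作为不依赖AI模型的简化近似示意,可以用逐通道均值/标准差匹配把图像块的颜色统计对齐到目标风格(Reinhard式颜色迁移思路的退化版,函数名与数据均为假设,并非本申请的风格迁移模型):

```python
import numpy as np

def match_color_stats(src: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """逐通道把 src 的均值和标准差对齐到 ref,作为风格统一的简化近似。"""
    out = np.empty_like(src, dtype=float)
    for ch in range(src.shape[2]):
        s = src[..., ch].astype(float)
        r = ref[..., ch].astype(float)
        std = s.std() if s.std() > 1e-6 else 1.0
        out[..., ch] = (s - s.mean()) / std * r.std() + r.mean()
    return np.clip(out, 0, 255)        # 保持在合法像素范围内

rng = np.random.default_rng(0)
src = rng.integers(0, 100, (32, 32, 3))     # 偏暗的待迁移图像块
ref = rng.integers(100, 200, (32, 32, 3))   # 目标风格(较亮)的参考图像块
aligned = match_color_stats(src, ref)
```

对齐后图像块的颜色深浅、对比度向目标风格靠拢,与图7所示效果的方向一致。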
为了便于描述,以下假设需要进行风格迁移的是N1个质量符合预设标准的图像块。
可选的,在一些实施例中,风格迁移可以利用如图5所示的训练系统500训练的风格迁移模型确定。例如,可以将该N1个图像块中的每个图像块输入至风格迁移模型,该风格迁移模型是经过训练的AI模型,根据该风格迁移模型的输出结果,可以得到N1个迁移后图像块。
下面将对训练系统500如何训练该风格迁移模型进行简单介绍。训练系统500可以采用无监督学习的方式训练该风格迁移模型。风格迁移模型可以采用AI模型。本申请实施例并不限定该风格迁移模型采用的结构。例如可以是生成对抗网络(generative adversarial networks,GAN)框架下的任意一种模型,如深度卷积生成对抗网络(deep convolutional generative adversarial networks,DCGAN),或者其他基于小样本图像块集中各个图像块的特征生成新的图像块的AI模型。小样本图像块集中的图像块是由采集组件501采集的。小样本图像块集中的图像块的风格是风格迁移的目标风格。
下面描述一下GAN的原理:
GAN中包括生成器G(Generator)和判别器D(Discriminator),其中,生成器G用于基于GAN的输入生成候选图像块集,判别器D(Discriminator)连接生成器G的输出端,用于判别生成器G输出的候选图像块集是否为真实图像块集。
在GAN中,生成器G和判别器D的训练是一个交替对抗的过程:将一个任意的图像块集作为生成器G的输入,生成器G输出候选图像块集;判别器D将生成器G生成的候选图像块集与小样本图像块集作为输入,对候选图像块集与小样本图像块集的特征进行比对,输出候选图像块集与小样本图像块集属于同一类型的图像块集的概率(与小样本图像块集为同一类型的候选图像块集也称为真实图像块集,真实图像块集与小样本图像块集中的图像块有相同或相似的特征)。根据输出的候选图像块集为真实图像块集的概率对生成器G中的参数进行优化(此时判别器D中的参数不变),直至生成器G输出的候选图像块集被判别器D判别为真实图像块集(即候选图像块集为真实图像块集的概率大于阈值);随后判别器D根据其输出的概率对内部各个网络层的参数进行优化(此时生成器G中的参数不变),使得判别器D重新可以判别出生成器G输出的候选图像块集与小样本图像块集不属于同一类。生成器G和判别器D中的参数如此交替优化,直至判别器D无法判别生成器G生成的候选图像块集是否为真实图像块集。从上述训练过程可以发现,生成器G与判别器D交替训练的过程是二者互相博弈的过程;当生成器G生成的候选图像块集与小样本图像块集具备相同或相似的特征,也就是说候选图像块集接近于真实图像块集,判别器D不能准确判别输入的图像块集是否为真实图像块集时,GAN训练完成。
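上述生成器与判别器"交替优化、互相博弈"的机制,可以用一个一维玩具示例勾勒:真实数据用服从 N(3, 0.5) 的标量代替图像块集,生成器和判别器均为单层线性/逻辑模型,梯度为手工推导(数值与结构均为假设,仅示意交替训练机制,真实的风格迁移GAN远比此复杂):

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda s: 1.0 / (1.0 + np.exp(-s))

w, b = 1.0, 0.0      # 生成器 G: x = w*z + b
u, c = 1.0, 0.0      # 判别器 D: D(x) = sigmoid(u*x + c)
lr = 0.05

for step in range(2000):
    real = rng.normal(3.0, 0.5, 64)          # "真实图像块集"的一维替身
    z = rng.normal(0.0, 1.0, 64)
    fake = w * z + b                         # 生成器输出的"候选图像块集"

    # --- 优化判别器 D(此时 G 的参数不变):提高判真/判假的准确性 ---
    pr, pf = sig(u * real + c), sig(u * fake + c)
    u -= lr * (np.mean((pr - 1) * real) + np.mean(pf * fake))
    c -= lr * (np.mean(pr - 1) + np.mean(pf))

    # --- 优化生成器 G(此时 D 的参数不变):让 D 把假样本判为真 ---
    pf = sig(u * (w * z + b) + c)
    w -= lr * np.mean((pf - 1) * u * z)
    b -= lr * np.mean((pf - 1) * u)
```

训练中生成器的偏置 b 会被推向真实分布的均值附近,即生成样本逐渐接近"真实图像块集"。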
可选的,在一些实施例中,步骤604也可以不需要执行。例如,步骤601获取病理图像的风格与目标风格是相同或相似的。在此情况下,可以不对图像块进行风格迁移。又如,对图像块进行分类以及可疑病变细胞检测时使用的模型可以适应各种风格的图像块。因此,也可以不需要对图像块进行风格迁移。
可选的,在一些实施例中,步骤602至步骤603也可以不需要执行。在此情况下,对病理图像分割后得到的N个图像块可以直接进行风格迁移(即执行步骤604)或者,直接执行分类以及后续步骤(即步骤605至步骤607)。
图7示出了风格迁移前的图像块和风格迁移后的图像块,经过风格迁移后的图像块的颜色深浅和对比度发生了变化。
605,根据图像块分类模型,对多个图像块进行分类,获得图像块分类结果。
具体地,图像块分类结果指示多个图像块中的每个图像块的类型为第一类型或第二类型,类型为第一类型的图像块称为第一类型图像块,类型为第二类型的图像块称为第二类型图像块。
可选的,在一些实施例中,第一类型图像块可以是一个也可以是多个,该第一类型图像块中的可疑病变细胞数目大于或等于第一预设数目阈值。第二类型图像块中的可疑病变细胞数目小于该第一预设数目。
可选的,在另一些实施例中,该第一类型图像块中的可疑病变细胞面积大于或等于第一预设面积阈值。第二类型图像块中的可疑病变细胞面积小于该第一预设面积。
可选的,在另一些实施例中,该第一类型图像块中的可疑病变细胞数目或面积大于预设阈值,第二类型图像块中的可疑病变细胞数目或面积小于预设阈值。该预设阈值可以包括第一预设数目和第一预设面积。
可选的,在一些实施例中,只要一个图像块中可疑病变细胞数目大于或等于该第一预设数目或者可疑病变细胞面积大于或等于该第一预设面积,则该图像块为该第一类型图像块。换句话说,只有可疑病变细胞数目小于该第一预设数目且可疑病变细胞面积小于该第一预设面积的图像块才为该第二类型图像块。
可选的,在另一些实施例中,只要一个图像块中的可疑病变细胞数目小于该第一预设数目或者可疑病变细胞面积小于该第一预设面积,则该图像块为该第二类型图像块。换句话说,只有可疑病变细胞数目大于或等于该第一预设数目且可疑病变细胞面积大于或等于该第一预设面积的图像块才为该第一类型图像块。
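上述按数目/面积与预设阈值的"或""且"两种组合规则划分图像块类型的逻辑,可以示意如下(阈值均为假设的示例值):

```python
def classify_block(count: int, area: float,
                   count_th: int = 5, area_th: float = 100.0,
                   require_both: bool = False) -> str:
    """根据可疑病变细胞数目/面积把图像块分为"第一类型"或"第二类型"。

    require_both=False:数目或面积任一达到阈值即为第一类型;
    require_both=True :数目与面积均达到阈值才为第一类型。
    """
    hit_count = count >= count_th
    hit_area = area >= area_th
    first = (hit_count and hit_area) if require_both else (hit_count or hit_area)
    return "第一类型" if first else "第二类型"
```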
可以理解的是,如果执行了步骤604,那么步骤605中的多个图像块是指进行了风格迁移后得到的图像块。
在步骤605中是根据可疑病变细胞数目和/或可疑病变细胞面积将图像块分为第一类型图像块和第二类型图像块两类。在另一些实施例中,还可以根据可疑病变细胞数目和/或可疑病变细胞面积将图像块分为第一类型图像块、第二类型图像块和第三类型图像块三类。
例如,在一些实施例中,可以设置两个预设数目,第一预设数目和第二预设数目,其中第一预设数目大于第二预设数目。如果一个图像块中的可疑病变细胞数目大于或等于该第一预设数目,则该图像块为第一类型图像块;如果一个图像块中的可疑病变细胞数目小于该第一预设数目且大于或等于第二预设数目,则该图像块为第二类型图像块;如果一个图像块中的可疑病变细胞数目小于该第二预设数目,则该图像块为第三类型图像块。
又如,在另一些实施例中,可以仅设置一个预设数目,例如上述第一预设数目。在此情况下,如果一个图像块中的可疑病变细胞数目大于或等于该第一预设数目,则该图像块为第一类型图像块;如果一个图像块没有可疑病变细胞,则该图像块为第二类型图像块;如果一个图像块有可疑病变细胞且可疑病变细胞数目小于该第一预设数目,则该图像块为第三类型图像块。
可选的,在一些实施例中,图像块分类可以利用如图5所示的训练系统500训练的图像块分类模型确定。例如,可以输入该多个图像块至该图像块分类模型,其中,该图像块分类模型为经过训练的AI模型,根据该图像块分类模型的输出结果,获得该第一类型图像块和该至少一个第二类型图像块。
下面将对训练系统500如何训练图像块分类模型进行简单介绍。为了便于描述,以下假设第一类型图像块和第二类型图像块是根据可疑病变细胞数量划分的。训练系统500可以采用监督学习(supervised learning)的方式训练该图像块分类模型。采集组件501可以采集多个图像块,该多个图像块可以是一个或多个病理图像分割后得到的。采集到的图像块经过医生标注后构成一个训练数据集(以下简称训练数据集2)。该训练数据集2中可以包括多个训练数据,该多个训练数据中的每个训练数据可以包括一个图像块数据以及标签信息,其中该图像块数据是采集组件501采集到的一个图像块的数据或者经过处理后的一个图像块的数据,标签信息是该图像块实际的类型(即第一类型图像块或第二类型图像块)。
可选的,在一些实施例中,采集到的图像块可以由一个医生或者病理学家进行标记。医生可以根据图像块中的可疑病变细胞数量,确定该图像块是第一类型图像块还是第二类型图像块,并将确定结果作为该图像块的标签信息。
可选的,在另一些实施例中,采集到的图像块可以由两个或者更多的医生或者病理学家独立进行标记。综合两个或者更多的医生或病理学家的标记结果,得到该图像块的标签信息。例如,在一些实施例中,若三个医生中的两个医生确定一个图像块是第一类型图像块,另一个医生确定该图像块是第二类型图像块,则可以确定该图像块的标签信息可以是第一类型图像块。又如,在另一些实施例中,若三个医生中的两个医生确定一个图像块是第一类型图像块,另一个医生确定该图像块是第二类型图像块,则可以由第四个医生来确定该图像块的类型。第四个医生的确定结果作为该图像块的标记信息。这样,可以提高训练数据的准确性。
可选的,在一些实施例中,该训练数据集2中的图像块是具有相同风格或者相似风格的图像块。
训练组件502可以利用该训练数据集2中的训练数据对初始图像块分类模型进行训练,得到该图像块分类模型。例如,训练组件502首先对该初始图像块分类模型中的每层的参数进行初始化(即,为每个参数赋予一个初始值),进而利用训练数据集中的训练数据对该初始图像块分类模型进行训练,直到该初始图像块分类模型中的损失函数收敛或者该训练数据集2中所有的训练数据被用于训练,则认为训练完成,训练后的模型称为图像块分类模型。
初始图像块分类模型可采用业界现有的一些可用于图像分类的机器学习模型或者深度学习模型,例如:残差网络(Residual Networks,ResNets)、视觉几何组(Visual Geometry Group,VGG)网络、谷歌网络(Google Networks,GoogLeNet)、创始(Inception)网络等。
假设该图像分类模型是基于Inception网络第一版(Inception Version 1,Inception-v1)进行训练后确定的。在此情况下,该图像分类模型可以由输入模块,Inception模块和输出模块组成。输入模块对输入的数据(即待分类的图像块)进行以下处理:7*7卷积,3*3池化,局部响应归一化(Local Response Normalization,LRN),1*1卷积,3*3卷积和LRN,得到处理后的数据。然后该处理后的数据可以经过多个(例如9个)Inception模块。输出模块可以对Inception模块处理后的数据进行平均池化(average pooling,AP),全连接(fully connected,FC),Softmax激活,得到输出结果。输出模块处理后得到的输出结果就是该待分类的图像块的类型。
606,根据可疑病变成分检测模型,对该至少一个第二类型图像块进行检测,确定该至少一个第二类型图像块中的包含可疑病变细胞的图像块。
可选的,在一些实施例中,可疑病变细胞的检测可以利用如图5所示的训练系统500训练的可疑病变成分检测模型确定。可以输入该至少一个第二类型图像块至该可疑病变成分检测模型,可疑病变成分检测模型分别对每个第二类型图像块中的可疑病变细胞进行检测,确定可疑病变细胞在第二类型图像块中的位置,获得检测结果,检测结果包括检测到的可疑病变细胞在对应的第二类型图像块中的位置信息。其中,该可疑病变成分检测模型为经过训练的AI模型,该可疑病变成分检测模型可以输出可疑病变细胞的位置信息;根据该可疑病变细胞的位置信息可以确定该至少一个第二类型图像块中的包含可疑病变细胞的图像块。
下面将对训练系统500如何训练该可疑病变成分检测模型进行简单介绍。训练系统500可以采用监督学习(supervised learning)的方式训练该可疑病变成分检测模型。采集组件501可以采集多个图像块,该多个图像块可以是一个或多个病理图像分割后得到的。采集到的图像块经过医生标注后构成一个训练数据集(即训练数据集3)。该训练数据集3中可以包括多个训练数据,该多个训练数据中的每个训练数据可以包括一个图像块数据以及标签信息,其中该图像块数据是采集组件501采集到的一个图像块的数据或者经过处理后的一个图像块的数据,标签信息是该图像块中的可疑病变细胞的位置。
可选的,在一些实施例中,采集到的图像块可以由一个医生或者病理学家进行标记。医生可以识别判断该图像块中是否包括可疑病变细胞,若包括可疑病变细胞,则标记该可疑病变细胞的位置,得到该图像块的标签信息。
可选的,在另一些实施例中,采集到的图像块可以由两个或者更多的医生或者病理学家独立进行标记。综合两个或者更多的医生或病理学家的标记结果,得到该图像块的标签信息。可选的,在一些实施例中,如果多个医生标注的边界框之间的重叠度(intersection-over-union,IoU)大于一个预设阈值(例如0.3),且这些边界框的病变类型一致,则将每对这样的边界框取平均进行合并。也就是说,可以跳过仅由一名医生标记的边界框以保证标注质量,合并后的边界框连同其病变类型作为该细胞的最终标注。可选的,在另一些实施例中,若多个医生标记的结果不同,则可以由另一个具有更丰富经验的医生标记图像块中的可疑病变细胞的位置。该医生的标记结果作为该图像块的标记信息。
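上述标注合并中用到的重叠度 IoU 以及"IoU 大于阈值则按坐标取平均合并"的规则,可以示意如下(边界框格式假定为 (x1, y1, x2, y2),阈值0.3沿用文中示例值):

```python
def iou(a, b):
    """计算两个 (x1, y1, x2, y2) 边界框的重叠度 IoU(交集面积 / 并集面积)。"""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def merge_if_agree(a, b, threshold=0.3):
    """若两位医生标注的框 IoU 超过阈值,则按坐标平均合并为最终标注,否则返回 None。"""
    if iou(a, b) > threshold:
        return tuple((pa + pb) / 2 for pa, pb in zip(a, b))
    return None   # 不一致的标注可交由更有经验的医生裁决

box_a = (0, 0, 10, 10)
box_b = (2, 2, 12, 12)
```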
该可疑病变成分检测模型可采用业界现有的一些可用于物体检测的机器学习模型或者深度学习模型,例如:卷积神经网络(convolutional neural network,CNN)、区域卷积神经网络(region-based CNN,R-CNN)、快速RCNN(Fast-RCNN)、加快RCNN(Faster-RCNN)、单发多框检测(Single Shot MultiBox Detector,SSD)等中的任一个。
训练组件502可以利用该训练数据集3中的训练数据得到该可疑病变成分检测模型。下面以Faster-RCNN为例,对该可疑病变成分检测模型的训练和使用过程进行简单介绍。
训练过程
利用训练数据集3中的训练数据,得到区域生成网络(Region Proposal Network,RPN);利用该RPN得到多个建议(proposal);利用该多个proposal,训练Fast-RCNN(可以认为该Fast-RCNN是一个初始可疑病变成分检测模型)。训练好的Fast-RCNN就是该可疑病变成分检测模型。
使用过程
待检测的第二类型图像块输入该可疑病变成分检测模型。该可疑病变成分检测模型可以包括四个模块,分别为:卷积模块、RPN模块、兴趣区域(region of interest,ROI)池化(pooling)模块和分类模块。卷积模块用于提取该第二类型图像块的特征(feature map);RPN模块用于生成建议区域(region proposal);ROI池化模块用于收集卷积模块提取的特征和RPN模块生成的建议区域,利用收集到的信息提取建议特征(proposal feature map);分类模块利用ROI池化模块提取的建议特征计算建议类别,进行边框回归(Bounding Box Regression),得到第二类型图像块中的可疑病变细胞的位置。
607,根据第一类型图像块和包含可疑病变细胞的第二类型图像块,确定该病理图像的分析结果。
可选的,在一些实施例中,该根据第一类型图像块和包含可疑病变细胞的第二类型图像块,确定该病理图像的分析结果可以包括根据第一类型图像块、包含可疑病变成分的第二类型图像块和该病理图像的质量信息,确定该病理图像的分析结果。
为了便于描述,以下假设图像块的图像质量分为正常、气泡、褪色和失焦四种,并且假设符合预设标准的图像块为图像质量为正常的图像块。可以根据图像质量信息确定出图像质量为正常的图像块占全部图像块的比例。为了便于描述,以下使用字母R表示图像质量为正常的图像块数目占全部图像块数目的比例。
在一些实施例中,可以根据R来确定病理图像是否可用。
例如,在一些实施例中,若R小于一个预设比例,则可以直接确定该病理图像的分析结果为病理图像不可用。换句话说,该病理图像分割得到的N个图像块中没有足够多的满足质量要求的图像块。这样,利用该病理图像得到的最终结果也是不可信的。若R大于该预设比例,则可以确定该病理图像可用,并继续根据图像块分类结果和可疑病变细胞检测结果确定该病理图像的分析结果。可选的,在一些实施例中,可以先根据该病理图像的质量信息确定病理图像是否可用,若病理图像不可用,则不需要继续处理分割后的图像块,而直接输出分析结果,该分析结果为病理图像不可用;若病理图像可用,则可以将图像质量为正常的图像块进行分类以及可疑病变细胞检测,并根据图像块分类结果和可疑病变细胞检测结果确定该病理图像的分析结果。
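上述"先计算质量为正常的图像块占比 R,再与预设比例比较以判断病理图像是否可用"的控制流,可以示意如下(预设比例 0.6 为假设值):

```python
def slide_usable(quality_info: dict, min_ratio: float = 0.6):
    """根据质量为"正常"的图像块占比 R 判断病理图像是否可用,返回 (是否可用, R)。"""
    total = sum(quality_info.values())
    ratio = quality_info.get("正常", 0) / total if total else 0.0
    return ratio >= min_ratio, ratio

usable, r = slide_usable({"正常": 8, "气泡": 1, "失焦": 1})   # R = 0.8,可用
bad, r_bad = slide_usable({"正常": 2, "褪色": 8})             # R = 0.2,不可用
```

若判定不可用,后续的图像块分类与可疑病变细胞检测即可直接跳过。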
在另一些实施例中,可以根据R,或者,根据R、该第一类型图像块和该包含可疑病变细胞的第二类型图像块来确定该病理图像的分析结果的可信度。
可选的,在一些实施例中,可以根据该第一类型图像块和该包含可疑病变细胞的第二类型图像块来确定该病理图像的分析结果。为了便于描述,以下将该第一类型图像块和该包含可疑病变细胞的第二类型图像块统称为图像块分类结果信息。
可选的,在一些实施例中,可以根据图像块分类结果信息和判决结果的对应关系,确定该病理图像的分析结果为多个判决结果中与该图像块分类结果信息对应的判决结果。例如,表1是一个图像块分类结果信息和判决结果的对应关系。
表1
| Num1 | Num2 | Num3 | 判决结果 |
| --- | --- | --- | --- |
| Num1<T11 | Num2>T21 | Num3<T31 | 判决结果1 |
| T11≤Num1<T12 | T21≥Num2>T22 | T31≤Num3<T32 | 判决结果2 |
| T12≤Num1 | T22≥Num2 | T32≤Num3 | 判决结果3 |
表1中的T11、T12、T21、T22、T31和T32分别表示不同的预设阈值,Num1表示第一类型图像块数目,Num2表示第二类型图像块数目,Num3表示第二类型图像块中的可疑病变细胞总数。
例如,如表1所示,若第一类型图像块数目小于预设阈值T11,第二类型图像块数目大于预设阈值T21且第二类型图像块中的可疑病变细胞总数小于预设阈值T31,则该病理图像的分析结果为判决结果1。
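表1所示的查表判决逻辑可以示意为一组区间规则(各阈值与"判决结果"名称均为假设的示例值,且未覆盖所有组合):

```python
def lookup_decision(num1, num2, num3,
                    t11=10, t12=30, t21=200, t22=100, t31=50, t32=150):
    """按表1风格的区间规则返回判决结果(简化示意,未匹配任何规则时单独返回)。"""
    if num1 < t11 and num2 > t21 and num3 < t31:
        return "判决结果1"
    if t11 <= num1 < t12 and t22 < num2 <= t21 and t31 <= num3 < t32:
        return "判决结果2"
    if num1 >= t12 and num2 <= t22 and num3 >= t32:
        return "判决结果3"
    return "无匹配规则"
```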
可选的,在另一些实施例中,可以根据R、图像块分类结果信息和判决结果的对应关系,确定该病理图像的分析结果为多个判决结果中与该图像块分类结果信息和R对应的判决结果。例如,表2是一个图像块分类结果信息和判决结果的对应关系。
表2
| Num1 | Num2 | Num3 | R | 判决结果 |
| --- | --- | --- | --- | --- |
| Num1<T11 | Num2>T21 | Num3<T31 | R≥T41 | 判决结果1 |
| T11≤Num1<T12 | T21≥Num2>T22 | T31≤Num3<T32 | T42≤R<T41 | 判决结果2 |
| T12≤Num1 | T22≥Num2 | T32≤Num3 | T43≤R<T42 | 判决结果3 |
| / | / | / | R<T43 | 判决结果4 |
表2中的T11、T12、T21、T22、T31、T32、T41、T42和T43分别表示不同的预设阈值,Num1表示第一类型图像块数目,Num2表示第二类型图像块数目,Num3表示第二类型图像块中的可疑病变细胞总数,R表示标签为正常的图像块数目占全部图像块数目的比例。
例如,如表2所示,若第一类型图像块数目小于预设阈值T11,第二类型图像块数目大于预设阈值T21,第二类型图像块中的可疑病变细胞总数小于预设阈值T31且R大于或等于预设阈值T41,则该病理图像的分析结果为判决结果1。
又如,如表2所示,若R小于预设阈值T43,则可以不需要确定第一类型图像块数目、第二类型图像块数目和第二类型图像块中的可疑病变细胞总数,直接确定该病理图像的分析结果为判决结果4。
可以理解的是,表1和表2只是图像块分类结果信息和判决结果的对应关系,或者,R、图像块分类结果信息和判决结果的对应关系的示意。本申请实施例对图像块分类结果信息和判决结果的对应关系,或者,R、图像块分类结果信息和判决结果的对应关系并不进行限定。例如,在一些实施例中,图像块分类结果和判决结果的对应关系中还可以包括一个第二类型图像块中的平均可疑病变细胞数目,第一类型图像块数目占满足质量要求的图像块总数(即第一类型图像块数目和第二类型图像块数目的和)的比例,或者第二类型图像块数目占满足质量要求的图像块总数的比例等。
可选的,在另一些实施例中,该病理图像的分析结果可以利用如图5所示的训练系统500训练的判决模型确定。例如,可以输入R和图像块分类结果信息,或者,输入图像块分类结果信息至该判决模型,其中,该判决模型为经过训练的AI模型,判决模型输出该病理图像的分析结果。
下面将对训练系统500如何训练该判决模型进行简单介绍。训练系统500可以采用监督学习(supervised learning)的方式训练该判决模型。采集组件501可以采集多个病理图像。采集到的病理图像经过医生标注后构成一个训练数据集(即该训练数据集4)。该训练数据集4中可以包括多个训练数据,该多个训练数据中的每个训练数据可以包括病理图像数据以及标签信息,其中该病理图像数据是采集组件501采集到的一个病理图像分割后得到的多个图像块的数据或者经过处理后的多个图像块的数据,该标签信息是医生或病理学家根据该病理图像确定的与该病理图像对应的判决结果。
可选的,在一些实施例中,采集到的病理图像可以由一个医生或者病理学家确定该病理图像的判决结果,将判决结果作为标签信息。
可选的,在另一些实施例中,采集到的病理图像可以由多个医生或者病理学家独立确定该病理图像的判决结果,结合该多个判决结果,确定该病理图像最终的判决结果,将判决结果作为标签信息。
可选的,在一些实施例中,采集组件501采集的用于训练该判决模型的多个病理图像具有相同风格或者相似风格。
可选的,在一些实施例中,训练数据中包括的病理图像数据可以包括该病理图像的多个满足质量要求的图像块的图像块分类结果信息。
可选的,在另一些实施例中,训练数据中包括的病理图像数据可以包括该病理图像的多个满足质量要求的图像块的图像块分类结果信息以及该病理图像的质量信息。
可选的,在一些实施例中,训练数据中包括的图像块分类结果信息和/或该病理图像的质量信息可以是利用已经训练好的AI模型确定的。
可选的,在另一些实施例中,训练数据中包括的图像块分类结果信息和/或该病理图像的质量信息可以是人工确定的。
训练组件502可以利用该训练数据集4中的训练数据得到该判决模型。例如,训练组件502首先对初始判决模型中的每层的参数进行初始化(即,为每个参数赋予一个初始值),进而利用训练数据集4中的训练数据对初始判决模型进行训练,直到初始判决模型中的损失函数收敛或者该训练数据集4中所有的训练数据被用于训练,则将训练后的初始判决模型称为判决模型。
初始判决模型可采用业界现有的一些可用于分类的机器学习模型或者深度学习模型,例如:决策树(decision tree,DT)、随机森林(random forest,RF)、逻辑回归(logistic regression,LR)、支持向量机(support vector machine,SVM)、卷积神经网络(convolutional neural network,CNN)、循环神经网络(recurrent neural network,RNN)、加快R-CNN(Faster Region-CNN,Faster R-CNN)、单发多框检测(Single Shot MultiBox Detector,SSD)等中的任一个。
还以TCT宫颈癌细胞检测为例,在一些实施例中,该病理图像的分析结果可以包括鳞状上皮病变分析结果和/或腺上皮细胞分析结果中的一个或全部或者病理图像不可用。该鳞状上皮病变分析结果包括不典型鳞状上皮细胞-意义不明确、不典型鳞状上皮细胞-不排除高级别上皮内病变、低级别上皮内病变、高级别上皮内病变、或者鳞状细胞癌。该腺上皮细胞分析结果包括不典型腺上皮细胞-非特异性,不典型腺上皮细胞-倾向癌变,或者腺癌。
本申请还提供一种数据处理装置,应理解,该数据处理装置包含的功能可以与前述病理图像处理系统包含的功能相同,或者该数据处理装置可以包含前述病理图像处理系统中的一部分功能;或者该数据处理装置还可以包含前述病理图像处理系统中的部分或全部功能以及包含前述训练系统中的部分或全部功能。
图8是本申请实施例提供的一种数据处理装置的示意性结构框图。如图8所示的数据处理装置900包括获取单元901、图像分析单元902和决策分析单元903。
获取单元901,用于获取多个图像块,该多个图像块由待分析的病理图像分割得到。
图像分析单元902,用于将该多个图像块输入至第一分析模型,获得第一分析结果,其中,该第一分析模型根据可疑病变成分的数目或面积对该多个图像块中的每个图像块进行分类,该第一分析结果指示该每个图像块的类型为第一类型或第二类型,该第一类型表示图像块中的可疑病变成分的数目或面积大于或等于预设阈值,该第二类型表示图像块中的可疑病变成分的数目或面积小于所述预设阈值。第一分析模型可以是前述方法实施例中的图像块分类模型,第一分析结果即前述方法实施例中的图像块分类结果。
图像分析单元902,还用于将该第一分析结果中的至少一个第二类型图像块输入至第二分析模型,获得第二分析结果,其中,该第二分析模型分析输入的每个第二类型图像块的可疑病变成分的位置。第二分析模型可以是前述方法实施例中的可疑病变成分检测模型,第二分析结果即前述方法实施例中的检测结果。
决策分析单元903,用于综合该第一分析结果和该第二分析结果获得该病理图像的最终分析结果。
可选的,在一些实施例中,该装置还包括图像质量检测单元904,用于输入该每个图像块至第三分析模型,获得第三分析结果,其中,该第三分析模型预测该每个图像块的图像质量。决策分析单元903,具体用于综合该第一分析结果、该第二分析结果和该第三分析结果获得该病理图像的最终分析结果。第三分析模型可以是前述方法实施例中的图像质量预测模型,第三分析结果即前述方法实施例中的质量信息。
可选的,在一些实施例中,获取单元901,具体用于获取该待分析的病理图像被分割后形成的多个初始图像块;输入该多个初始图像块中的每个初始图像块至风格迁移模型,获得该多个图像块,其中,该风格迁移模型对该每个初始图像块的风格进行转换。
可选的,在一些实施例中,该装置还包括第一训练单元905,用于根据第一训练数据集对初始第一分析模型进行训练,获得该第一分析模型,其中,该初始第一分析模型为人工智能AI模型中的一种,该第一训练数据集包括多个第一训练图像,每个第一训练图像的标签为第一类型或第二类型。
可选的,在一些实施例中,该装置还包括第二训练单元906,用于根据该第二训练数据集对初始第二分析模型进行训练,获得该第二分析模型,其中,该初始第二分析模型为人工智能AI模型中的一种,该第二训练数据集包括多个包含可疑病变成分的第二训练图像,每个第二训练图像的标签为该可疑病变成分在训练图像中的位置信息。
可选的,在一些实施例中,该装置还包括第三训练单元907,用于根据该第三训练数据集对初始第三分析模型进行训练,获得该第三分析模型,其中,该初始第三分析模型为人工智能AI模型中的一种,该第三训练数据集包括多个第三训练图像,每个第三训练图像的标签为该每个第三训练图像的图像质量类型。
可选的,在一些实施例中,决策分析单元903具体用于输入该第一分析结果和该第二分析结果至判决模型,获得该病理图像的最终分析结果。
获取单元901、图像分析单元902、决策分析单元903、图像质量检测单元904、第一训练单元905、第二训练单元906和第三训练单元907的具体功能和有益效果可以参见上述方法实施例中的描述。例如图像分析单元902可以执行上述步骤605和步骤606;决策分析单元903可以执行上述步骤607;图像质量检测单元904可以执行上述步骤602和步骤603。
可以理解的是,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。例如,图像分析单元902可以被统一描述为图像分析组件;决策分析单元903可以被描述为决策分析组件。获取单元901可以进一步被划分为采集单元和风格迁移单元。该采集单元可以被描述为图像采集组件。该风格迁移单元和图像质量检测单元904可以被统一描述为图像预处理组件。第一训练单元905、第二训练单元906和第三训练单元907可以被统一描述为训练系统。
可选的,在一些实施例中,如图8所示的数据处理装置900中的各个单元可以通过不同的设备实现。例如,第一训练单元905、第二训练单元906和第三训练单元907可以通过一个独立的训练设备实现。该训练设备可以将训练好的模型发送至数据处理装置900。
本申请还提供一种如图4所示的计算设备400,计算设备400中的处理器402读取存储器404存储的可执行代码以执行前述处理病理图像的方法。
由于本申请的数据处理装置900中的各个单元可以分别部署在多个计算设备上,因此,本申请还提供一种如图9所示的计算设备系统,该计算设备系统包括多个计算设备1000,每个计算设备1000包括总线1001、处理器1002、通信接口1003和存储器1004。处理器1002、存储器1004和通信接口1003之间通过总线1001通信。
其中,处理器1002可以为CPU。存储器1004可以包括易失性存储器(英文:volatile memory),例如RAM。存储器1004还可以包括非易失性存储器,例如ROM,快闪存储器,HDD或SSD。存储器1004中存储有可执行代码,处理器1002执行该可执行代码以执行处理图像的部分方法。存储器1004中还可以包括操作系统等其他运行进程所需的软件模块。操作系统可以为LINUX™、UNIX™、WINDOWS™等。
每个计算设备1000间通过通信网络建立通信通路。每个计算设备1000上运行获取单元901、图像分析单元902、决策分析单元903、图像质量检测单元904、第一训练单元905、第二训练单元906和第三训练单元907中的任意一个或多个。任一计算设备1000可以为云数据中心中的计算设备,或边缘数据中心中的计算设备,或终端计算设备。
上述各个附图对应的流程的描述各有侧重,某个流程中没有详述的部分,可以参见其他流程的相关描述。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。计算机程序产品包括一个或多个计算机指令,在计算机上加载和执行这些计算机程序指令时,全部或部分地产生按照本发明实施例图6所述的流程或功能。
根据本申请实施例提供的方法,本申请还提供一种计算机程序产品,该计算机程序产品包括:计算机程序代码,当该计算机程序代码在计算机上运行时,使得该计算机执行图6所示实施例中任意一个实施例的方法。
根据本申请实施例提供的方法,本申请还提供一种非瞬态的可读存储介质,该非瞬态的可读存储介质存储有程序代码,当该程序代码在计算机上运行时,使得该计算机执行图6所示实施例中任意一个实施例的方法。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (17)

  1. 一种处理图像的方法,其特征在于,包括:
    获取多个图像块,所述多个图像块由待分析的病理图像分割得到;
    将所述多个图像块输入至第一分析模型,获得第一分析结果,其中,所述第一分析模型根据可疑病变成分的数目或面积对所述多个图像块中的每个图像块进行分类,所述第一分析结果指示所述每个图像块的类型为第一类型或第二类型,所述第一类型表示图像块中的可疑病变成分的数目或面积大于或等于预设阈值,所述第二类型表示图像块中的可疑病变成分的数目或面积小于所述预设阈值;
    将所述第一分析结果中的至少一个第二类型图像块输入至第二分析模型,获得第二分析结果,其中,所述第二分析模型分析输入的每个第二类型图像块的可疑病变成分的位置;
    综合所述第一分析结果和所述第二分析结果获得所述病理图像的最终分析结果。
  2. 如权利要求1所述的方法,其特征在于,所述方法还包括:
    输入所述每个图像块至第三分析模型,获得第三分析结果,其中,所述第三分析模型预测所述每个图像块的图像质量;
    所述综合所述第一分析结果和所述第二分析结果获得所述病理图像的最终分析结果,包括:
    综合所述第一分析结果、所述第二分析结果和所述第三分析结果获得所述病理图像的最终分析结果。
  3. 如权利要求1或2所述的方法,其特征在于,所述获取多个图像块,包括:
    获取所述待分析的病理图像被分割后形成的多个初始图像块;
    输入所述多个初始图像块中的每个初始图像块至风格迁移模型,获得所述多个图像块,其中,所述风格迁移模型对所述每个初始图像块的风格进行转换。
  4. The method according to any one of claims 1 to 3, wherein the method further comprises:
    training an initial first analysis model according to a first training data set to obtain the first analysis model, wherein the initial first analysis model is one of artificial intelligence (AI) models, the first training data set comprises a plurality of first training images, and a label of each first training image is the first type or the second type.
  5. The method according to any one of claims 1 to 4, wherein the method further comprises:
    training an initial second analysis model according to a second training data set to obtain the second analysis model, wherein the initial second analysis model is one of AI models, the second training data set comprises a plurality of second training images containing suspicious lesion components, and a label of each second training image is position information of the suspicious lesion components in the training image.
  6. The method according to any one of claims 1 to 5, wherein the method further comprises:
    training an initial third analysis model according to a third training data set to obtain the third analysis model, wherein the initial third analysis model is one of AI models, the third training data set comprises a plurality of third training images, and a label of each third training image is an image quality type of the third training image.
  7. The method according to any one of claims 1 to 6, wherein the combining the first analysis result and the second analysis result to obtain a final analysis result of the pathological image comprises:
    inputting the first analysis result and the second analysis result into a judgment model to obtain the final analysis result of the pathological image.
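The judgment model of claim 7 is likewise unspecified in the claims. The stand-in below merges the two results with simple rules — a block is suspicious if the first model labeled it first-type, or if the second model located at least one lesion in it — and all names here are illustrative assumptions, not the application's own model.

```python
# Rule-based stand-in for the judgment model of claim 7.
def judgment_model(first_result, second_result):
    """Combine block-level results into one image-level final result."""
    suspicious = sorted(
        {b for b, t in first_result.items() if t == "first"}
        | {b for b, positions in second_result.items() if positions}
    )
    return {"positive": bool(suspicious), "suspicious_blocks": suspicious}

final = judgment_model(
    first_result={"p0": "first", "p1": "second", "p2": "second"},
    second_result={"p1": [(3, 4)], "p2": []},
)
```

A trained decision model could replace these rules without changing the interface: it still consumes the first and second analysis results and emits one final result for the whole pathological image.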
  8. A data processing apparatus, comprising:
    an obtaining unit, configured to obtain a plurality of image blocks, the plurality of image blocks being obtained by segmenting a pathological image to be analyzed;
    an image analysis unit, configured to input the plurality of image blocks into a first analysis model to obtain a first analysis result, wherein the first analysis model classifies each image block of the plurality of image blocks according to a quantity or an area of suspicious lesion components, the first analysis result indicates that a type of each image block is a first type or a second type, the first type indicates that the quantity or area of suspicious lesion components in an image block is greater than or equal to a preset threshold, and the second type indicates that the quantity or area of suspicious lesion components in an image block is less than the preset threshold;
    the image analysis unit being further configured to input at least one second-type image block in the first analysis result into a second analysis model to obtain a second analysis result, wherein the second analysis model analyzes a position of the suspicious lesion components of each input second-type image block; and
    a decision analysis unit, configured to combine the first analysis result and the second analysis result to obtain a final analysis result of the pathological image.
  9. The apparatus according to claim 8, wherein the apparatus further comprises an image quality detection unit, configured to input each image block into a third analysis model to obtain a third analysis result, wherein the third analysis model predicts image quality of each image block; and
    the decision analysis unit is specifically configured to combine the first analysis result, the second analysis result, and the third analysis result to obtain the final analysis result of the pathological image.
  10. The apparatus according to claim 8 or 9, wherein the obtaining unit is specifically configured to: obtain a plurality of initial image blocks formed by segmenting the pathological image to be analyzed; and input each initial image block of the plurality of initial image blocks into a style transfer model to obtain the plurality of image blocks, wherein the style transfer model converts a style of each initial image block.
  11. The apparatus according to any one of claims 8 to 10, wherein the apparatus further comprises a first training unit, configured to train an initial first analysis model according to a first training data set to obtain the first analysis model, wherein the initial first analysis model is one of artificial intelligence (AI) models, the first training data set comprises a plurality of first training images, and a label of each first training image is the first type or the second type.
  12. The apparatus according to any one of claims 8 to 11, wherein the apparatus further comprises a second training unit, configured to train an initial second analysis model according to a second training data set to obtain the second analysis model, wherein the initial second analysis model is one of AI models, the second training data set comprises a plurality of second training images containing suspicious lesion components, and a label of each second training image is position information of the suspicious lesion components in the training image.
  13. The apparatus according to any one of claims 8 to 12, wherein the apparatus further comprises a third training unit, configured to train an initial third analysis model according to a third training data set to obtain the third analysis model, wherein the initial third analysis model is one of AI models, the third training data set comprises a plurality of third training images, and a label of each third training image is an image quality type of the third training image.
  14. The apparatus according to any one of claims 8 to 13, wherein the decision analysis unit is specifically configured to input the first analysis result and the second analysis result into a judgment model to obtain the final analysis result of the pathological image.
  15. A computing device system, comprising at least one memory and at least one processor, the at least one memory being configured to store computer instructions;
    when the at least one processor executes the computer instructions, the computing device system performs the method according to any one of claims 1 to 7.
  16. A non-transitory readable storage medium, wherein the non-transitory readable storage medium stores computer program code, and when the computer program code is executed by a computing device, the computing device performs the method according to any one of claims 1 to 7.
  17. A computer program product, wherein the computer program product comprises computer instructions, and when the computer instructions are executed by a computing device, the computing device performs the method according to any one of claims 1 to 7.
PCT/CN2019/121731 2019-11-28 2019-11-28 Image processing method, apparatus, and system WO2021102844A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201980075397.1A CN113261012B (zh) 2019-11-28 2019-11-28 Image processing method, apparatus, and system
EP19954469.3A EP3971762A4 (en) 2019-11-28 2019-11-28 IMAGE PROCESSING METHOD, DEVICE AND SYSTEM
PCT/CN2019/121731 WO2021102844A1 (zh) 2019-11-28 2019-11-28 Image processing method, apparatus, and system
US17/590,005 US20220156931A1 (en) 2019-11-28 2022-02-01 Image processing method, apparatus, and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/121731 WO2021102844A1 (zh) 2019-11-28 2019-11-28 Image processing method, apparatus, and system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/590,005 Continuation US20220156931A1 (en) 2019-11-28 2022-02-01 Image processing method, apparatus, and system

Publications (1)

Publication Number Publication Date
WO2021102844A1 true WO2021102844A1 (zh) 2021-06-03

Family

ID=76129876

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/121731 WO2021102844A1 (zh) 2019-11-28 2019-11-28 Image processing method, apparatus, and system

Country Status (4)

Country Link
US (1) US20220156931A1 (zh)
EP (1) EP3971762A4 (zh)
CN (1) CN113261012B (zh)
WO (1) WO2021102844A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220083762A1 (en) * 2020-09-15 2022-03-17 Shenzhen Imsight Medical Technology Co., Ltd. Digital image classification method for cervical fluid-based cells based on a deep learning detection model

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN111753908A (zh) * 2020-06-24 2020-10-09 Beijing Baidu Netcom Science and Technology Co., Ltd. Image classification method and apparatus, and style transfer model training method and apparatus
CN115861720B (zh) * 2023-02-28 2023-06-30 Guangdong Laboratory of Artificial Intelligence and Digital Economy (Guangzhou) Few-shot subclass image classification and recognition method

Citations (9)

Publication number Priority date Publication date Assignee Title
CN106127255A (zh) * 2016-06-29 2016-11-16 Shenzhen Institutes of Advanced Technology Classification method and system for cancer digital pathological cell images
CN108596882A (zh) * 2018-04-10 2018-09-28 Sun Yat-sen University Cancer Center Method and apparatus for recognizing pathological pictures
CN108765408A (zh) * 2018-05-31 2018-11-06 Hangzhou Tonghui Technology Co., Ltd. Method for constructing a virtual case library of cancer pathological images and multi-scale cancer detection system based on a convolutional neural network
WO2019005722A1 (en) * 2017-06-26 2019-01-03 The Research Foundation For The State University Of New York SYSTEM, METHOD AND COMPUTER-ACCESSIBLE MEDIA FOR VIRTUAL PANCREATOGRAPHY
CN109493308A (zh) * 2018-11-14 2019-03-19 Jilin University Medical image synthesis and classification method based on conditional multi-discriminator generative adversarial network
CN109583440A (zh) * 2017-09-28 2019-04-05 Beijing Xigema Liedun Information Technology Co., Ltd. Medical image aided diagnosis method and system combining image recognition and report editing
CN110007455A (zh) * 2018-08-21 2019-07-12 Tencent Technology (Shenzhen) Co., Ltd. Pathological microscope, display module, control method, apparatus and storage medium
CN110110750A (zh) * 2019-03-29 2019-08-09 Guangzhou Side Medical Technology Co., Ltd. Classification method and apparatus for original pictures
CN110121749A (zh) * 2016-11-23 2019-08-13 General Electric Company Deep learning medical systems and methods for image acquisition

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN108629772B (zh) * 2018-05-08 2023-10-03 Shanghai SenseTime Intelligent Technology Co., Ltd. Image processing method and apparatus, computer device, and computer storage medium

Patent Citations (9)

Publication number Priority date Publication date Assignee Title
CN106127255A (zh) * 2016-06-29 2016-11-16 Shenzhen Institutes of Advanced Technology Classification method and system for cancer digital pathological cell images
CN110121749A (zh) * 2016-11-23 2019-08-13 General Electric Company Deep learning medical systems and methods for image acquisition
WO2019005722A1 (en) * 2017-06-26 2019-01-03 The Research Foundation For The State University Of New York SYSTEM, METHOD AND COMPUTER-ACCESSIBLE MEDIA FOR VIRTUAL PANCREATOGRAPHY
CN109583440A (zh) * 2017-09-28 2019-04-05 Beijing Xigema Liedun Information Technology Co., Ltd. Medical image aided diagnosis method and system combining image recognition and report editing
CN108596882A (zh) * 2018-04-10 2018-09-28 Sun Yat-sen University Cancer Center Method and apparatus for recognizing pathological pictures
CN108765408A (zh) * 2018-05-31 2018-11-06 Hangzhou Tonghui Technology Co., Ltd. Method for constructing a virtual case library of cancer pathological images and multi-scale cancer detection system based on a convolutional neural network
CN110007455A (zh) * 2018-08-21 2019-07-12 Tencent Technology (Shenzhen) Co., Ltd. Pathological microscope, display module, control method, apparatus and storage medium
CN109493308A (zh) * 2018-11-14 2019-03-19 Jilin University Medical image synthesis and classification method based on conditional multi-discriminator generative adversarial network
CN110110750A (zh) * 2019-03-29 2019-08-09 Guangzhou Side Medical Technology Co., Ltd. Classification method and apparatus for original pictures

Non-Patent Citations (1)

Title
See also references of EP3971762A4 *

Cited By (2)

Publication number Priority date Publication date Assignee Title
US20220083762A1 (en) * 2020-09-15 2022-03-17 Shenzhen Imsight Medical Technology Co., Ltd. Digital image classification method for cervical fluid-based cells based on a deep learning detection model
US11468693B2 (en) * 2020-09-15 2022-10-11 Shenzhen Imsight Medical Technology Co., Ltd. Digital image classification method for cervical fluid-based cells based on a deep learning detection model

Also Published As

Publication number Publication date
CN113261012B (zh) 2022-11-11
EP3971762A4 (en) 2022-07-27
EP3971762A1 (en) 2022-03-23
US20220156931A1 (en) 2022-05-19
CN113261012A (zh) 2021-08-13

Similar Documents

Publication Publication Date Title
US11842556B2 (en) Image analysis method, apparatus, program, and learned deep learning algorithm
JP6968177B2 (ja) Computer scoring based on primary stain and immunohistochemistry images
JP7197584B2 (ja) Method for storing and reading out digital pathology analysis results
US20200372635A1 (en) Systems and methods for analysis of tissue images
US20220156931A1 (en) Image processing method, apparatus, and system
US20220351860A1 (en) Federated learning system for training machine learning algorithms and maintaining patient privacy
JP7422235B2 (ja) Non-tumor segmentation to support tumor detection and analysis
Guo et al. Deep learning for assessing image focus for automated cervical cancer screening
JP2006153742A (ja) Pathological diagnosis support apparatus, pathological diagnosis support program, pathological diagnosis support method, and pathological diagnosis support system
US20240079116A1 (en) Automated segmentation of artifacts in histopathology images
US11959848B2 (en) Method of storing and retrieving digital pathology analysis results
EP3789914A1 (en) Methods and systems for automated assessment of respiratory cytology specimens
Tosun et al. Histological detection of high-risk benign breast lesions from whole slide images
CN113222928B (zh) Urine-cytology artificial intelligence system for urothelial carcinoma identification
CN117529750A (zh) Digital synthesis of histological stains using multiplexed immunofluorescence imaging
Benny et al. Semantic segmentation in immunohistochemistry breast cancer image using deep learning
JP7431753B2 (ja) Sensitivity analysis for digital pathology
Bueno-Crespo et al. Diagnosis of Cervical Cancer Using a Deep Learning Explainable Fusion Model
Selcuk et al. Automated HER2 Scoring in Breast Cancer Images Using Deep Learning and Pyramid Sampling
Scognamiglio et al. BRACS: A Dataset for BReAst Carcinoma Subtyping in H&E Histology Images
JP2024507290A (ja) Method and system for breast ultrasound image diagnosis using weakly-supervised deep learning artificial intelligence
CN117425912A (zh) Conversion of histochemical staining images into synthetic immunohistochemistry (IHC) images
CN116868229A (zh) Systems and methods for biomarker detection in digitized pathology samples
Carter et al. Histological Detection of High-Risk Benign Breast Lesions from Whole Slide Images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19954469

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019954469

Country of ref document: EP

Effective date: 20211216

NENP Non-entry into the national phase

Ref country code: DE