WO2021169161A1 - Image recognition method, training method for recognition model, and related apparatus and device - Google Patents

Image recognition method, training method for recognition model, and related apparatus and device

Info

Publication number
WO2021169161A1
WO2021169161A1 · PCT/CN2020/103628 · CN2020103628W
Authority
WO
WIPO (PCT)
Prior art keywords
image
detection
model
target cell
sub
Prior art date
Application number
PCT/CN2020/103628
Other languages
English (en)
French (fr)
Inventor
杨爽
李嘉辉
黄晓迪
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Priority to KR1020217021261A (published as KR20210110823A)
Priority to JP2021576344A (published as JP2022537781A)
Publication of WO2021169161A1

Classifications

    • G16H 30/40 — Healthcare informatics: ICT specially adapted for processing medical images, e.g. editing
    • G16H 30/20 — Healthcare informatics: ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H 50/50 — Healthcare informatics: ICT for simulation or modelling of medical disorders
    • G16H 50/70 — Healthcare informatics: ICT for mining of medical data, e.g. analysing previous cases of other patients
    • G06F 18/24 — Pattern recognition: classification techniques
    • G06T 5/90 — Image enhancement: dynamic range modification of images or parts thereof
    • G06T 7/11 — Image analysis: region-based segmentation
    • G06T 2207/20012 — Special algorithmic details: locally adaptive image processing
    • G06T 2207/20081 — Special algorithmic details: training; learning
    • G06T 2207/20084 — Special algorithmic details: artificial neural networks [ANN]
    • G06V 2201/03 — Recognition of patterns in medical or anatomical images

Definitions

  • This application relates to the field of artificial intelligence technology, in particular to an image recognition method, a training method of a recognition model, and related devices and equipment.
  • the embodiments of the present application provide an image recognition method, a training method of a recognition model, and related devices and equipment.
  • The embodiment of the present application provides an image recognition method, including: acquiring a pathological image to be recognized; using a detection sub-model in a recognition model to perform target detection on the pathological image to be recognized to obtain a detection area containing target cells in the pathological image to be recognized; and using a classification sub-model in the recognition model to perform a first classification process on the detection area to obtain the category of the target cells.
  • By using the detection sub-model in the recognition model to perform target detection on the acquired pathological image to be recognized, the detection area containing the target cells is obtained, and the classification sub-model in the recognition model is then used to perform the first classification process on the detection area to obtain the category of the target cells. In this way, the target cells are first detected and then classified, with detection and classification separated, so that the target cells in the pathological image can be identified accurately and efficiently.
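The detect-then-classify flow described above can be sketched in a few lines of Python (a minimal illustration; `detect_regions` and `classify_region` are hypothetical stand-ins for the detection and classification sub-models, not the patented networks):

```python
# Minimal sketch of the two-stage pipeline: detect first, then classify.
# The detection/classification logic below is an invented stand-in.

def detect_regions(image):
    """Detection sub-model stand-in: return (cx, cy, length, width) boxes."""
    boxes = []
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if value > 0.5:  # treat bright pixels as cell hits
                boxes.append((x, y, 10, 20))  # fixed-size box around the hit
    return boxes

def classify_region(image, box):
    """Classification sub-model stand-in: assign a lesion category to one area."""
    cx, cy, _, _ = box
    return "HSIL" if image[cy][cx] > 0.9 else "ASC-H"

def recognize(image):
    """Detection and classification are separated: classify each detected area."""
    return [(box, classify_region(image, box)) for box in detect_regions(image)]

image = [[0.0, 0.0, 0.95],
         [0.0, 0.6, 0.0]]
results = recognize(image)
print(results)  # [((2, 0, 10, 20), 'HSIL'), ((1, 1, 10, 20), 'ASC-H')]
```

Because the two stages are independent, the detector and classifier can be trained and replaced separately, which is the separation the text emphasizes.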
  • Using the detection sub-model in the recognition model to perform target detection on the pathological image to be recognized to obtain the detection area containing the target cells includes: using the first part of the detection sub-model to perform a second classification process on the pathological image to be recognized to obtain an image classification result, where the image classification result is used to indicate whether the pathological image to be recognized contains target cells; and, if the image classification result indicates that the pathological image to be recognized contains target cells, using the second part of the detection sub-model to perform area detection on the pathological image to be recognized to obtain the detection area containing the target cells.
  • In this way, the image classification result of the pathological image to be recognized is obtained, where the image classification result is used to indicate whether the pathological image to be recognized contains target cells. When the image classification result indicates that the pathological image to be recognized contains target cells, the second part of the detection sub-model is used to perform area detection on the pathological image to be recognized to obtain the detection area containing the target cells. Therefore, dynamic detection of target cells can be realized and the efficiency of target cell identification can be improved.
  • The method further includes: if the image classification result indicates that the pathological image to be recognized does not contain target cells, the first part outputs a prompt indicating that the pathological image to be recognized does not contain target cells. Since no further processing is needed in this case, dynamic detection of target cells can be realized and the efficiency of target cell identification can be improved.
  • Using the detection sub-model in the recognition model to perform target detection on the pathological image to be recognized also includes: using the third part of the detection sub-model to perform feature extraction on the pathological image to be recognized to obtain the image features of the pathological image to be recognized. By extracting features from the pathological image first and then performing the other processing of the detection sub-model on this basis, the operating efficiency of the model can be improved.
  • Using the first part of the detection sub-model to perform the second classification process on the pathological image to be recognized to obtain the image classification result includes: using the first part of the detection sub-model to perform the second classification process on the image features to obtain the image classification result of the pathological image to be recognized.
  • Using the second part of the detection sub-model to perform area detection on the pathological image to be recognized to obtain the detection area containing the target cells includes: using the second part of the detection sub-model to perform area detection on the image features to obtain the detection area containing the target cells.
  • the first part is a global classification network
  • the second part is an image detection network
  • the third part is a feature extraction network.
  • The feature extraction network includes at least one of a deformable convolution layer and a global information enhancement module.
  • By setting the feature extraction network to include a deformable convolution layer, the accuracy of identifying target cells with varied morphology can be improved; by setting the feature extraction network to include a global information enhancement module, long-range dependency features can be obtained, which helps to improve the accuracy of target cell recognition.
  • Using the classification sub-model in the recognition model to perform the first classification process on the detection area to obtain the category of the target cells includes: using the classification sub-model to perform feature extraction on the detection area of the pathological image to be identified to obtain the image features of the detection area; and performing the first classification process on the image features of the detection area to obtain the category of the target cells.
  • the image feature of the detection area is obtained, and the first classification process is performed on the image feature of the detection area to obtain the target cell category, which can help improve the efficiency of the classification process.
  • the target cell includes any one of a single diseased cell and a cluster of diseased cells, and the type of the target cell is used to indicate the degree of disease of the target cell.
  • Since the target cell includes either a single diseased cell or a diseased cell cluster, both single diseased cells and diseased cell clusters can be identified; and since the category of the target cell indicates the degree of disease of the target cell, lesion grading of the target cells can be achieved.
  • the embodiment of the application provides a method for training a recognition model.
  • the recognition model includes a detection sub-model and a classification sub-model.
  • The training method includes: acquiring a first sample image and a second sample image, wherein the first sample image is marked with the actual area corresponding to the target cells and the second sample image is marked with the actual category of the target cells; using the detection sub-model to perform target detection on the first sample image to obtain the predicted area containing the target cells, and using the classification sub-model to perform the first classification process on the second sample image to obtain the predicted category of the target cells; determining the first loss value of the detection sub-model based on the actual area and the predicted area, and determining the second loss value of the classification sub-model based on the actual category and the predicted category; and using the first loss value and the second loss value to correspondingly adjust the parameters of the detection sub-model and the classification sub-model.
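The two loss values described above can be illustrated with a small sketch. The concrete loss functions (an L1 loss on box coordinates for the detector, cross-entropy for the classifier) are common choices and an assumption here, not something the application mandates:

```python
import math

# Illustrative computation of the two separate loss values described above.
# L1 box loss and cross-entropy are assumed stand-ins for the actual losses.

def box_l1_loss(actual_box, predicted_box):
    """First loss value: mean absolute error between actual and predicted area."""
    return sum(abs(a - p) for a, p in zip(actual_box, predicted_box)) / len(actual_box)

def cross_entropy_loss(actual_index, predicted_probs):
    """Second loss value: negative log-likelihood of the actual category."""
    return -math.log(predicted_probs[actual_index])

# Boxes as (cx, cy, length, width), matching the representation used later.
first_loss = box_l1_loss((50, 60, 10, 20), (52, 58, 10, 22))
second_loss = cross_entropy_loss(0, [0.7, 0.2, 0.1])  # actual category is index 0
print(first_loss, second_loss)
# Each loss value is then used to adjust only its own sub-model's parameters,
# which is the "correspondingly adjust" step in the text.
```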
  • the target cell can be detected first, and then the target cell can be classified, and the detection and classification can be separated, so as to solve the problem of unbalanced sample data categories, which can help improve the accuracy of the trained model. This can help improve the accuracy and efficiency of target cell identification.
  • Using the detection sub-model to perform target detection on the first sample image to obtain the predicted area containing the target cells includes: performing a second classification process on the first sample image to obtain the image classification result of the first sample image, where the image classification result is used to indicate whether the first sample image contains target cells; and, if the image classification result indicates that the first sample image contains target cells, performing area detection on the first sample image to obtain the predicted area containing the target cells.
  • By first classifying the first sample image and then performing area detection on it to obtain the predicted area containing the target cells, the model's ability to distinguish positive and negative samples can be enhanced and the probability of false detection can be reduced, which is conducive to improving the accuracy of the trained model and thus the accuracy of target cell recognition.
  • Before using the detection sub-model to perform target detection on the first sample image to obtain the predicted area containing the target cells, and before using the classification sub-model to perform the first classification process on the second sample image to obtain the predicted category of the target cells, the method further includes: performing data enhancement on the first sample image and the second sample image; and/or normalizing the pixel values in the first sample image and the second sample image. The target cell includes either a single diseased cell or a diseased cell cluster, and the category of the target cell is used to indicate the degree of disease of the target cell.
  • By performing data enhancement on the first sample image and the second sample image, sample diversity can be improved, which helps to avoid over-fitting and improves the generalization performance of the model; by normalizing the pixel values of the first sample image and the second sample image, the convergence speed of the model can be improved. Since the target cell includes either a single diseased cell or a diseased cell cluster, both can be identified, and since the category of the target cell indicates the degree of disease, lesion grading of the target cells can be achieved.
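The pixel-value normalization mentioned above might look like the following; scaling 8-bit values into [0, 1] is one common convention and an assumption here, not a detail specified by the application:

```python
# Sketch of pixel-value normalization before training, as described above.
# Dividing by 255 (the 8-bit maximum) is an assumed convention.

def normalize(image, max_value=255.0):
    """Scale raw pixel values into [0, 1] to help the model converge faster."""
    return [[pixel / max_value for pixel in row] for row in image]

sample = [[0, 128, 255]]
normalized = normalize(sample)
print(normalized)  # values now lie in [0, 1]
```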
  • An embodiment of the application provides an image recognition device, including: an image acquisition module, an image detection module, and an image classification module.
  • the image acquisition module is configured to acquire pathological images to be identified;
  • the image detection module is configured to use the detection sub-model in the recognition model to treat The pathological image is recognized for target detection to obtain a detection area containing the target cell in the pathological image to be recognized;
  • the image classification module is configured to perform a first classification process on the detection area using the classification sub-model in the recognition model to obtain the target cell category.
  • An embodiment of the application provides a training device for a recognition model.
  • the recognition model includes a detection sub-model and a classification sub-model.
  • The training device for the recognition model includes: an image acquisition module, a model execution module, a loss determination module, and a parameter adjustment module. The image acquisition module is configured to obtain a first sample image and a second sample image, wherein the first sample image is marked with the actual area corresponding to the target cells and the second sample image is marked with the actual category of the target cells;
  • the model execution module is configured to use the detection sub-model to perform target detection on the first sample image to obtain the predicted area containing the target cells, and to use the classification sub-model to perform the first classification process on the second sample image to obtain the predicted category of the target cells;
  • the loss determination module is configured to determine the first loss value of the detection sub-model based on the actual area and the predicted area, and to determine the second loss value of the classification sub-model based on the actual category and the predicted category; and the parameter adjustment module is configured to use the first loss value and the second loss value to correspondingly adjust the parameters of the detection sub-model and the classification sub-model.
  • An embodiment of the present application provides an electronic device including a memory and a processor coupled to each other, where the processor is configured to execute program instructions stored in the memory to implement the image recognition method in one or more of the foregoing embodiments, or the training method of the recognition model in one or more of the foregoing embodiments.
  • The embodiments of the present application provide a computer-readable storage medium with program instructions stored thereon; when executed by a processor, the program instructions implement the image recognition method in one or more of the above embodiments, or the training method of the recognition model in one or more of the above embodiments.
  • The embodiments of the present application provide a computer program including computer-readable code; when the computer-readable code runs in an electronic device, the processor in the electronic device executes the image recognition method in one or more of the above embodiments, or the training method of the recognition model in one or more of the above embodiments.
  • In the above solution, the detection sub-model in the recognition model is used to perform target detection on the acquired pathological image to be recognized to obtain the detection area containing the target cells, and the classification sub-model in the recognition model is then used to perform the first classification process on the detection area to obtain the category of the target cells. The target cells are thus first detected and then classified, with detection and classification separated, so that the target cells in the pathological image can be identified accurately and efficiently.
  • FIG. 1 is a schematic flowchart of an image recognition method provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of a state of an image recognition method provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of an image recognition method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a state of an image recognition method provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a method for training a recognition model provided by an embodiment of the present application
  • FIG. 6 is a schematic structural diagram of an image recognition device provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural frame diagram of a training device for a recognition model provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a computer-readable storage medium provided by an embodiment of the present application.
  • The terms "system" and "network" in this document are often used interchangeably.
  • The term "and/or" in this document merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist at the same time, or B exists alone.
  • The character "/" in this document generally indicates that the associated objects before and after it are in an "or" relationship.
  • "A plurality of" in this document means two or more.
  • FIG. 1 is a schematic flowchart of an image recognition method provided by an embodiment of the present application. Specifically, it can include the following steps:
  • Step S11 Obtain a pathological image to be identified.
  • the pathological image to be recognized may include, but is not limited to: cervical pathological image, liver pathological image, and kidney pathological image, which are not limited here.
  • Step S12 Use the detection sub-model in the recognition model to perform target detection on the pathological image to be recognized, to obtain a detection area containing the target cell in the pathological image to be recognized.
  • the recognition model includes a detection sub-model.
  • The detection sub-model can use the Faster RCNN (Faster Region-based Convolutional Neural Network) model.
  • the detection sub-model may also use Fast RCNN, YOLO (You Only Look Once), etc., which are not limited here.
  • The detection sub-model is used to detect the pathological image to be recognized to obtain the detection area containing the target cells. For example, a cervical pathological image can be detected to obtain the detection area containing squamous epithelial cells in the cervical pathological image; or a liver pathological image can be detected to obtain the detection area containing diseased cells in the liver pathological image.
  • When the pathological image to be identified is another type of image, this can be deduced by analogy; examples are not given one by one here.
  • The detection area can be represented by the center coordinates of a rectangle containing the target cell together with the length and width of the rectangle. For example, (50, 60, 10, 20) can indicate a rectangle in the pathological image to be identified centered at pixel coordinates (50, 60) with length 10 and width 20. The detection area can also be expressed by the center coordinates of the rectangle and the ratios of its length and width to those of a preset rectangle; for example, if the preset rectangle has length 10 and width 20, then (50, 60, 1, 1) can indicate a rectangle centered at pixel coordinates (50, 60) with length 10 and width 20. This is not limited here.
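Converting between the two representations described above (absolute length/width versus ratios relative to a preset rectangle) can be sketched as follows; the preset rectangle of length 10 and width 20 matches the example in the text:

```python
# Convert between the two detection-area encodings described above:
# (cx, cy, length, width) in pixels, and (cx, cy, length_ratio, width_ratio)
# relative to a preset rectangle.

def to_ratio(box, preset=(10, 20)):
    """Absolute (cx, cy, length, width) -> ratios relative to the preset rectangle."""
    cx, cy, length, width = box
    return (cx, cy, length / preset[0], width / preset[1])

def to_absolute(box, preset=(10, 20)):
    """Ratio encoding back to absolute pixel length and width."""
    cx, cy, length_ratio, width_ratio = box
    return (cx, cy, length_ratio * preset[0], width_ratio * preset[1])

ratio_box = to_ratio((50, 60, 10, 20))
print(ratio_box)               # (50, 60, 1.0, 1.0)
print(to_absolute(ratio_box))  # (50, 60, 10.0, 20.0)
```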
  • the pathological image to be recognized may also be an image that does not contain target cells.
  • When the detection sub-model in the recognition model performs target detection on such a pathological image and no detection area is obtained, a prompt that the pathological image to be identified does not contain target cells can be output, thereby eliminating the subsequent classification step and improving the operating efficiency of the model. For example, a prompt that the cervical pathological image does not contain squamous epithelial cells can be output directly; other pathological images can be deduced by analogy and are not exemplified one by one here.
  • FIG. 2 is a schematic diagram of a state of an image recognition method provided by an embodiment of the present application.
  • the pathological image to be recognized is a cervical pathological image
  • the pathological image to be recognized is subjected to target detection through the detection sub-model in the recognition model, and two detection areas containing target cells are obtained.
  • Step S13 Perform a first classification process on the detection area by using the classification sub-model in the recognition model to obtain the target cell category.
  • the recognition model may also include a classification sub-model.
  • the classification sub-model may use the EfficientNet network model.
  • the classification sub-model can also use ResNet, MobileNet, etc., which are not limited here.
  • Using the classification sub-model in the recognition model to classify the detection area, the category of the target cells can be obtained. The classification sub-model can be used to extract features from the detection area of the pathological image to be identified to obtain the image features of the detection area, and the first classification process is then performed on these image features to obtain the category of the target cells.
  • For example, the image features of the detection area can be pooled and passed through a fully connected layer to obtain the category of the target cells; details are not repeated here.
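The pooling and fully connected steps mentioned above can be sketched as follows (the weights are invented for illustration; a trained classifier such as EfficientNet would learn them):

```python
import math

# Sketch of the classification head described above: global average pooling
# over the detection area's feature map, a fully connected layer, then softmax.
# Feature values and weights are invented for illustration.

def global_average_pool(feature_map):
    """Collapse each channel's HxW values into a single pooled value."""
    return [sum(sum(row) for row in channel) / (len(channel) * len(channel[0]))
            for channel in feature_map]

def fully_connected(features, weights, biases):
    """One dense layer: logits = W @ features + b."""
    return [sum(w * f for w, f in zip(row, features)) + b
            for row, b in zip(weights, biases)]

def softmax(logits):
    exps = [math.exp(v - max(logits)) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

feature_map = [[[1.0, 3.0], [5.0, 7.0]],   # channel 0 -> pooled 4.0
               [[2.0, 2.0], [2.0, 2.0]]]   # channel 1 -> pooled 2.0
pooled = global_average_pool(feature_map)
logits = fully_connected(pooled, weights=[[1.0, 0.0], [0.0, 1.0]], biases=[0.0, 0.0])
probs = softmax(logits)
category = max(range(len(probs)), key=probs.__getitem__)
print(pooled, category)  # [4.0, 2.0] 0
```

The softmax output also yields the per-category confidence discussed later in this document.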
  • the type of the target cell may indicate the degree of the lesion of the target cell.
  • The target cells may specifically include, but are not limited to, the following categories: High-grade Squamous Intraepithelial Lesion (HSIL), Low-grade Squamous Intraepithelial Lesion (LSIL), Atypical Squamous Cells of Undetermined Significance (ASC-US), and Atypical Squamous Cells - cannot exclude HSIL (ASC-H).
  • the target cell may include any one of a single diseased cell or a diseased cell cluster, so that a single diseased cell or a diseased cell cluster can be identified.
  • the classification sub-models respectively classify the two detection areas detected by the detection sub-models to obtain the types of target cells contained in the two detection areas:
  • The target cells in one detection area are of category high-grade squamous intraepithelial lesion (HSIL), and the target cells in the other detection area are of category atypical squamous cells - cannot exclude HSIL (ASC-H).
  • The classification sub-model may also perform the first classification process on the detection area to obtain the category of the target cells together with its confidence, where the confidence indicates the credibility that the true category of the target cells is the category predicted by the model; the higher the confidence, the higher the credibility. Please continue to refer to Figure 2.
  • the classification sub-models respectively classify the detection area to obtain the target cell type and its confidence.
  • The target cells in one detection area are of category HSIL with a confidence of 0.97 (i.e., 97% confidence), and the target cells in the other detection area are of category ASC-H with a confidence of 0.98 (i.e., 98% confidence).
  • In the above solution, the detection sub-model in the recognition model is used to perform target detection on the acquired pathological image to be recognized to obtain the detection area containing the target cells, and the classification sub-model in the recognition model is then used to perform the first classification process on the detection area to obtain the category of the target cells. The target cells are thus first detected and then classified, with detection and classification separated, so that the target cells in the pathological image can be identified accurately and efficiently.
  • FIG. 3 is a schematic flowchart of an image recognition method provided by an embodiment of the present application. Specifically, it can include the following steps:
  • Step S31 Obtain a pathological image to be identified.
  • Step S32 Use the first part of the detection sub-model to perform classification processing on the pathological image to be recognized to obtain an image classification result of the pathological image to be recognized.
  • The image classification result is used to indicate whether the pathological image to be recognized contains target cells. Specifically, "0" can be used to indicate that the pathological image to be recognized does not contain target cells, and "1" to indicate that it does; this is not limited here.
  • the first part of the detection sub-model is a global classification network.
  • The global classification network is a neural network model. Unlike the classification sub-model in the foregoing embodiments, the global classification network is used to perform a two-classification (binary) process on the pathological image to be recognized to obtain an image classification result indicating whether the pathological image to be recognized contains target cells.
  • the classification processing of the first part of the detection sub-model may be referred to as the second classification processing, which is not limited here.
  • Step S33 Determine whether the result of the image classification indicates that the pathological image to be identified contains target cells, if it is, then step S34 is executed, otherwise, step S36 is executed.
  • It is first determined whether the pathological image to be identified contains target cells; only if it does is the pathological image processed further. In this way, the image-level classification is separated from the detection of the specific areas containing target cells, which can further improve the operating efficiency of the model and the efficiency of target cell recognition in the image.
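The gating described above, where region detection runs only when the image-level classification is positive, can be sketched as follows (function names and the thresholding rule are illustrative assumptions, not the patented networks):

```python
# Sketch of the two-part detection sub-model described above: an image-level
# binary decision gates the (more expensive) region detection step.

def contains_target_cells(image):
    """First part (global classification network) stand-in: image-level decision."""
    return any(value > 0.5 for row in image for value in row)

def detect(image):
    """Second part (image detection network) stand-in: run only on positives."""
    return [(x, y, 10, 20) for y, row in enumerate(image)
            for x, value in enumerate(row) if value > 0.5]

def target_detection(image):
    if not contains_target_cells(image):
        # Negative images skip region detection entirely; the first part
        # outputs the "no target cells" prompt instead.
        return "no target cells detected"
    return detect(image)

print(target_detection([[0.1, 0.2]]))  # no target cells detected
print(target_detection([[0.1, 0.9]]))  # [(1, 0, 10, 20)]
```

Skipping region detection on negative images is what yields the efficiency gain the text claims for dynamic detection.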
  • Step S34 Use the second part of the detection sub-model to perform region detection on the pathological image to be identified to obtain a detection region containing the target cell.
  • the second part of the detection sub-model is an image detection network
  • the image detection network is a neural network model including neurons.
  • the second part can be a Region Proposal Network (RPN); when the detection sub-model is another network model, the second part can be deduced by analogy, and examples are not given one by one here.
  • FIG. 2 is a schematic diagram of a state of an image recognition method provided by an embodiment of the present application.
  • the pathological image to be recognized is a cervical pathological image
  • the pathological image to be recognized is subjected to target detection through the detection sub-model in the recognition model, and two detection areas containing target cells are obtained.
  • the third part of the detection submodel can also be used to perform feature extraction on the pathological image to be recognized to obtain the image features of the pathological image to be recognized.
  • the third part can be a feature extraction network.
  • the feature extraction network can be a ResNet101 network, or alternatively a ResNet50 network, etc., which is not limited here.
  • the feature extraction network may include a deformable convolution layer. A deformable convolution adjusts its sampling positions according to spatial position information, which helps the network adapt to target cells of varying shapes.
  • the feature extraction network may further include a global information enhancement module.
  • Figure 4 is a state diagram of an image recognition method provided by an embodiment of the present application.
  • the first part of the detection sub-model can be used to classify the image features to obtain the image classification result of the pathological image to be identified. When the image classification result indicates that the pathological image contains the target cell (that is, when the result is positive), the second part of the detection sub-model performs region detection on the image features, and the resulting detection area containing the target cell is used for subsequent classification processing.
  • for details, refer to the relevant steps in this embodiment, which will not be repeated here.
  • Step S35 Use the classification sub-model in the recognition model to classify the detection area to obtain the target cell type.
  • FIG. 2 is a schematic diagram of a state of an image recognition method provided by an embodiment of the present application.
  • the pathological image to be recognized is a cervical pathological image
  • the pathological image to be recognized is subjected to target detection through the detection sub-model in the recognition model, and two detection areas containing target cells are obtained.
  • Step S36 The first part outputs the detection result prompt that the target cell is not included in the pathological image to be identified.
  • when the image classification result indicates that the pathological image to be identified does not contain target cells (that is, when the result is negative),
  • a prompt stating that the detection result is that the pathological image does not contain target cells can be output directly, which improves the operating efficiency of the model and thus the efficiency of target cell recognition in the image.
  • the image classification result of the pathological image to be recognized is obtained, and the image classification result is used to indicate whether the pathological image to be recognized contains target cells
  • the second part of the detection sub-model is then used to perform region detection on the pathological image to be identified to obtain the detection area containing the target cell, so that dynamic detection of the target cell can be achieved and the efficiency of target cell recognition improved.
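The gated flow of steps S32 to S36 can be sketched as follows. This is a minimal illustration, not the patented implementation; the three model callables (`classify_global`, `detect_regions`, `classify_region`) are hypothetical stand-ins for the first part, second part, and classification sub-model.

```python
def recognize(image, classify_global, detect_regions, classify_region):
    """Dynamic recognition sketch: region detection is only run when the
    global binary classification (first part) reports target cells,
    mirroring steps S32-S36. All three callables are placeholders."""
    if classify_global(image) == 0:  # "0": no target cells (step S36)
        return {"positive": False, "regions": [], "categories": []}
    regions = detect_regions(image)  # e.g. RPN-style proposals (step S34)
    # classify each detection area to obtain the cell category (step S35)
    categories = [classify_region(image, region) for region in regions]
    return {"positive": True, "regions": regions, "categories": categories}
```

Negative images exit after one cheap classification pass, which is the source of the claimed efficiency gain.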
  • FIG. 5 is a schematic flowchart of a training method for a recognition model provided by an embodiment of the present application.
  • the recognition model may specifically include a detection sub-model and a classification sub-model. The training may specifically include the following steps:
  • Step S51 Obtain a first sample image and a second sample image.
  • the actual area corresponding to the target cell is marked in the first sample image.
  • the actual area can be expressed by the center coordinates of a rectangle containing the target cell and the length and width of the rectangle. For example, (50, 60 ,10,20) represents a rectangle with a length of 10 and a width of 20, centered on the pixel point (50,60) in the first sample image.
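The center-based annotation in the example can be converted to corner coordinates when needed. A small sketch; the patent does not specify which of the two size values spans which axis, so the mapping of "length" to the x-axis below is an assumption.

```python
def center_to_corners(cx, cy, length, width):
    """Convert a (center_x, center_y, length, width) box annotation,
    e.g. (50, 60, 10, 20), to (x_min, y_min, x_max, y_max) corners.
    Assumes 'length' spans the x-axis and 'width' the y-axis."""
    return (cx - length / 2, cy - width / 2,
            cx + length / 2, cy + width / 2)
```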
  • the second sample image is marked with the actual category of the target cell.
  • the actual category of the target cell is used to indicate the degree of disease of the target cell.
  • the target cells may specifically include, but are not limited to, the following categories: high-grade squamous intraepithelial lesion (HSIL), low-grade squamous intraepithelial lesion (LSIL), atypical squamous cells of undetermined significance (ASC-US), and atypical squamous cells in which a high-grade lesion cannot be excluded (ASC-H).
  • the target cell may include any one of a single diseased cell or a diseased cell cluster, so that a single diseased cell or a diseased cell cluster can be identified.
  • the first sample image and the second sample image are pathological images, which may include, but are not limited to, cervical pathological images, liver pathological images, and kidney pathological images, for example.
  • the target cells may be squamous epithelial cells.
  • the pixel values in the first sample image and the second sample image may also be normalized, so as to improve the convergence speed of the model.
  • the first mean value and the first variance of the pixel values of all the first sample images may be counted first, and then the first mean value is subtracted from the pixel values in each first sample image and the result divided by the first variance, so as to normalize each first sample image; likewise, the second mean and second variance of the pixel values of all second sample images may be counted, and then the second mean is subtracted from the pixel values in each second sample image and the result divided by the second variance, so as to normalize each second sample image.
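The set-wide normalization described above can be sketched as below. Note the text divides by the variance; dividing by the standard deviation is the more common convention, but the description's wording is followed here.

```python
import numpy as np

def normalize_sample_set(images):
    """Normalize a set of equally-sized sample images with set-wide
    statistics: subtract the mean of all pixel values across the set,
    then divide by the set-wide variance (as stated in the text)."""
    stack = np.stack(images).astype(np.float64)
    mean = stack.mean()   # first/second mean over the whole sample set
    var = stack.var()     # first/second variance over the whole set
    return [(img - mean) / var for img in stack]
```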
  • Step S52 Use the detection sub-model to perform target detection on the first sample image to obtain the predicted area containing the target cell in the first sample image, and use the classification sub-model to perform the first classification process on the second sample image to obtain the target cell The forecast category.
  • the detection sub-model can adopt Faster RCNN.
  • the prediction area can be represented by the center coordinates of a rectangle and the length and width of the rectangle.
  • for example, (70,80,10,20) can be used to indicate a rectangle with a length of 10 and a width of 20, centered on the pixel point (70,80) in the first sample image.
  • the prediction area can also be represented by the center coordinates of a rectangle and the ratio of the length and width of the rectangle to the length and width of the preset rectangle.
  • a preset rectangle can be set with, for example, a length of 10 and a width of 20; then (70,80,1,1) can be used to represent a prediction area in the first sample image centered on (70,80) with a length of 10 and a width of 20.
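The preset-rectangle representation amounts to encoding box sizes as ratios to a reference size. A minimal sketch of exactly the example above; real detectors such as Faster RCNN typically use log-space size offsets instead, which is a deliberate simplification here.

```python
def encode_box(cx, cy, length, width, preset_length, preset_width):
    """Encode a box relative to a preset rectangle: keep the center
    coordinates, express the size as ratios to the preset size."""
    return (cx, cy, length / preset_length, width / preset_width)

def decode_box(cx, cy, length_ratio, width_ratio, preset_length, preset_width):
    """Inverse of encode_box: recover the absolute box size."""
    return (cx, cy, length_ratio * preset_length, width_ratio * preset_width)
```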
  • the classification sub-model can adopt the EfficientNet network model, and for details, please refer to the relevant steps in the foregoing embodiment, which will not be repeated here.
  • in order to improve the model's ability to distinguish positive and negative samples and to achieve dynamic prediction that improves operating efficiency, when the detection sub-model is used to perform target detection on the first sample image,
  • the first sample image can also first be subjected to a second classification process to obtain an image classification result indicating whether the first sample image contains target cells; if the result indicates that it does, region detection is performed on the first sample image to obtain the predicted area containing the target cell.
  • the detection sub-model may also include a first part and a second part.
  • the first part is configured to classify the first sample image to obtain an image classification result of whether the first sample image contains the target cell
  • the second part is configured to When the target cell is included in the first sample image, the region detection is performed on the first sample image to obtain the predicted region including the target cell.
  • the detection sub-model may also include a third part configured to perform feature extraction on the first sample image to obtain image features of the first sample image, so that the first part performs the second classification process on the image features to obtain the image classification result of the first sample image,
  • and the second part performs region detection on the image features to obtain the predicted region containing the target cell.
  • the first part may be a global classification network
  • the second part is an image detection network
  • the third part is a feature extraction network.
  • the feature extraction network includes at least one of a deformable convolutional layer and a global information enhancement module. You can refer to the relevant steps in the foregoing embodiment, which will not be repeated here.
  • Step S53 Determine the first loss value of the detection sub-model based on the actual region and the predicted region, and determine the second loss value of the classification sub-model based on the actual category and the predicted category.
  • a mean square error loss function, a cross entropy loss function, etc. may be used to determine the first loss value of the detection sub-model.
  • a cross-entropy loss function may be used to determine the second loss value of the classification sub-model, which will not be repeated here.
  • Step S54 Use the first loss value and the second loss value to correspondingly adjust the parameters of the detection sub-model and the classification sub-model.
  • gradient descent optimization methods such as stochastic gradient descent, exponential average weighting, and Adam can be used to adjust the parameters of the detection sub-model and the classification sub-model, which will not be repeated here.
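Of the optimizers listed, plain stochastic gradient descent is the simplest; a one-step sketch is shown below (exponentially weighted averaging and Adam add running moment estimates on top of this update). The learning rate value is illustrative only.

```python
def sgd_step(params, grads, lr=0.01):
    """One plain SGD parameter update: move each parameter against
    its gradient, scaled by the learning rate."""
    return [p - lr * g for p, g in zip(params, grads)]
```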
  • the first sample image and the second sample image can also be divided into multiple small batches, and a mini-batch training method is used to train the detection sub-model and the classification sub-model.
  • a training end condition can also be set, and when the training end condition is met, the training can be ended.
  • training end conditions may include, but are not limited to: the number of training iterations reaches a preset threshold (for example, 100 or 500 iterations); the first loss value and the second loss value are less than a preset loss threshold and no longer decrease; or the performance of the detection sub-model and the classification sub-model, as verified on a validation data set, no longer improves. The conditions are not limited here.
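The end-of-training check can be sketched as below. All threshold values (`max_iters`, `loss_threshold`, `patience`) are illustrative assumptions, not values from the description.

```python
def should_stop(iteration, losses, val_scores,
                max_iters=500, loss_threshold=1e-3, patience=5):
    """Return True when any of the listed end conditions holds:
    iteration budget reached, latest loss below a threshold, or
    validation performance not improving for `patience` checks."""
    if iteration >= max_iters:
        return True
    if losses and losses[-1] < loss_threshold:
        return True
    if (len(val_scores) > patience
            and max(val_scores[-patience:]) <= max(val_scores[:-patience])):
        return True  # no validation improvement over recent checks
    return False
```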
  • the target cells can be detected first and then classified, separating detection from classification; this mitigates the problem of unbalanced sample categories, which can further improve the accuracy of the trained model and in turn help improve the accuracy and efficiency of target cell recognition.
  • FIG. 6 is a schematic structural frame diagram of an image recognition device 60 provided by an embodiment of the present application.
  • the image recognition device 60 includes an image acquisition module 61, an image detection module 62, and an image classification module 63.
  • the image acquisition module 61 is configured to acquire the pathological image to be identified;
  • the image detection module 62 is configured to use the detection sub-model in the recognition model to perform target detection on the pathological image to be identified, obtaining a detection area in the image that contains the target cell;
  • the image classification module 63 is configured to perform a first classification process on the detection area by using the classification sub-model in the recognition model to obtain the target cell category.
  • the detection sub-model in the recognition model is used to perform target detection on the acquired pathological image to be recognized, so as to obtain the detection area containing the target cell; the classification sub-model in the recognition model then performs the first classification process on the detection area to obtain the target cell category. The target cell can thus be detected first and classified afterwards, separating detection from classification, so that target cells in the pathological image can be recognized accurately and efficiently.
  • the image detection module 62 includes a first partial sub-module configured to perform a second classification process on the pathological image to be recognized by using the first part of the detection sub-model to obtain the image classification result of the pathological image to be recognized, wherein, The image classification result is used to indicate whether the target cell is contained in the pathological image to be identified.
  • the image detection module 62 also includes a second partial sub-module configured to use the second part of the detection sub-model to perform region detection on the pathological image to be identified when the image classification result indicates that it contains the target cell, obtaining the detection area containing the target cell.
  • the first part of the detection sub-model performs the second classification process on the pathological image to be recognized, obtaining an image classification result that indicates whether the image contains target cells;
  • when the result indicates that target cells are present, the second part of the detection sub-model performs region detection on the image to obtain the detection area containing the target cell, so that dynamic detection of the target cell can be achieved and the efficiency of target cell recognition improved.
  • the image detection module 62 further includes a result prompting sub-module configured so that, when the image classification result indicates that the pathological image to be identified does not contain the target cell, the first part outputs a prompt that the detection result is that the image does not contain the target cell.
  • the image detection module 62 further includes a third part sub-module configured to perform feature extraction on the pathological image to be recognized by using the third part of the detection sub-model to obtain image features of the pathological image to be recognized.
  • the third part of the detection sub-model is used to extract features of the pathological image to be recognized to obtain its image features, so that feature extraction can be performed on the pathological image first and the detection sub-model can then carry out further processing on that basis,
  • which can help improve the operating efficiency of the model.
  • the first part of the sub-module is specifically configured to use the first part of the detection sub-model to perform a second classification process on image features to obtain an image classification result of the pathological image to be recognized.
  • the first part of the detection submodel is used to perform the second classification process on the image features extracted from the third part to obtain the image classification result of the pathological image to be recognized, which can improve the accuracy of the classification process.
  • the second part of the sub-module is specifically configured to use the second part of the detection sub-model to perform area detection on the image features to obtain the detection area containing the target cell.
  • the second part of the detection sub-model is used to perform region detection on image features to obtain a detection region containing target cells, which can help improve the accuracy of target cell recognition.
  • the first part is a global classification network
  • the second part is an image detection network
  • the third part is a feature extraction network.
  • the feature extraction network includes at least one of a deformable convolutional layer and a global information enhancement module.
  • including a deformable convolutional layer can improve the accuracy of recognizing polymorphic target cells, and including a global information enhancement module helps capture long-range, dependency-bearing features, which helps improve the accuracy of target cell recognition.
  • the image classification module 63 includes a feature extraction sub-module, configured to use the classification sub-model to perform feature extraction on the detection area of the pathological image to be identified to obtain image features of the detection area, and the image classification module 63 includes classification processing The sub-module is configured to perform a first classification process on the image features of the detection area to obtain the target cell category.
  • the image features of the detection area are obtained by performing feature extraction on the detection area of the pathological image to be recognized, and the first classification process is performed on those features to obtain the target cell category, which can help improve the efficiency of the classification process.
  • the target cell includes any one of a single diseased cell and a cluster of diseased cells, and the type of the target cell is used to indicate the degree of disease of the target cell.
  • because the target cell includes any one of a single diseased cell and a diseased cell cluster, both single diseased cells and diseased cell clusters can be identified; and because the type of the target cell indicates its degree of disease, lesion grading of the target cell can be achieved.
  • FIG. 7 is a schematic structural diagram of a training device 70 for a recognition model provided by an embodiment of the present application.
  • the recognition model includes a detection sub-model and a classification sub-model.
  • the training device 70 for the recognition model includes an image acquisition module 71, a model execution module 72, a loss determination module 73, and a parameter adjustment module 74.
  • the image acquisition module 71 is configured to acquire a first sample image and a second sample image, wherein the first sample image is marked with the actual area corresponding to the target cell and the second sample image is marked with the actual category of the target cell;
  • the model execution module 72 is configured to use the detection sub-model to perform target detection on the first sample image to obtain the predicted region containing the target cell, and to use the classification sub-model to perform the first classification process on the second sample image to obtain the predicted category of the target cell;
  • the loss determination module 73 is configured to determine the first loss value of the detection sub-model based on the actual region and the predicted region, and to determine the second loss value of the classification sub-model based on the actual category and the predicted category;
  • the parameter adjustment module 74 is configured to use the first loss value and the second loss value to correspondingly adjust the parameters of the detection sub-model and the classification sub-model.
  • during training, the target cell can be detected first and then classified, separating detection from classification; this solves the problem of unbalanced sample categories and helps improve the accuracy of the trained model, which in turn can help improve the accuracy and efficiency of target cell recognition.
  • the model execution module 72 includes an initial classification sub-module configured to perform a second classification process on the first sample image to obtain an image classification result indicating whether the first sample image contains target cells, and a region detection sub-module configured to perform region detection on the first sample image when the image classification result indicates that it contains target cells, obtaining the predicted area containing the target cell.
  • region detection is then performed on the first sample image to obtain the predicted region containing the target cell; this enhances the model's ability to distinguish positive and negative samples and reduces the probability of false detections, which helps improve the accuracy of the trained model and thus of target cell recognition.
  • the training device 70 for the recognition model further includes a data enhancement module configured to perform data enhancement on the first sample image and the second sample image.
  • data enhancement on the first sample image and the second sample image can improve the sample diversity, which is beneficial to avoid overfitting and improve the generalization performance of the model.
  • the training device 70 for the recognition model further includes a normalization processing module configured to perform normalization processing on the pixel values in the first sample image and the second sample image.
  • normalizing the pixel values in the first sample image and the second sample image can help improve the convergence speed of the model.
  • the target cell includes any one of a single diseased cell and a cluster of diseased cells, and the type of the target cell is used to indicate the degree of disease of the target cell.
  • because the target cell includes any one of a single diseased cell and a diseased cell cluster, both single diseased cells and diseased cell clusters can be identified; and because the type of the target cell indicates its degree of disease, lesion grading of the target cell can be achieved.
  • FIG. 8 is a schematic structural diagram of an electronic device 80 according to an embodiment of the present application.
  • the electronic device 80 includes a memory 81 and a processor 82 that are coupled to each other.
  • the processor 82 is configured to execute program instructions stored in the memory 81 to implement the steps of any of the above-mentioned image recognition method embodiments, or to implement any of the above-mentioned recognition models. Steps in the training method embodiment.
  • the electronic device 80 may include but is not limited to: a microcomputer and a server.
  • the electronic device 80 may also include mobile devices such as a notebook computer and a tablet computer, which are not limited herein.
  • the processor 82 is configured to control itself and the memory 81 to implement the steps of any one of the above-mentioned image recognition method embodiments, or to implement the steps of any one of the above-mentioned recognition model training method embodiments.
  • the processor 82 may also be referred to as a central processing unit (Central Processing Unit, CPU).
  • the processor 82 may be an integrated circuit chip with signal processing capability.
  • the processor 82 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the processor 82 may also be jointly implemented by multiple integrated circuit chips.
  • the above scheme can accurately and efficiently identify target cells in pathological images.
  • FIG. 9 is a schematic structural diagram of a computer-readable storage medium 90 provided by an embodiment of the application.
  • the computer-readable storage medium 90 stores program instructions 901 that can be executed by the processor.
  • the program instructions 901 are used to implement the steps of any of the above-mentioned image recognition method embodiments, or the steps of any of the above-mentioned recognition model training method embodiments.
  • the above scheme can accurately and efficiently identify target cells in pathological images.
  • an embodiment of the present application provides a computer program including computer-readable code; when the code runs in an electronic device, the processor in the electronic device executes any image recognition method or any recognition model training method provided in the embodiments of this application.
  • the disclosed method and device can be implemented in other ways.
  • the device implementation described above is only illustrative; for example, the division of modules or units is only a logical function division, and there may be other divisions in actual implementation. For example, units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of this embodiment.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • the medium includes a number of instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods in the various embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and other media that can store program code.
  • the embodiment of the application provides an image recognition method, a training method of a recognition model, and related devices and equipment.
  • the image recognition method includes: acquiring a pathological image to be recognized; using a detection sub-model in the recognition model to perform target detection on the pathological image to obtain a detection area containing the target cell; and using a classification sub-model in the recognition model to perform a first classification process on the detection area to obtain the target cell category.
  • the target cell in the pathological image can be accurately and efficiently recognized.


Abstract

Embodiments of the present application provide an image recognition method, a training method for a recognition model, and related apparatus and devices. The image recognition method includes: acquiring a pathological image to be recognized; performing target detection on the pathological image using a detection sub-model in the recognition model to obtain a detection area in the image that contains target cells; and performing a first classification process on the detection area using a classification sub-model in the recognition model to obtain the category of the target cells.

Description

Image recognition method, training method for a recognition model, and related apparatus and devices
Cross-reference to related applications
This application is based on and claims priority to Chinese patent application No. 202010121559.5, filed on February 26, 2020, the entire contents of which are incorporated herein by reference.
Technical field
This application relates to the field of artificial intelligence technology, and in particular to an image recognition method, a training method for a recognition model, and related apparatus and devices.
Background
With the development of artificial intelligence technologies such as neural networks and deep learning, training neural network models and using the trained models to meet related business needs in the medical field has gradually gained favor.
Among such business needs, given the severe domestic shortage of cytopathologists, using artificial intelligence to assist in recognizing pathological images and screening them for target cells such as diseased cells is of great significance under the current scarcity of cytopathology medical resources. In view of this, how to accurately and efficiently identify target cells in pathological images has become an urgent problem to be solved.
Summary
Embodiments of the present application provide an image recognition method, a training method for a recognition model, and related apparatus and devices.
An embodiment of the present application provides an image recognition method, including: acquiring a pathological image to be recognized; performing target detection on the pathological image using a detection sub-model in the recognition model to obtain a detection area containing target cells; and performing a first classification process on the detection area using a classification sub-model in the recognition model to obtain the category of the target cells.
Therefore, by performing target detection on the acquired pathological image with the detection sub-model in the recognition model to obtain the detection area containing the target cells, and then performing the first classification process on the detection area with the classification sub-model to obtain the category of the target cells, the target cells can first be detected and then classified, separating detection from classification, so that target cells in pathological images can be recognized accurately and efficiently.
In some embodiments of the present application, performing target detection on the pathological image using the detection sub-model to obtain the detection area containing target cells includes: performing a second classification process on the pathological image using a first part of the detection sub-model to obtain an image classification result, where the result indicates whether the image contains target cells; and, if the result indicates that it does, performing region detection on the pathological image using a second part of the detection sub-model to obtain the detection area containing the target cells.
Therefore, the first part of the detection sub-model performs the second classification process on the pathological image to obtain the image classification result, which indicates whether the image contains target cells; only when the result indicates that target cells are present is the second part of the detection sub-model used to perform region detection and obtain the detection area containing the target cells, so dynamic detection of target cells can be achieved and the efficiency of target cell recognition improved.
In some embodiments of the present application, after the first part of the detection sub-model performs the second classification process on the pathological image to obtain the image classification result, the method further includes: if the image classification result indicates that the pathological image does not contain target cells, the first part outputs a prompt that the detection result is that the image does not contain target cells.
Therefore, when the image classification result indicates that the pathological image does not contain target cells, the first part outputs a prompt that the detection result is that the image does not contain target cells, so dynamic detection of target cells can be achieved and the efficiency of target cell recognition improved.
In some embodiments of the present application, performing target detection on the pathological image using the detection sub-model to obtain the detection area containing target cells further includes: performing feature extraction on the pathological image using a third part of the detection sub-model to obtain image features of the pathological image.
Therefore, the third part of the detection sub-model performs feature extraction on the pathological image to obtain its image features, so that feature extraction can be carried out first and the detection sub-model can then perform further processing on that basis, which helps improve the operating efficiency of the model.
In some embodiments of the present application, performing the second classification process on the pathological image using the first part of the detection sub-model to obtain the image classification result includes: performing the second classification process on the image features using the first part of the detection sub-model to obtain the image classification result of the pathological image.
Therefore, using the first part of the detection sub-model to perform the second classification process on the image features extracted by the third part to obtain the image classification result can improve the accuracy of the classification process.
In some embodiments of the present application, performing region detection on the pathological image using the second part of the detection sub-model to obtain the detection area containing target cells includes: performing region detection on the image features using the second part of the detection sub-model to obtain the detection area containing the target cells.
Therefore, using the second part of the detection sub-model to perform region detection on the image features to obtain the detection area containing the target cells can help improve the accuracy of target cell recognition.
In some embodiments of the present application, the first part is a global classification network, the second part is an image detection network, and the third part is a feature extraction network, where the feature extraction network includes at least one of a deformable convolutional layer and a global information enhancement module.
Therefore, configuring the feature extraction network to include a deformable convolutional layer can improve the accuracy of recognizing polymorphic target cells, and configuring it to include a global information enhancement module helps obtain long-range, dependency-bearing features, which helps improve the accuracy of target cell recognition.
在本申请的一些实施例中,利用识别模型中的分类子模型对检测区域进行第一分类处理,得到目标细胞的类别,包括:利用分类子模型对待识别病理图像的检测区域进行特征提取,得到检测区域的图像特征;对检测区域的图像特征进行第一分类处理,得到目标细胞的类别。
因此,通过对待识别病理图像的检测区域进行特征提取,得到检测区域的图像特征,并对检测区域的图像特征进行第一分类处理,得到目标细胞的类别,能够有利于提高分类处理的效率。
在本申请的一些实施例中,目标细胞包括单个病变细胞、病变细胞团簇中的任一者,目标细胞的类别用于表示目标细胞的病变程度。
因此,目标细胞包括单个病变细胞、病变细胞团簇中的任一者,能够有利于识别单个病变细胞和病变细胞团簇,且目标细胞的类别用于表示目标细胞的病变程度,有利于实现目标细胞的病变分级。
本申请实施例提供一种识别模型的训练方法,识别模型包括检测子模型和分类子模型,训练方法包括:获取第一样本图像和第二样本图像,其中,第一样本图像中标注有与目标细胞对应的实际区域,第二样本图像中标注有目标细胞的实际类别;利用检测子模型对第一样本图像进行目标检测,得到第一样本图像中包含目标细胞的预测区域,并利用分类子模型对第二样本图像进行第一分类处理,得到目标细胞的预测类别;基于实际区域与预测区域,确定检测子模型的第一损失值,并基于实际类别与预测类别,确定分类子模型的第二损失值;利用第一损失值和第二损失值,对应调整检测子模型和分类子模型的参数。
因此,在训练过程中,能够先进行目标细胞的检测,再进行目标细胞的分类,将检测与分类分离,从而能够解决样本数据类别不平衡的问题,进而能够有利于提高训练得到的模型的准确性,从而能够有利于提高目标细胞识别的准确性和效率。
在本申请的一些实施例中,利用检测子模型对第一样本图像进行目标检测,得到第一样本图像中包含目标细胞的预测区域包括:对第一样本图像进行第二分类处理,得到第一样本图像的图像分类结果,其中,图像分类结果用于表示第一样本图像中是否包含目标细胞;若图像分类结果表示第一样本图像中包含目标细胞,则对第一样本图像进行区域检测,得到包含目标细胞的预测区域。
因此,在训练过程中,当图像分类结果表示第一样本图像中包含目标细胞时,再对第一样本图像进行区域检测,得到包含目标细胞的预测区域,能够增强模型识别正负样本的能力,降低误检概率,有利于提高训练得到的模型的准确性,从而能够有利于提高目标细胞识别的准确性。
在本申请的一些实施例中,在利用检测子模型对第一样本图像进行目标检测,得到第一样本图像中包含目标细胞的预测区域,并利用分类子模型对第二样本图像进行第一分类处理,得到目标细胞的预测类别之前,方法还包括:对第一样本图像和第二样本图像进行数据增强;和/或,将第一样本图像和第二样本图像中的像素值进行归一化处理;目标细胞包括单个病变细胞、病变细胞团簇中的任一者,目标细胞的类别用于表示目标细胞的病变程度。
因此,通过对第一样本图像和第二样本图像进行数据增强能够提高样本多样性,有利于避免过拟合,提高模型的泛化性能;通过将第一样本图像和第二样本图像中的像素值进行归一化处理,能够有利于提高模型的收敛速度;目标细胞包括单个病变细胞、病变细胞团簇中的任一者,能够有利于识别单个病变细胞和病变细胞团簇,且目标细胞的类别用于表示目标细胞的病变程度,有利于实现目标细胞的病变分级。
本申请实施例提供一种图像识别装置,包括:图像获取模块、图像检测模块和图像分类模块,图像获取模块配置为获取待识别病理图像;图像检测模块配置为采用识别模型中的检测子模型对待识别病理图像进行目标检测,得到待识别病理图像中包含目标细胞的检测区域;图像分类模块配置为利用识别模型中的分类子模型对检测区域进行第一分类处理,得到目标细胞的类别。
本申请实施例提供一种识别模型的训练装置,识别模型包括检测子模型和分类子模型,识别模型的训练装置包括:图像获取模块、模型执行模块、损失确定模块、参数调整模块,图像获取模块配置为获取第一样本图像和第二样本图像,其中,第一样本图像中标注有与目标细胞对应的实际区域,第二样本图像中标注有目标细胞的实际类别;模型执行模块配置为利用检测子模型对第一样本图像进行目标检测,得到第一样本图像中包含目标细胞的预测区域,并利用分类子模型对第二样本图像进行第一分类处理,得到目标细胞的预测类别;损失确定模块配置为基于实际区域与预测区域,确定检测子模型的第一损失值,并基于实际类别与预测类别,确定分类子模型的第二损失值;参数调整模块配置为利用第一损失值和第二损失值,对应调整检测子模型和分类子模型的参数。
本申请实施例提供一种电子设备,包括相互耦接的存储器和处理器,处理器配置为执行存储器中存储的程序指令,以实现上述一个或多个实施例中的图像识别方法,或实现上述一个或多个实施例中的识别模型的训练方法。
本申请实施例提供一种计算机可读存储介质,其上存储有程序指令,程序指令被处理器执行时实现上述一个或多个实施例中的图像识别方法,或实现上述一个或多个实施例中的识别模型的训练方法。
本申请实施例提供一种计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现上述一个或多个实施例中的图像识别方法,或上述一个或多个实施例中的识别模型的训练方法。
上述方案,通过采用识别模型中的检测子模型对获取到的待识别病理图像进行目标检测,从而得到待识别病理图像中包含目标细胞的检测区域,再利用识别模型中的分类子模型对检测区域进行第一分类处理,得到目标细胞的类别,进而能够先进行目标细胞的检测,再进行目标细胞的分类,将检测与分类分离,从而能够准确、高效地识别病理图像中的目标细胞。
附图说明
图1是本申请实施例提供的一种图像识别方法的流程示意图;
图2是本申请实施例提供的一种图像识别方法的状态示意图;
图3是本申请实施例提供的一种图像识别方法的流程示意图;
图4是本申请实施例提供的一种图像识别方法的状态示意图;
图5是本申请实施例提供的一种识别模型的训练方法的流程示意图;
图6是本申请实施例提供的一种图像识别装置的结构示意图;
图7是本申请实施例提供的一种识别模型的训练装置的结构框架示意图;
图8是本申请实施例提供的一种电子设备的结构框架示意图;
图9是本申请实施例提供的一种计算机可读存储介质的结构框架示意图。
具体实施方式
下面结合说明书附图,对本申请实施例的方案进行详细说明。
以下描述中,为了说明而不是为了限定,提出了诸如特定系统结构、接口、技术之类的具体细节,以便透彻理解本申请实施例。
本文中,术语“系统”和“网络”常可互换使用。本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。此外,本文中的“多个”表示两个或者多于两个。
请参阅图1,图1是本申请实施例提供的一种图像识别方法的流程示意图。具体而言,可以包括如下步骤:
步骤S11:获取待识别病理图像。
待识别病理图像可以包括但不限于:宫颈病理图像、肝脏病理图像、肾脏病理图像,在此不做限定。
步骤S12:采用识别模型中的检测子模型对待识别病理图像进行目标检测,得到待识别病理图像中包含目标细胞的检测区域。
识别模型包括检测子模型,在一个具体的实施场景中,检测子模型可以采用Faster RCNN(Region with Convolutional Neural Networks)网络模型。在另一个具体的实施场景中,检测子模型还可以采用Fast RCNN、YOLO(You Only Look Once)等等,在此不做限定。
利用检测子模型对待识别病理图像进行检测,得到待识别病理图像中包含目标细胞的检测区域,例如,对宫颈病理图像进行检测,得到宫颈病理图像中包含鳞状上皮细胞的检测区域;或者,对肝脏病理图像进行检测,得到肝脏病理图像中包含病变细胞的检测区域,当待识别病理图像为其他图像时,可以以此类推,在此不再一一举例。在一个实施场景中,检测区域具体可以采用一包含目标细胞的矩形的中心坐标以及矩形的长宽表示,例如,可以采用(50,60,10,20)表示一位于待识别病理图像中以像素坐标(50,60)为中心,长为10且宽为20的矩形,此外,还可以以一包含目标细胞的矩形的中心坐标以及矩形的长宽分别与一预设矩形的长宽的比值进行表示,例如,预设矩形可以为一个长为10且宽为20的矩形,则可以采用(50,60,1,1)表示一位于待识别病理图像中以像素坐标(50,60)为中心,长为10且宽为20的矩形,在此不做限定。
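上述两种检测区域的表示方式(绝对长宽,以及相对预设矩形长宽的比值)可以用如下示意代码相互转换,其中函数名与坐标约定均为说明用的假设,并非本申请的实现:

```python
def box_to_ratio(box, ref_w, ref_h):
    """将 (cx, cy, w, h) 的绝对表示转换为相对预设矩形长宽的比值表示。"""
    cx, cy, w, h = box
    return (cx, cy, w / ref_w, h / ref_h)

def ratio_to_box(ratio_box, ref_w, ref_h):
    """将比值表示还原为绝对长宽表示。"""
    cx, cy, rw, rh = ratio_box
    return (cx, cy, rw * ref_w, rh * ref_h)

# 文中示例:预设矩形长 10、宽 20,则 (50, 60, 10, 20) 对应 (50, 60, 1, 1)
assert box_to_ratio((50, 60, 10, 20), 10, 20) == (50, 60, 1.0, 1.0)
assert ratio_to_box((50, 60, 1, 1), 10, 20) == (50, 60, 10, 20)
```

两种表示携带的信息等价,比值表示便于在不同尺寸的图像之间复用同一套预设矩形(anchor)。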
在本申请的一些实施例中,待识别病理图像还可能为一不包含目标细胞的图像,此时采用识别模型中的检测子模型对待识别病理图像进行目标检测,由于未得到检测区域,可以输出待识别病理图像不包含目标细胞的提示,从而免去后续分类处理的步骤,提高模型运行效率。例如,可以直接输出宫颈病理图像不包含鳞状上皮细胞的提示,其他病理图像可以以此类推,在此不再一一举例。
在本申请的一些实施例中,请结合参阅图2,图2是本申请实施例提供的一种图像识别方法的状态示意图。如图2所示,待识别病理图像为宫颈病理图像,待识别病理图像通过识别模型中的检测子模型进行目标检测,得到包含目标细胞的两个检测区域。
步骤S13:利用识别模型中的分类子模型对检测区域进行第一分类处理,得到目标细胞的类别。
识别模型还可以包括分类子模型,在一个具体的实施场景中,分类子模型可以采用EfficientNet网络模型。在另一个具体的实施场景中,分类子模型还可以采用ResNet、MobileNet等等,在此不做限定。
利用识别模型中的分类子模型对检测区域进行分类处理,能够得到目标细胞的类别,具体地,为了提高分类效率,可以利用分类子模型对待识别病理图像的检测区域进行特征提取,得到检测区域的图像特征,从而对检测区域的图像特征进行第一分类处理,得到目标细胞的类别。例如,可以对检测区域的图像特征进行池化处理、全连接处理,从而得到目标细胞的类别,在此不再赘述。
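对检测区域的图像特征先做池化处理、再做全连接处理以得到类别,可以用如下 numpy 示意代码表达(网络结构与权重均为说明用的假设,并非分类子模型的真实实现):

```python
import numpy as np

def classify_region(feature_map, weights, bias):
    """对检测区域的特征图做全局平均池化后接全连接层,输出预测类别。
    feature_map: (C, H, W);weights: (num_classes, C);bias: (num_classes,)"""
    pooled = feature_map.mean(axis=(1, 2))   # 池化处理:得到 (C,) 的特征向量
    scores = weights @ pooled + bias         # 全连接处理:得到各类别得分
    return int(np.argmax(scores))            # 得分最高者即预测类别

# 假设的 4 个类别编号:0=HSIL, 1=LSIL, 2=ASC-US, 3=ASC-H
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
w, b = rng.standard_normal((4, 8)), np.zeros(4)
pred = classify_region(feat, w, b)
assert 0 <= pred < 4
```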
在本申请的一些实施例中,为了实现对目标细胞进行病变分级,目标细胞的类别可以表示目标细胞的病变程度。以待识别病变图像为宫颈病理图像为例,目标细胞具体可以包括但不限于如下类别:高度鳞状细胞上皮内瘤变(High-grade Squamous Intraepithelial Lesion,HSIL)、轻度鳞状细胞上皮内瘤变(Low-grade Squamous Intraepithelial Lesion,LSIL)、意义未明的非典型鳞状细胞(Atypical Squamous Cells of Undetermined Significance,ASC-US)、不能排除高度上皮内瘤变的非典型鳞状细胞(Atypical Squamous Cells-cannot exclude HSIL,ASC-H)。当待识别病理图像为其他病理图像时,可以以此类推,在此不再一一举例。在一个实施场景中,目标细胞可以包括单个病变细胞、病变细胞团簇中的任一者,从而能够实现对单个病变细胞或病变细胞团簇进行识别。
在本申请的一些实施例中,请继续结合参阅图2,分类子模型分别对检测子模型检测得到的两个检测区域进行分类处理,得到两个检测区域中所包含的目标细胞的类别:其中一个检测区域中的目标细胞为高度鳞状细胞上皮内瘤变(HSIL),另一个检测区域中的目标细胞为不能排除高度上皮内瘤变的非典型鳞状细胞(ASC-H)。
在本申请的一些实施例中,分类子模型还可以对检测区域进行第一分类处理,得到目标细胞的类别及其置信度,其中,置信度表示目标细胞的真实类别为模型预测得到的类别的可信度,置信度越高,可信度越高。请继续结合参阅图2,分类子模型分别对检测区域进行分类处理,得到目标细胞的类别及其置信度,其中一个检测区域中的目标细胞为高度鳞状细胞上皮内瘤变(HSIL),且其置信度为0.97(即97%的可信度),另一个检测区域中的目标细胞为不能排除高度上皮内瘤变的非典型鳞状细胞(ASC-H),且其置信度为0.98(即98%的可信度)。
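类别置信度通常可以由分类得分经 softmax 归一化得到,下面的纯 Python 代码是这一常见做法的示意(本申请并未限定置信度的具体计算方式,此处仅为假设):

```python
import math

def softmax_confidence(scores):
    """将各类别得分归一化为概率,最大概率即预测类别的置信度。"""
    m = max(scores)                          # 减去最大值,保证数值稳定
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return best, probs[best]

# 得分差距越大,预测类别的置信度越高
idx, conf = softmax_confidence([8.0, 1.0, 0.5, 0.2])
assert idx == 0 and conf > 0.9
```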
上述方案,通过采用识别模型中的检测子模型对获取到的待识别病理图像进行目标检测,从而得到待识别病理图像中包含目标细胞的检测区域,再利用识别模型中的分类子模型对检测区域进行第一分类处理,得到目标细胞的类别,进而能够先进行目标细胞的检测,再进行目标细胞的分类,将检测与分类分离,从而能够准确、高效地识别病理图像中的目标细胞。
请参阅图3,图3是本申请实施例提供的一种图像识别方法的流程示意图。具体而言,可以包括如下步骤:
步骤S31:获取待识别病理图像。
具体请参阅前述实施例中的相关步骤。
步骤S32:利用检测子模型的第一部分对待识别病理图像进行分类处理,得到待识别病理图像的图像分类结果。
其中,图像分类结果用于表示待识别病理图像中是否包含目标细胞,具体地,可以采用“0”表示待识别病理图像中不包含目标细胞,采用“1”表示待识别病理图像中包含目标细胞,在此不做限定。
在本申请的一些实施例中,检测子模型的第一部分为全局分类网络,全局分类网络为一包括神经元的神经网络模型,不同于前述实施例中的分类子模型,全局分类网络用于对待识别病理图像进行二分类处理,得到待识别病理图像是否包含目标细胞的图像分类结果。在一个具体的实施场景中,为了与分类子模型的分类处理加以区别,检测子模型的第一部分的分类处理可以称为第二分类处理,在此不做限定。
步骤S33:判断图像分类结果是否表示待识别病理图像中包含目标细胞,若是,则执行步骤S34,否则执行S36。
通过图像分类结果,判断待识别病理图像中是否包含目标细胞,若包含目标细胞,则可以对待识别病理图像进行下一步处理,反之,则不需要对其进行下一步处理,从而将是否包含目标细胞的分类处理与具体检测目标细胞的检测区域进行分离,从而能够进一步提高模型的运行效率,进而提高图像中目标细胞识别的效率。
步骤S34:利用检测子模型的第二部分对待识别病理图像进行区域检测,得到包含目标细胞的检测区域。
在本申请的一些实施例中,检测子模型的第二部分为图像检测网络,图像检测网络为一包括神经元的神经网络模型,以检测子模型采用Faster RCNN为例,第二部分可以为RPN(Region Proposal Networks)网络,当检测子模型为其他网络模型时,可以以此类推,在此不再一一举例。
在本申请的一些实施例中,为了提高目标细胞识别的准确性,还可以利用检测子模型的第三部分对待识别病理图像进行特征提取,得到待识别病理图像的图像特征,具体地,第三部分可以为特征提取网络,在本申请的一些实施例中,特征提取网络可以是ResNet101网络,或者,特征提取网络还可以是ResNet50网络等,在此不做限定。在本申请的一些实施例中,为了提高对多形态的目标细胞进行识别的准确性,特征提取网络可以包括可变形卷积层(deformable convolution),可变形卷积基于对空间采样的位置信息,作进一步位移调整,以实现对不同形态细胞的特征提取。在本申请的一些实施例中,为了获取长距离的、具有依赖关系的特征,从而提高目标细胞识别的准确性,特征提取网络还可以包括全局信息增强模块。请结合参阅图4,图4是本申请实施例提供的一种图像识别方法的状态示意图,在对待识别病理图像进行特征提取之后,可以采用检测子模型的第一部分对图像特征进行分类处理,得到待识别病理图像的图像分类结果,并在图像分类结果表示待识别病理图像中包含目标细胞时(即图像分类结果为阳性时),采用检测子模型的第二部分对图像特征进行区域检测,得到包含目标细胞的检测区域,以进行后续的分类处理,具体可以参考本实施例中的相关步骤,在此不再赘述。
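可变形卷积在规则采样位置上叠加学习到的位移,并在非整数位置用双线性插值取值。下面的 numpy 代码仅示意单点采样的核心思想(函数名为说明用的假设,完整的可变形卷积还包含对每个卷积核位置、每个通道的偏移预测):

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """在特征图 feat (H, W) 的非整数位置 (y, x) 做双线性插值采样。"""
    h, w = feat.shape
    y = min(max(y, 0.0), h - 1.0)
    x = min(max(x, 0.0), w - 1.0)
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * feat[y0, x0] + (1 - dy) * dx * feat[y0, x1]
            + dy * (1 - dx) * feat[y1, x0] + dy * dx * feat[y1, x1])

def deformable_point(feat, y, x, offset):
    """可变形卷积的核心思想:在规则采样位置 (y, x) 上叠加学习到的位移 (dy, dx)。"""
    dy, dx = offset
    return bilinear_sample(feat, y + dy, x + dx)

feat = np.arange(16, dtype=float).reshape(4, 4)
# 位移为 0 时退化为普通的规则采样
assert deformable_point(feat, 1, 1, (0.0, 0.0)) == feat[1, 1]
# 半像素位移得到相邻像素的插值
assert deformable_point(feat, 1, 1, (0.0, 0.5)) == (feat[1, 1] + feat[1, 2]) / 2
```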
步骤S35:利用识别模型中的分类子模型对检测区域进行分类处理,得到目标细胞的类别。
具体请参阅前述实施例中的相关步骤。
步骤S36:第一部分输出待识别病理图像中不包含目标细胞的检测结果提示。
当图像分类结果表示待识别病理图像中不包含目标细胞时(即图像分类结果为阴性时),则可以无需进行下一步处理,从而可以直接输出待识别病理图像中不包含目标细胞的检测结果提示(即结果为阴性的提示),以提高模型的运行效率,从而提高图像中目标细胞识别的效率。
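步骤S32至S36描述的动态检测流程可以概括为如下示意代码:先做全局二分类,阴性时直接输出提示,阳性时再做区域检测与分类(其中各函数接口均为说明用的假设):

```python
def recognize(image, extract, global_cls, detect, classify):
    """两阶段动态检测流程的示意:第三部分提取特征,第一部分做全局二分类,
    阴性直接返回提示,阳性再由第二部分检测区域并由分类子模型分级。"""
    feats = extract(image)               # 第三部分:特征提取
    if not global_cls(feats):            # 第一部分:是否包含目标细胞
        return "阴性:不包含目标细胞"    # 免去后续检测与分类,提高运行效率
    regions = detect(feats)              # 第二部分:区域检测
    return [(r, classify(image, r)) for r in regions]

# 用桩函数演示阴性、阳性两条分支
neg = recognize("img", lambda x: x, lambda f: False, lambda f: [], lambda i, r: None)
assert neg == "阴性:不包含目标细胞"
pos = recognize("img", lambda x: x, lambda f: True,
                lambda f: [(50, 60, 10, 20)], lambda i, r: "HSIL")
assert pos == [((50, 60, 10, 20), "HSIL")]
```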
区别于前述实施例,通过检测子模型的第一部分对待识别病理图像进行第二分类处理,得到待识别病理图像的图像分类结果,且图像分类结果用于表示待识别病理图像中是否包含目标细胞,当图像分类结果表示待识别病理图像中包含目标细胞时,再利用检测子模型的第二部分对待识别病理图像进行区域检测,得到包含目标细胞的检测区域,故能够实现目标细胞的动态检测,提高目标细胞识别的效率。
请参阅图5,图5是本申请实施例提供的一种识别模型的训练方法的流程示意图,本申请实施例中,识别模型具体可以包括检测子模型和分类子模型,具体而言可以包括如下步骤:
步骤S51:获取第一样本图像和第二样本图像。
本申请实施例中,第一样本图像中标注有与目标细胞对应的实际区域,实际区域可以采用一包含目标细胞的矩形的中心坐标以及矩形的长宽表示,例如,可以采用(50,60,10,20)表示一位于第一样本图像中以像素点(50,60)为中心,长为10且宽为20的矩形。第二样本图像中标注有目标细胞的实际类别,在本申请的一些实施例中,目标细胞的实际类别用于表示目标细胞的病变程度。以第二样本图像为宫颈病理图像为例,目标细胞具体可以包括但不限于如下类别:高度鳞状细胞上皮内瘤变(HSIL)、轻度鳞状细胞上皮内瘤变(LSIL)、意义未明的非典型鳞状细胞(ASC-US)、不能排除高度上皮内瘤变的非典型鳞状细胞(ASC-H)。当待识别病理图像为其他病理图像时,可以以此类推,在此不再一一举例。在本申请的一些实施例中,目标细胞可以包括单个病变细胞、病变细胞团簇中的任一者,从而能够实现对单个病变细胞或病变细胞团簇进行识别。
在本申请的一些实施例中,第一样本图像和第二样本图像为病理图像,例如可以包括但不限于:宫颈病理图像、肝脏病理图像、肾脏病理图像。以第一样本图像和第二样本图像为宫颈病理图像为例,目标细胞可以为鳞状上皮细胞。当第一样本图像和第二样本图像为其他病理图像时,可以以此类推,在此不再一一举例。
在本申请的一些实施例中,还可以对获取到的第一样本图像和第二样本图像进行数据增强,从而提高样本多样性,有利于避免过拟合,提高模型的泛化性能。在一个具体的实施场景中,可以采用包括但不限于如下操作进行数据增强:随机切割、随机旋转、随机翻转、颜色扰动、伽马校正、高斯噪声。
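上述几种数据增强操作可以用如下 numpy 代码示意(翻转概率、旋转角度、伽马与噪声的参数均为示例取值,并非本申请限定):

```python
import numpy as np

def augment(img, rng):
    """文中列举的部分数据增强操作的简化示意:
    随机翻转、随机旋转、伽马校正与高斯噪声。"""
    if rng.random() < 0.5:
        img = img[:, ::-1]                        # 随机水平翻转
    img = np.rot90(img, k=rng.integers(0, 4))     # 随机旋转 90° 的整数倍
    gamma = rng.uniform(0.8, 1.2)                 # 伽马校正(示例范围)
    img = np.clip(img, 0.0, 1.0) ** gamma
    img = img + rng.normal(0.0, 0.01, img.shape)  # 叠加高斯噪声
    return img

rng = np.random.default_rng(0)
img = rng.random((32, 32))
out = augment(img, rng)
assert out.shape == (32, 32)  # 方形图像旋转 90° 后形状不变
```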
在本申请的一些实施例中,还可以将第一样本图像和第二样本图像中的像素值进行归一化处理,从而提高模型的收敛速度。在本申请的一些实施例中,可以先统计所有第一样本图像像素值的第一均值和第一方差,再利用每个第一样本图像中的像素值减去第一均值,再除以第一方差,从而对每一第一样本图像进行归一化处理;并可以统计所有第二样本图像的像素值的第二均值和第二方差,再利用每个第二样本图像的像素值减去第二均值,再除以第二方差,从而对每一第二样本图像进行归一化处理。
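按照文中描述的“减去均值、再除以方差”的归一化过程,可以写成如下示意代码(实践中也常除以标准差,此处按文中表述实现):

```python
import numpy as np

def normalize_set(images):
    """统计全部样本像素值的均值与方差,
    再对每幅图像减均值、除方差,完成归一化。"""
    pixels = np.concatenate([im.ravel() for im in images])
    mean, var = pixels.mean(), pixels.var()
    return [(im - mean) / var for im in images], mean, var

imgs = [np.full((2, 2), 3.0), np.full((2, 2), 5.0)]
normed, mean, var = normalize_set(imgs)
assert mean == 4.0 and var == 1.0
assert normed[0][0, 0] == -1.0 and normed[1][0, 0] == 1.0
```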
步骤S52:利用检测子模型对第一样本图像进行目标检测,得到第一样本图像中包含目标细胞的预测区域,并利用分类子模型对第二样本图像进行第一分类处理,得到目标细胞的预测类别。
检测子模型可以采用Faster RCNN,具体可以参考前述实施例中的相关步骤,在此不再赘述。预测区域可以采用一矩形的中心坐标以及矩形的长宽表示,例如,可以采用(70,80,10,20)表示一位于第一样本图像中以像素点(70,80)为中心,长为10且宽为20的预测区域,预测区域还可以采用一矩形的中心坐标以及矩形的长宽分别与预设矩形的长宽的比值表示,例如,可以设置一预设矩形,预设矩形的长度为10且宽度为20,则可以采用(70,80,1,1)表示一位于第一样本图像中以(70,80)为图像中心,长为10且宽为20的预测区域。分类子模型可以采用EfficientNet网络模型,具体可以参考前述实施例中的相关步骤,在此不再赘述。
在本申请的一些实施例中,为了提高模型识别正负样本的能力,并实现动态预测,以提高模型运行效率,在利用检测子模型对第一样本图像进行目标检测,得到第一样本图像中包含目标细胞的预测区域过程中,还可以对第一样本图像进行第二分类处理,得到第一样本图像的图像分类结果,其中,图像分类结果用于表示第一样本图像中是否包含目标细胞,若图像分类结果表示第一样本图像中包含目标细胞,则对第一样本图像进行区域检测,得到包含目标细胞的预测区域,具体可以参考前述实施例中的相关步骤,在此不再赘述。此外,检测子模型还可以包括第一部分和第二部分,第一部分配置为对第一样本图像进行分类处理,得到第一样本图像是否包含目标细胞的图像分类结果,第二部分配置为当第一样本图像中包含目标细胞时,对第一样本图像进行区域检测,得到包含目标细胞的预测区域,具体可以参考前述实施例中的相关步骤,在此不再赘述。此外,检测子模型还可以包括第三部分,配置为对第一样本图像进行特征提取,得到第一样本图像的图像特征,从而第一部分对图像特征进行第二分类处理,得到第一样本图像的图像分类结果,第二部分对图像特征进行区域检测,得到包含目标细胞的预测区域。具体地,第一部分可以为全局分类网络,第二部分为图像检测网络,第三部分为特征提取网络,其中,特征提取网络包括可变形卷积层、全局信息增强模块中的至少一者,具体可以参考前述实施例中的相关步骤,在此不再赘述。
步骤S53:基于实际区域与预测区域,确定检测子模型的第一损失值,并基于实际类别与预测类别,确定分类子模型的第二损失值。
在本申请的一些实施例中,可以采用均方误差损失函数、交叉熵损失函数等确定检测子模型的第一损失值。在本申请的一些实施例中,可以采用交叉熵损失函数确定分类子模型的第二损失值,在此不再赘述。
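第一损失值与第二损失值的计算可以用如下示意代码表达,分别对应均方误差损失与交叉熵损失(输入格式为说明用的假设):

```python
import math

def mse_loss(pred_box, true_box):
    """检测子模型第一损失值的示意:预测区域与实际区域的均方误差。"""
    return sum((p - t) ** 2 for p, t in zip(pred_box, true_box)) / len(pred_box)

def cross_entropy_loss(probs, label):
    """分类子模型第二损失值的示意:实际类别的负对数似然(交叉熵)。"""
    return -math.log(probs[label])

# 预测与标注完全一致时,两个损失均为 0
assert mse_loss((70, 80, 10, 20), (70, 80, 10, 20)) == 0.0
assert cross_entropy_loss([1.0, 0.0, 0.0, 0.0], 0) == 0.0
# 预测越不确定,交叉熵损失越大
assert cross_entropy_loss([0.5, 0.5], 0) > 0.0
```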
步骤S54:利用第一损失值和第二损失值,对应调整检测子模型和分类子模型的参数。
具体地,可以采用随机梯度下降、指数平均加权、Adam等梯度下降优化方法,对检测子模型和分类子模型的参数进行调整,在此不再赘述。
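以随机梯度下降为例,利用损失的梯度调整参数的更新规则可以示意如下(学习率取值仅为示例):

```python
def sgd_step(params, grads, lr=0.01):
    """随机梯度下降的参数更新示意:沿负梯度方向调整检测/分类子模型的参数。"""
    return [p - lr * g for p, g in zip(params, grads)]

params = [1.0, -2.0]
grads = [10.0, -10.0]
# 学习率 0.1 时,参数各向梯度反方向移动 0.1 * |grad|
assert sgd_step(params, grads, lr=0.1) == [0.0, -1.0]
```

指数加权平均(动量)与 Adam 等方法在此基础上对梯度或学习率再做自适应修正,核心仍是上述更新形式。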
此外,还可以将第一样本图像和第二样本图像分为多个小批次(batch),并采用小批次(mini-batch)的训练方式对检测子模型和分类子模型进行训练。在本申请的一些实施例中,还可以设置一训练结束条件,当满足训练结束条件时,可以结束训练。具体地,训练结束条件可以包括但不限于:训练的迭代次数大于或等于预设阈值(例如,100次、500次等);第一损失值和第二损失值小于一预设损失阈值,且不再减小;分别利用一验证数据集对检测子模型和分类子模型进行验证所得到的模型性能不再提高,在此不做限定。
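小批次训练与训练结束条件(迭代次数达到上限,或损失低于阈值)可以用如下示意代码概括(其中 step 为假设的单步训练函数,返回该批次的损失值):

```python
def train(samples, step, batch_size=2, max_iters=100, loss_threshold=1e-3):
    """小批次(mini-batch)训练与结束条件的示意:
    迭代次数达到上限或损失低于阈值即结束训练。"""
    iters = 0
    loss = float("inf")
    while iters < max_iters and loss >= loss_threshold:
        for i in range(0, len(samples), batch_size):
            batch = samples[i:i + batch_size]
            loss = step(batch)            # 单步训练,返回该小批次的损失
            iters += 1
            if iters >= max_iters or loss < loss_threshold:
                break                     # 满足结束条件,提前退出本轮
    return iters, loss

# 桩函数:损失随迭代次数逐步下降,第 4 次迭代低于阈值后停止
losses = iter([0.5, 0.1, 0.01, 0.0005, 0.0001])
iters, final = train(list(range(10)), lambda b: next(losses), max_iters=100)
assert final < 1e-3 and iters == 4
```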
上述方案,在训练过程中,能够先进行目标细胞的检测,再进行目标细胞的分类,将检测与分类分离,从而能够解决样本数据类别不平衡的问题,进而能够有利于提高训练得到的模型的准确性,从而能够有利于提高目标细胞识别的准确性和效率。
请参阅图6,图6是本申请实施例提供的一种图像识别装置60的结构框架示意图。图像识别装置60包括图像获取模块61、图像检测模块62和图像分类模块63,图像获取模块61配置为获取待识别病理图像;图像检测模块62配置为采用识别模型中的检测子模型对待识别病理图像进行目标检测,得到待识别病理图像中包含目标细胞的检测区域;图像分类模块63配置为利用识别模型中的分类子模型对检测区域进行第一分类处理,得到目标细胞的类别。
上述方案,通过采用识别模型中的检测子模型对获取到的待识别病理图像进行目标检测,从而得到待识别病理图像中包含目标细胞的检测区域,再利用识别模型中的分类子模型对检测区域进行第一分类处理,得到目标细胞的类别,进而能够先进行目标细胞的检测,再进行目标细胞的分类,将检测与分类分离,从而能够准确、高效地识别病理图像中的目标细胞。
在本申请的一些实施例中,图像检测模块62包括第一部分子模块,配置为利用检测子模型的第一部分对待识别病理图像进行第二分类处理,得到待识别病理图像的图像分类结果,其中,图像分类结果用于表示待识别病理图像中是否包含目标细胞,图像检测模块62还包括第二部分子模块,配置为在图像分类结果表示待识别病理图像中包含目标细胞时,利用检测子模型的第二部分对待识别病理图像进行区域检测,得到包含目标细胞的检测区域。
区别于前述实施例,通过检测子模型的第一部分对待识别病理图像进行第二分类处理,得到待识别病理图像的图像分类结果,且图像分类结果配置为表示待识别病理图像中是否包含目标细胞,当图像分类结果表示待识别病理图像中包含目标细胞时,再利用检测子模型的第二部分对待识别病理图像进行区域检测,得到包含目标细胞的检测区域,故能够实现目标细胞的动态检测,提高目标细胞识别的效率。
在本申请的一些实施例中,图像检测模块62还包括结果提示子模块,配置为在图像分类结果表示待识别病理图像中不包含目标细胞时,第一部分输出待识别病理图像中不包含目标细胞的检测结果提示。
区别于前述实施例,在利用检测子模型的第一部分对待识别病理图像进行第二分类处理,得到待识别病理图像的图像分类结果之后,还包括:若图像分类结果表示待识别病理图像中不包含目标细胞,则第一部分输出待识别病理图像中不包含目标细胞的检测结果提示。
在本申请的一些实施例中,图像检测模块62还包括第三部分子模块,配置为利用检测子模型的第三部分对待识别病理图像进行特征提取,得到待识别病理图像的图像特征。
区别于前述实施例,通过检测子模型的第三部分对待识别病理图像进行特征提取,得到待识别病理图像的图像特征,从而能够先对待识别病理图像进行特征提取,进而后续在此基础上再利用检测子模型进行其他处理,故能够有利于提高模型的运行效率。
在本申请的一些实施例中,第一部分子模块具体配置为利用检测子模型的第一部分对图像特征进行第二分类处理,得到待识别病理图像的图像分类结果。
区别于前述实施例,利用检测子模型的第一部分对第三部分提取得到的图像特征进行第二分类处理,得到待识别病理图像的图像分类结果,能够提高分类处理的准确性。
在本申请的一些实施例中,第二部分子模块具体配置为利用检测子模型的第二部分对图像特征进行区域检测,得到包含目标细胞的检测区域。
区别于前述实施例,利用检测子模型的第二部分对图像特征进行区域检测,得到包含目标细胞的检测区域,能够有利于提高目标细胞识别的准确性。
在本申请的一些实施例中,第一部分为全局分类网络,第二部分为图像检测网络,第三部分为特征提取网络;其中,特征提取网络包括可变形卷积层、全局信息增强模块中的至少一者。
区别于前述实施例,通过将特征提取网络设置为包括可变形卷积层,能够提高对多形态的目标细胞进行识别的准确性,通过将特征提取网络设置为包括全局信息增强模块中的至少一者,能够有利于获取长距离的、具有依赖关系的特征,有利于提高目标细胞识别的准确性。
在本申请的一些实施例中,图像分类模块63包括特征提取子模块,配置为利用分类子模型对待识别病理图像的检测区域进行特征提取,得到检测区域的图像特征,图像分类模块63包括分类处理子模块,配置为对检测区域的图像特征进行第一分类处理,得到目标细胞的类别。
区别于前述实施例,通过对待识别病理图像的检测区域进行特征提取,得到检测区域的图像特征,并对检测区域的图像特征进行第一分类处理,得到目标细胞的类别,能够有利于提高分类处理的效率。
在本申请的一些实施例中,目标细胞包括单个病变细胞、病变细胞团簇中的任一者,目标细胞的类别用于表示目标细胞的病变程度。
区别于前述实施例,目标细胞包括单个病变细胞、病变细胞团簇中的任一者,能够有利于识别单个病变细胞和病变细胞团簇,且目标细胞的类别用于表示目标细胞的病变程度,有利于实现目标细胞的病变分级。
请参阅图7,图7是本申请实施例提供的一种识别模型的训练装置70的结构框架示意图。识别模型包括检测子模型和分类子模型,识别模型的训练装置70包括图像获取模块71、模型执行模块72、损失确定模块73和参数调整模块74,图像获取模块71配置为获取第一样本图像和第二样本图像,其中,第一样本图像中标注有与目标细胞对应的实际区域,第二样本图像中标注有目标细胞的实际类别;模型执行模块72配置为利用检测子模型对第一样本图像进行目标检测,得到第一样本图像中包含目标细胞的预测区域,并利用分类子模型对第二样本图像进行第一分类处理,得到目标细胞的预测类别;损失确定模块73配置为基于实际区域与预测区域,确定检测子模型的第一损失值,并基于实际类别与预测类别,确定分类子模型的第二损失值;参数调整模块74配置为利用第一损失值和第二损失值,对应调整检测子模型和分类子模型的参数。
上述方案,在训练过程中,能够先进行目标细胞的检测,再进行目标细胞的分类,将检测与分类分离,从而能够解决样本数据类别不平衡的问题,进而能够有利于提高训练得到的模型的准确性,从而能够有利于提高目标细胞识别的准确性和效率。
在本申请的一些实施例中,模型执行模块72包括初始分类子模块,配置为对第一样本图像进行第二分类处理,得到第一样本图像的图像分类结果,其中,图像分类结果用于表示第一样本图像中是否包含目标细胞,模型执行模块72包括区域检测子模块,配置为在图像分类结果表示第一样本图像中包含目标细胞时,对第一样本图像进行区域检测,得到包含目标细胞的预测区域。
区别于前述实施例,在训练过程中,当图像分类结果表示第一样本图像中包含目标细胞时,再对第一样本图像进行区域检测,得到包含目标细胞的预测区域,能够增强模型识别正负样本的能力,降低误检概率,有利于提高训练得到的模型的准确性,从而能够有利于提高目标细胞识别的准确性。
在本申请的一些实施例中,识别模型的训练装置70还包括数据增强模块,配置为对第一样本图像和第二样本图像进行数据增强。
区别于前述实施例,通过对第一样本图像和第二样本图像进行数据增强能够提高样本多样性,有利于避免过拟合,提高模型的泛化性能。
在本申请的一些实施例中,识别模型的训练装置70还包括归一化处理模块,配置为将第一样本图像和第二样本图像中的像素值进行归一化处理。
区别于前述实施例,通过将第一样本图像和第二样本图像中的像素值进行归一化处理,能够有利于提高模型的收敛速度。
在本申请的一些实施例中,目标细胞包括单个病变细胞、病变细胞团簇中的任一者,目标细胞的类别用于表示目标细胞的病变程度。
区别于前述实施例,目标细胞包括单个病变细胞、病变细胞团簇中的任一者,能够有利于识别单个病变细胞和病变细胞团簇,且目标细胞的类别用于表示目标细胞的病变程度,有利于实现目标细胞的病变分级。
请参阅图8,图8是本申请实施例提供的一种电子设备80的结构框架示意图。电子设备80包括相互耦接的存储器81和处理器82,处理器82配置为执行存储器81中存储的程序指令,以实现上述任一图像识别方法实施例的步骤,或实现上述任一识别模型的训练方法实施例中的步骤。在一个具体的实施场景中,电子设备80可以包括但不限于:微型计算机、服务器,此外,电子设备80还可以包括笔记本电脑、平板电脑等移动设备,在此不做限定。
具体而言,处理器82配置为控制其自身以及存储器81以实现上述任一图像识别方法实施例的步骤,或实现上述任一识别模型的训练方法实施例中的步骤。处理器82还可以称为中央处理单元(Central Processing Unit,CPU)。处理器82可能是一种集成电路芯片,具有信号的处理能力。处理器82还可以是通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。另外,处理器82可以由多个集成电路芯片共同实现。
上述方案,能够准确、高效地识别病理图像中的目标细胞。
请参阅图9,图9为本申请实施例提供的一种计算机可读存储介质90的结构框架示意图。计算机可读存储介质90存储有能够被处理器运行的程序指令901,程序指令901用于实现上述任一图像识别方法实施例的步骤,或实现上述任一识别模型的训练方法实施例中的步骤。
上述方案,能够准确、高效地识别病理图像中的目标细胞。
本申请实施例提供一种计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现本申请实施例提供的任一图像识别方法,或本申请实施例提供的任一识别模型的训练方法。
在本申请所提供的几个实施例中,应该理解到,所揭露的方法和装置,可以通过其它的方式实现。例如,以上所描述的装置实施方式仅仅是示意性的,例如,模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性、机械或其它的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施方式方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器(processor)执行本申请各个实施方式方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
工业实用性
本申请实施例提供一种图像识别方法、识别模型的训练方法及相关装置、设备,其中,图像识别方法包括:获取待识别病理图像;采用识别模型中的检测子模型对待识别病理图像进行目标检测,得到待识别病理图像中包含目标细胞的检测区域;利用识别模型中的分类子模型对检测区域进行第一分类处理,得到目标细胞的类别。根据本申请实施例的图像识别方法,能够准确、高效地识别病理图像中的目标细胞。

Claims (27)

  1. 一种图像识别方法,包括:
    获取待识别病理图像;
    采用识别模型中的检测子模型对所述待识别病理图像进行目标检测,得到所述待识别病理图像中包含目标细胞的检测区域;
    利用所述识别模型中的分类子模型对所述检测区域进行第一分类处理,得到所述目标细胞的类别。
  2. 根据权利要求1所述的图像识别方法,所述采用识别模型中的检测子模型对所述待识别病理图像进行目标检测,得到所述待识别病理图像中包含目标细胞的检测区域,包括:
    利用所述检测子模型的第一部分对所述待识别病理图像进行第二分类处理,得到所述待识别病理图像的图像分类结果,其中,所述图像分类结果用于表示所述待识别病理图像中是否包含所述目标细胞;
    若所述图像分类结果表示所述待识别病理图像中包含所述目标细胞,则利用所述检测子模型的第二部分对所述待识别病理图像进行区域检测,得到包含所述目标细胞的检测区域。
  3. 根据权利要求2所述的图像识别方法,在所述利用所述检测子模型的第一部分对所述待识别病理图像进行第二分类处理,得到所述待识别病理图像的图像分类结果之后,所述方法还包括:
    若所述图像分类结果表示所述待识别病理图像中不包含所述目标细胞,则所述第一部分输出所述待识别病理图像中不包含所述目标细胞的检测结果提示。
  4. 根据权利要求2或3所述的图像识别方法,所述采用识别模型中的检测子模型对所述待识别病理图像进行目标检测,得到所述待识别病理图像中包含目标细胞的检测区域,还包括:
    利用所述检测子模型的第三部分对所述待识别病理图像进行特征提取,得到所述待识别病理图像的图像特征。
  5. 根据权利要求4所述的图像识别方法,所述利用所述检测子模型的第一部分对所述待识别病理图像进行第二分类处理,得到所述待识别病理图像的图像分类结果,包括:
    利用所述检测子模型的第一部分对所述图像特征进行第二分类处理,得到所述待识别病理图像的图像分类结果。
  6. 根据权利要求4所述的图像识别方法,所述利用所述检测子模型的第二部分对所述待识别病理图像进行区域检测,得到包含所述目标细胞的检测区域,包括:
    利用所述检测子模型的第二部分对所述图像特征进行区域检测,得到包含所述目标细胞的检测区域。
  7. 根据权利要求4至6任一项所述的图像识别方法,所述第一部分为全局分类网络,所述第二部分为图像检测网络,所述第三部分为特征提取网络;其中,所述特征提取网络包括可变形卷积层、全局信息增强模块中的至少一者。
  8. 根据权利要求1或2所述的图像识别方法,所述利用所述识别模型中的分类子模型对所述检测区域进行第一分类处理,得到所述目标细胞的类别,包括:
    利用所述分类子模型对所述待识别病理图像的所述检测区域进行特征提取,得到所述检测区域的图像特征;
    对所述检测区域的图像特征进行第一分类处理,得到所述目标细胞的类别。
  9. 根据权利要求1至8任一项所述的图像识别方法,所述目标细胞包括单个病变细胞、病变细胞团簇中的任一者,所述目标细胞的类别用于表示所述目标细胞的病变程度。
  10. 一种识别模型的训练方法,所述识别模型包括检测子模型和分类子模型,所述方法包括:
    获取第一样本图像和第二样本图像,其中,所述第一样本图像中标注有与目标细胞对应的实际区域,所述第二样本图像中标注有目标细胞的实际类别;
    利用所述检测子模型对所述第一样本图像进行目标检测,得到所述第一样本图像中包含目标细胞的预测区域,并利用所述分类子模型对所述第二样本图像进行第一分类处理,得到所述目标细胞的预测类别;
    基于所述实际区域与所述预测区域,确定所述检测子模型的第一损失值,并基于所述实际类别与所述预测类别,确定所述分类子模型的第二损失值;
    利用所述第一损失值和所述第二损失值,对应调整所述检测子模型和所述分类子模型的参数。
  11. 根据权利要求10所述的训练方法,所述利用所述检测子模型对所述第一样本图像进行目标检测,得到所述第一样本图像中包含目标细胞的预测区域,包括:
    对所述第一样本图像进行第二分类处理,得到所述第一样本图像的图像分类结果,其中,所述图像分类结果用于表示所述第一样本图像中是否包含所述目标细胞;
    若所述图像分类结果表示所述第一样本图像中包含所述目标细胞,则对所述第一样本图像进行区域检测,得到包含所述目标细胞的预测区域。
  12. 根据权利要求10或11所述的训练方法,在所述利用所述检测子模型对所述第一样本图像进行目标检测,得到所述第一样本图像中包含目标细胞的预测区域,并利用所述分类子模型对所述第二样本图像进行第一分类处理,得到所述目标细胞的预测类别之前,所述方法还包括:
    对所述第一样本图像和第二样本图像进行数据增强;
    和/或,将所述第一样本图像和第二样本图像中的像素值进行归一化处理;
    所述目标细胞包括单个病变细胞、病变细胞团簇中的任一者,所述目标细胞的类别用于表示所述目标细胞的病变程度。
  13. 一种图像识别装置,包括:
    图像获取模块,配置为获取待识别病理图像;
    图像检测模块,配置为采用识别模型中的检测子模型对所述待识别病理图像进行目标检测,得到所述待识别病理图像中包含目标细胞的检测区域;
    图像分类模块,配置为利用所述识别模型中的分类子模型对所述检测区域进行第一 分类处理,得到所述目标细胞的类别。
  14. 根据权利要求13所述的装置,所述图像检测模块包括:
    第一部分子模块,配置为利用所述检测子模型的第一部分对所述待识别病理图像进行第二分类处理,得到所述待识别病理图像的图像分类结果,其中,所述图像分类结果用于表示所述待识别病理图像中是否包含所述目标细胞;
    第二部分子模块,配置为在图像分类结果表示所述待识别病理图像中包含所述目标细胞时,利用所述检测子模型的第二部分对所述待识别病理图像进行区域检测,得到包含所述目标细胞的检测区域。
  15. 根据权利要求14所述的装置,所述图像检测模块还包括:
    结果提示子模块,配置为在所述图像分类结果表示所述待识别病理图像中不包含所述目标细胞时,所述第一部分输出所述待识别病理图像中不包含所述目标细胞的检测结果提示。
  16. 根据权利要求14或15所述的装置,所述图像检测模块还包括:
    第三部分子模块,配置为利用所述检测子模型的第三部分对所述待识别病理图像进行特征提取,得到所述待识别病理图像的图像特征。
  17. 根据权利要求16所述的装置,所述第一部分子模块还配置为利用所述检测子模型的第一部分对所述图像特征进行第二分类处理,得到所述待识别病理图像的图像分类结果。
  18. 根据权利要求16所述的装置,所述第二部分子模块还配置为利用所述检测子模型的第二部分对所述图像特征进行区域检测,得到包含所述目标细胞的检测区域。
  19. 根据权利要求16至18中任一项所述的装置,所述第一部分为全局分类网络,所述第二部分为图像检测网络,所述第三部分为特征提取网络;其中,所述特征提取网络包括可变形卷积层、全局信息增强模块中的至少一者。
  20. 根据权利要求13或14所述的装置,所述图像分类模块包括:
    特征提取子模块,配置为利用所述分类子模型对所述待识别病理图像的所述检测区域进行特征提取,得到所述检测区域的图像特征;
    分类处理子模块,配置为对所述检测区域的图像特征进行第一分类处理,得到所述目标细胞的类别。
  21. 根据权利要求13至20中任一项所述的装置,所述目标细胞包括单个病变细胞、病变细胞团簇中的任一者,所述目标细胞的类别用于表示目标细胞的病变程度。
  22. 一种识别模型的训练装置,所述识别模型包括检测子模型和分类子模型,所述识别模型的训练装置包括:
    图像获取模块,配置为获取第一样本图像和第二样本图像,其中,所述第一样本图像中标注有与目标细胞对应的实际区域,所述第二样本图像中标注有目标细胞的实际类别;
    模型执行模块,配置为利用所述检测子模型对所述第一样本图像进行目标检测,得到所述第一样本图像中包含目标细胞的预测区域,并利用所述分类子模型对所述第二样本图像进行第一分类处理,得到所述目标细胞的预测类别;
    损失确定模块,配置为基于所述实际区域与所述预测区域,确定所述检测子模型的第一损失值,并基于所述实际类别与所述预测类别,确定所述分类子模型的第二损失值;
    参数调整模块,配置为利用所述第一损失值和所述第二损失值,对应调整所述检测子模型和所述分类子模型的参数。
  23. 根据权利要求22所述的装置,所述模型执行模块包括:
    初始分类子模块,配置为对所述第一样本图像进行第二分类处理,得到所述第一样本图像的图像分类结果,其中,所述图像分类结果用于表示所述第一样本图像中是否包含所述目标细胞;
    区域检测子模块,配置为在所述图像分类结果表示所述第一样本图像中包含所述目标细胞时,对所述第一样本图像进行区域检测,得到包含所述目标细胞的预测区域。
  24. 根据权利要求22或23所述的装置,所述识别模型的训练装置还包括:
    数据增强模块,配置为对所述第一样本图像和第二样本图像进行数据增强;
    或者,归一化处理模块,配置为将所述第一样本图像和第二样本图像中的像素值进行归一化处理;
    所述目标细胞包括单个病变细胞、病变细胞团簇中的任一者,所述目标细胞的类别用于表示所述目标细胞的病变程度。
  25. 一种电子设备,包括相互耦接的存储器和处理器,所述处理器配置为执行所述存储器中存储的程序指令,以实现权利要求1至9任一项所述的图像识别方法,或权利要求10至12任一项所述的识别模型的训练方法。
  26. 一种计算机可读存储介质,其上存储有程序指令,所述程序指令被处理器执行时实现权利要求1至9任一项所述的图像识别方法,或权利要求10至12任一项所述的识别模型的训练方法。
  27. 一种计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现权利要求1至9任一项所述的图像识别方法,或权利要求10至12任一项所述的识别模型的训练方法。
PCT/CN2020/103628 2020-02-26 2020-07-22 图像识别方法、识别模型的训练方法及相关装置、设备 WO2021169161A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020217021261A KR20210110823A (ko) 2020-02-26 2020-07-22 이미지 인식 방법, 인식 모델의 트레이닝 방법 및 관련 장치, 기기
JP2021576344A JP2022537781A (ja) 2020-02-26 2020-07-22 画像認識方法、認識モデルの訓練方法及び関連装置、機器

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010121559.5 2020-02-26
CN202010121559.5A CN111461165A (zh) 2020-02-26 2020-02-26 图像识别方法、识别模型的训练方法及相关装置、设备

Publications (1)

Publication Number Publication Date
WO2021169161A1 true WO2021169161A1 (zh) 2021-09-02

Family

ID=71684160

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/103628 WO2021169161A1 (zh) 2020-02-26 2020-07-22 图像识别方法、识别模型的训练方法及相关装置、设备

Country Status (5)

Country Link
JP (1) JP2022537781A (zh)
KR (1) KR20210110823A (zh)
CN (1) CN111461165A (zh)
TW (1) TWI767506B (zh)
WO (1) WO2021169161A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114092162A (zh) * 2022-01-21 2022-02-25 北京达佳互联信息技术有限公司 推荐质量确定方法、推荐质量确定模型的训练方法及装置
CN115601749A (zh) * 2022-12-07 2023-01-13 赛维森(广州)医疗科技服务有限公司 基于特征峰值图谱的病理图像分类方法、图像分类装置
CN117726882A (zh) * 2024-02-07 2024-03-19 杭州宇泛智能科技有限公司 塔吊吊物识别方法、系统和电子设备

Families Citing this family (12)

Publication number Priority date Publication date Assignee Title
CN112017162B (zh) * 2020-08-10 2022-12-06 上海杏脉信息科技有限公司 病理图像处理方法、装置、存储介质和处理器
CN111815633A (zh) * 2020-09-08 2020-10-23 上海思路迪医学检验所有限公司 医用图像诊断装置、图像处理装置和方法、判断单元以及存储介质
CN112132206A (zh) * 2020-09-18 2020-12-25 青岛商汤科技有限公司 图像识别方法及相关模型的训练方法及相关装置、设备
CN112581438B (zh) * 2020-12-10 2022-11-08 腾讯医疗健康(深圳)有限公司 切片图像识别方法、装置和存储介质及电子设备
CN112884707B (zh) * 2021-01-15 2023-05-05 复旦大学附属妇产科医院 基于阴道镜的宫颈癌前病变检测系统、设备及介质
CN113763315B (zh) * 2021-05-18 2023-04-07 腾讯医疗健康(深圳)有限公司 玻片图像的信息获取方法、装置、设备及介质
CN113313697B (zh) * 2021-06-08 2023-04-07 青岛商汤科技有限公司 图像分割和分类方法及其模型训练方法、相关装置及介质
CN113570592B (zh) * 2021-08-05 2022-09-20 印迹信息科技(北京)有限公司 肠胃病检测和模型训练方法、装置、设备及介质
CN113436191B (zh) * 2021-08-26 2021-11-30 深圳科亚医疗科技有限公司 一种病理图像的分类方法、分类系统及可读介质
CN113855079A (zh) * 2021-09-17 2021-12-31 上海仰和华健人工智能科技有限公司 基于乳腺超声影像的实时检测和乳腺疾病辅助分析方法
CN115170571B (zh) * 2022-09-07 2023-02-07 赛维森(广州)医疗科技服务有限公司 胸腹水细胞病理图像识别方法、图像识别装置、介质
CN115861719B (zh) * 2023-02-23 2023-05-30 北京肿瘤医院(北京大学肿瘤医院) 一种可迁移细胞识别工具

Citations (3)

Publication number Priority date Publication date Assignee Title
US20190286880A1 (en) * 2018-03-16 2019-09-19 Proscia Inc. Deep learning automated dermatopathology
CN110766659A (zh) * 2019-09-24 2020-02-07 西人马帝言(北京)科技有限公司 医学图像识别方法、装置、设备和介质
CN111311578A (zh) * 2020-02-17 2020-06-19 腾讯科技(深圳)有限公司 基于人工智能的对象分类方法以及装置、医学影像设备

Family Cites Families (14)

Publication number Priority date Publication date Assignee Title
CN101669828A (zh) * 2009-09-24 2010-03-17 复旦大学 基于pet/ct图像纹理特征的肺部恶性肿瘤与良性结节检测系统
EP3146463B1 (en) * 2014-05-23 2020-05-13 Ventana Medical Systems, Inc. Systems and methods for detection of biological structures and/or patterns in images
US10115194B2 (en) * 2015-04-06 2018-10-30 IDx, LLC Systems and methods for feature detection in retinal images
TWI668666B (zh) * 2018-02-14 2019-08-11 China Medical University Hospital 肝癌分群預測模型、其預測系統以及肝癌分群判斷方法
CN108510482B (zh) * 2018-03-22 2020-12-04 姚书忠 一种基于阴道镜图像的宫颈癌检测装置
CN108615236A (zh) * 2018-05-08 2018-10-02 上海商汤智能科技有限公司 一种图像处理方法及电子设备
CN108764329A (zh) * 2018-05-24 2018-11-06 复旦大学附属华山医院北院 一种肺癌病理图像数据集的构建方法
CN109190441B (zh) * 2018-06-21 2022-11-08 丁彦青 女性生殖道细胞病理智能分类方法、诊断仪及存储介质
CN109190567A (zh) * 2018-09-10 2019-01-11 哈尔滨理工大学 基于深度卷积神经网络的异常宫颈细胞自动检测方法
CN109191476B (zh) * 2018-09-10 2022-03-11 重庆邮电大学 基于U-net网络结构的生物医学图像自动分割新方法
CN110334565A (zh) * 2019-03-21 2019-10-15 江苏迪赛特医疗科技有限公司 一种显微镜病理照片的宫颈癌病变细胞分类系统
CN110009050A (zh) * 2019-04-10 2019-07-12 杭州智团信息技术有限公司 一种细胞的分类方法及装置
CN110110799B (zh) * 2019-05-13 2021-11-16 广州锟元方青医疗科技有限公司 细胞分类方法、装置、计算机设备和存储介质
CN110736747B (zh) * 2019-09-03 2022-08-19 深思考人工智能机器人科技(北京)有限公司 一种细胞液基涂片镜下定位的方法及系统

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
US20190286880A1 (en) * 2018-03-16 2019-09-19 Proscia Inc. Deep learning automated dermatopathology
CN110766659A (zh) * 2019-09-24 2020-02-07 西人马帝言(北京)科技有限公司 医学图像识别方法、装置、设备和介质
CN111311578A (zh) * 2020-02-17 2020-06-19 腾讯科技(深圳)有限公司 基于人工智能的对象分类方法以及装置、医学影像设备

Non-Patent Citations (2)

Title
YU KUAN: "Study on Pathological Cell Aided Detection Based on Machine Learning", CHINESE MASTER'S THESES FULL-TEXT DATABASE, TIANJIN POLYTECHNIC UNIVERSITY, CN, 28 February 2018 (2018-02-28), CN, XP055840248, ISSN: 1674-0246 *
ZHAO MINGZHU: "Feature Analysis and Recognition of Pathologic Cell Images", CHINESE MASTER'S THESES FULL-TEXT DATABASE, TIANJIN POLYTECHNIC UNIVERSITY, CN, 31 July 2013 (2013-07-31), CN, XP055840231, ISSN: 1674-0246 *

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN114092162A (zh) * 2022-01-21 2022-02-25 北京达佳互联信息技术有限公司 推荐质量确定方法、推荐质量确定模型的训练方法及装置
CN114092162B (zh) * 2022-01-21 2022-07-01 北京达佳互联信息技术有限公司 推荐质量确定方法、推荐质量确定模型的训练方法及装置
CN115601749A (zh) * 2022-12-07 2023-01-13 赛维森(广州)医疗科技服务有限公司(Cn) 基于特征峰值图谱的病理图像分类方法、图像分类装置
CN117726882A (zh) * 2024-02-07 2024-03-19 杭州宇泛智能科技有限公司 塔吊吊物识别方法、系统和电子设备

Also Published As

Publication number Publication date
KR20210110823A (ko) 2021-09-09
CN111461165A (zh) 2020-07-28
JP2022537781A (ja) 2022-08-29
TW202133043A (zh) 2021-09-01
TWI767506B (zh) 2022-06-11

Similar Documents

Publication Publication Date Title
WO2021169161A1 (zh) 图像识别方法、识别模型的训练方法及相关装置、设备
US10198821B2 (en) Automated tattoo recognition techniques
CN107967475B (zh) 一种基于窗口滑动和卷积神经网络的验证码识别方法
CN110020592B (zh) 物体检测模型训练方法、装置、计算机设备及存储介质
WO2022213465A1 (zh) 基于神经网络的图像识别方法、装置、电子设备及介质
WO2019033572A1 (zh) 人脸遮挡检测方法、装置及存储介质
JP2022141931A (ja) 生体検出モデルのトレーニング方法及び装置、生体検出の方法及び装置、電子機器、記憶媒体、並びにコンピュータプログラム
US20200125836A1 (en) Training Method for Descreening System, Descreening Method, Device, Apparatus and Medium
US10803571B2 (en) Data-analysis pipeline with visual performance feedback
CN112132206A (zh) 图像识别方法及相关模型的训练方法及相关装置、设备
EP3588380A1 (en) Information processing method and information processing apparatus
WO2019184851A1 (zh) 图像处理方法和装置及神经网络模型的训练方法
US11893773B2 (en) Finger vein comparison method, computer equipment, and storage medium
CN111291749B (zh) 手势识别方法、装置及机器人
Lahiani et al. Hand pose estimation system based on Viola-Jones algorithm for android devices
CN111694954A (zh) 图像分类方法、装置和电子设备
Barra et al. F-FID: fast fuzzy-based iris de-noising for mobile security applications
CN114973300B (zh) 一种构件类别识别方法、装置、电子设备及存储介质
CN106683257A (zh) 冠字号定位方法及装置
Fan et al. A robust proposal generation method for text lines in natural scene images
CN112288045B (zh) 一种印章真伪判别方法
TWI775038B (zh) 字元識別方法、裝置及電腦可讀取存儲介質
CN111242047A (zh) 图像处理方法和装置、电子设备及计算机可读存储介质
Battiato et al. Red-eyes removal through cluster-based boosting on gray codes
WO2022222143A1 (zh) 人工智能系统的安全性检测方法、装置及终端设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20921885

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021576344

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20921885

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21.03.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20921885

Country of ref document: EP

Kind code of ref document: A1