WO2021164322A1 - Artificial intelligence-based object classification method and apparatus, and medical imaging device - Google Patents

Artificial intelligence-based object classification method and apparatus, and medical imaging device

Info

Publication number
WO2021164322A1
WO2021164322A1 (application PCT/CN2020/126620, CN2020126620W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
target detection
detection object
feature
map
Prior art date
Application number
PCT/CN2020/126620
Other languages
English (en)
French (fr)
Inventor
王亮
孙嘉睿
沈荣波
江铖
朱艳春
姚建华
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2021164322A1
Priority to US17/686,950 (published as US20220189142A1)

Classifications

    • G06T 7/0012: Biomedical image inspection (image analysis; inspection of images, e.g. flaw detection)
    • G06V 10/273: Segmentation of patterns in the image field; removing elements interfering with the pattern to be recognised
    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 7/11: Region-based segmentation
    • G06V 10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/454: Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V 10/82: Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T 2207/10056: Microscopic image
    • G06T 2207/20021: Dividing image into blocks, subimages or windows
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30024: Cell structures in vitro; Tissue sections in vitro
    • G06T 2207/30068: Mammography; Breast
    • G06V 2201/032: Recognition of patterns in medical or anatomical images of protuberances, polyps, nodules, etc.

Definitions

  • This application relates to the field of image processing technology, and in particular to an artificial intelligence-based object classification method, device, computer-readable storage medium and computer equipment, and a medical imaging device.
  • An artificial intelligence-based object classification method and device, a computer-readable storage medium and computer equipment, and a medical imaging device are provided.
  • An object classification method based on artificial intelligence including:
  • classifying the breast duct image to obtain the lesion category information of the breast duct in the breast tissue pathological image.
  • A medical imaging device, including:
  • a microscope scanner for obtaining pathological images of breast tissue;
  • a memory in which computer-readable instructions are stored;
  • a processor, where the computer-readable instructions, when executed by the processor, cause the processor to perform the following steps: segmenting the breast duct image of the breast duct from the breast tissue pathological image; inputting the breast duct image into a feature object prediction model to obtain a cell segmentation map of the cells in the breast duct image; obtaining cell feature information and sieve feature information according to the breast duct image and the cell segmentation map; and classifying the breast duct image according to the cell feature information and sieve feature information to obtain the lesion category information of the breast duct in the breast tissue pathological image; and
  • a display for displaying the breast tissue pathological image and the lesion category information of the breast duct in the breast tissue pathological image.
  • An artificial intelligence-based object classification device, characterized in that the device comprises:
  • An image acquisition module for acquiring an image to be processed, wherein the image to be processed includes a target detection object
  • An image segmentation module configured to segment the target detection object image of the target detection object from the image to be processed
  • a feature image acquisition module configured to input the target detection object image into a feature object prediction model to obtain a feature object segmentation map of the feature object in the target detection object image;
  • a feature information acquisition module configured to acquire quantized feature information of the target detection object according to the target detection object image and the feature object segmentation map;
  • the object classification module is configured to classify the target detection object image according to the quantified feature information to obtain the category information of the target detection object in the image to be processed.
  • An object classification device based on artificial intelligence including:
  • a pathological image acquisition module for acquiring a pathological image of breast tissue, wherein the pathological image of breast tissue includes a breast duct;
  • a duct image acquisition module configured to segment the breast duct image of the breast duct from the breast tissue pathological image;
  • a cell area map acquisition module configured to input the breast duct image into a feature object prediction model to obtain a cell segmentation map of the cells in the breast duct image;
  • a duct feature acquisition module configured to acquire the cell feature information and the sieve feature information according to the breast duct image and the cell segmentation map;
  • the duct classification module is used to classify the breast duct image according to the cell feature information and the sieve feature information to obtain the lesion category information of the breast duct in the breast tissue pathological image.
  • One or more non-volatile storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
  • A computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, where the processor implements the following steps when executing the computer-readable instructions:
  • FIG. 1 is an application environment diagram of an object classification method based on artificial intelligence in an embodiment
  • FIG. 2 is a schematic flowchart of an object classification method based on artificial intelligence in an embodiment
  • FIG. 3 is a schematic diagram of a pathological image of breast tissue in an embodiment
  • FIG. 4 is a schematic flowchart of a step of cutting an image of a target detection object in an embodiment
  • FIG. 5a is a schematic flowchart of a step of obtaining a first target detection object prediction map in an embodiment
  • FIG. 5b is a schematic frame diagram of a target detection object segmentation model in an embodiment
  • FIG. 6 is a schematic flowchart of a step of obtaining a region where a target detection object is located in an image to be processed in an embodiment
  • FIG. 7a is a schematic flowchart of a step of obtaining a first target detection object prediction map in another embodiment
  • FIG. 7b is a schematic diagram of cutting an image to be processed into multiple sub-images to be processed in an embodiment
  • FIG. 7c is a schematic diagram of stitching sub-predicted images of each target detection object in an embodiment
  • FIG. 8 is a schematic flowchart of a step of obtaining a region where a target detection object is located in an image to be processed in another embodiment
  • FIG. 9a is a schematic flowchart of a step of acquiring a feature object segmentation map in an embodiment
  • FIG. 9b is a schematic framework diagram of a feature object prediction model in an embodiment
  • FIG. 9c is a schematic diagram of the network structure of the LinkNet network in an embodiment
  • FIG. 10 is a schematic flowchart of a step of acquiring quantitative feature information in an embodiment
  • FIG. 11 is a schematic flowchart of the step of acquiring quantitative feature information in another embodiment
  • FIG. 12 is a schematic flowchart of a training step of a feature object prediction model in an embodiment
  • FIG. 13 is a schematic flowchart of a training step of a target detection object segmentation model in an embodiment
  • FIG. 14 is a schematic flow chart of another implementation of an artificial intelligence-based object classification method
  • FIG. 15 is a structural block diagram of a medical imaging device in an embodiment
  • FIG. 16 is a structural block diagram of an object classification device based on artificial intelligence in an embodiment
  • FIG. 17 is a structural block diagram of an artificial intelligence-based object classification device in another embodiment
  • FIG. 18 is a structural block diagram of a computer device in an embodiment.
  • Artificial Intelligence (AI) is a theory, method, technology, and application system that uses digital computers or machines controlled by digital computers to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
  • artificial intelligence is a comprehensive technology of computer science, which attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a similar way to human intelligence.
  • Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making.
  • Artificial intelligence technology is a comprehensive discipline, covering a wide range of fields, including both hardware-level technology and software-level technology.
  • Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, and mechatronics.
  • Artificial intelligence software technology mainly includes computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
  • Computer Vision (CV) is a science that studies how to make machines "see". More specifically, it refers to using cameras and computers instead of human eyes to identify, track, and measure targets, and to further perform graphics processing so that the processed image is more suitable for human observation or for transmission to an instrument for detection.
  • Computer vision studies related theories and technologies, trying to establish an artificial intelligence system that can obtain information from images or multi-dimensional data.
  • Computer vision technology usually includes image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, and also includes common biometric recognition technologies such as face recognition and fingerprint recognition.
  • Machine Learning is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other subjects. It specializes in studying how computers simulate or implement human learning behaviors to acquire new knowledge or skills and to reorganize existing knowledge structures so as to continuously improve their own performance.
  • Machine learning is the core of artificial intelligence, the fundamental way to make computers intelligent, and its applications cover all fields of artificial intelligence.
  • Machine learning and deep learning usually include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and teaching-based learning.
  • Fig. 1 is an application environment diagram of an object classification method based on artificial intelligence in an embodiment.
  • the object detection method is applied to an object classification system.
  • the object classification system includes a terminal 110 and a server 120.
  • the terminal 110 and the server 120 are connected through a network.
  • the terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, and a notebook computer.
  • the server 120 may be implemented as an independent server or a server cluster composed of multiple servers.
  • the terminal 110 obtains the image to be processed, and sends the image to be processed to the server 120.
  • The server 120 segments the target detection object image of the target detection object from the image to be processed; inputs the target detection object image into a feature object prediction model to obtain a feature object segmentation map of the feature object in the target detection object image; obtains quantized feature information of the target detection object according to the target detection object image and the feature object segmentation map; and classifies the target detection object image according to the quantized feature information to obtain the category information of the target detection object in the image to be processed.
  • The server 120 returns the image to be processed, the target detection object image of the target detection object in the image to be processed, and the category information of the target detection object in the image to be processed to the terminal 110, and the terminal 110 displays these images and the category information.
  • an object classification method based on artificial intelligence is provided.
  • the method is mainly applied to the server 120 in FIG. 1 as an example.
  • the artificial intelligence-based object classification method specifically includes the following steps:
  • Step S202 Obtain a to-be-processed image, where the to-be-processed image includes a target detection object.
  • The images to be processed include but are not limited to pictures, videos, etc. They can be images obtained through cameras or scanners, images obtained through screenshots, or images uploaded through applications that support image upload.
  • The image to be processed includes one or more target detection objects, where a target detection object refers to an object that needs to be detected and classified in the image to be processed.
  • the image to be processed may be, but is not limited to, a pathological slice image
  • the target detection object may be, but is not limited to, a body organ, tissue, or cell in the pathological slice image.
  • The pathological slice image may specifically be an image captured by a medical imaging device (for example, a digital pathological slide scanner or a digital slide microscope), for example, a whole-slide image (WSI).
  • the image to be processed is a pathological image of breast tissue
  • the target detection object is a breast duct.
  • FIG. 3 is a schematic diagram of a pathological image of breast tissue in an embodiment, and the marked area 310 in the figure is a target detection object, that is, a breast duct.
  • Step S204 segmenting the target detection object image of the target detection object from the image to be processed.
  • the target detection object image refers to the area image of the area where the target detection object is located in the image to be processed. After the image to be processed is acquired, the area image of the area where the target detection object is located can be segmented from the image to be processed, and the area image is the target detection object image.
  • Segmenting the target detection object image of the target detection object from the image to be processed may be done by using an image segmentation algorithm to determine the area of the target detection object in the image to be processed and then cropping the area image of the area where the target detection object is located as the target detection object image, where the image segmentation algorithm can be a threshold-based segmentation algorithm, an edge-detection-based segmentation method, etc. Alternatively, the image to be processed can be input into a deep learning model for image segmentation; the deep learning model predicts the area where the target detection object is located, and the area image of that area is then cropped from the image to be processed according to the predicted area and used as the target detection object image of the target detection object, as sketched below.
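  • As an illustrative sketch (not the patent's specific implementation), the threshold-based variant can be expressed as follows; the function name, the minimum-area filter, and the use of OpenCV are assumptions made for the example.

```python
import cv2
import numpy as np

def crop_target_regions(image_bgr, min_area=5000):
    """Hypothetical sketch: threshold-based segmentation followed by cropping the
    bounding box of each detected region as a target detection object image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding separates candidate foreground objects from the background.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    crops = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue  # skip small noise regions
        x, y, w, h = cv2.boundingRect(contour)
        crops.append(image_bgr[y:y + h, x:x + w])
    return crops
```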
  • Step S206 Input the target detection object image into the feature object prediction model to obtain a feature object segmentation map of the feature object in the target detection object image.
  • The feature object refers to an object within the target detection object that carries the feature information of the target detection object.
  • For example, when the target detection object is a body tissue, the feature object of the target detection object may be the cells of the body tissue.
  • the feature object segmentation map refers to an image that has the same size as the target detection object image and has marked the area where the feature object is located.
  • the feature object segmentation map may be a binary image, that is, the feature object segmentation map presents a visual effect of only black and white.
  • the area where the feature object is located in the target detection object image can be displayed as white, and the area where the non-feature object is located can be displayed as black.
  • the feature object prediction model is a network model used to determine whether each pixel in the target detection object image belongs to a feature object, so as to output a feature object segmentation map.
  • the feature object prediction model here is a trained network model, which can be directly used to determine whether each pixel in the target detection object image belongs to a feature object, and output a feature object segmentation map.
  • The feature object prediction model can adopt neural network structures such as the fully convolutional network (FCN) or the U-Net convolutional neural network structure, which are not limited here.
  • The feature object prediction model includes, but is not limited to, an encoding layer and a decoding layer.
  • The encoding layer is used to encode and compress the target detection object image and extract lower-dimensional low-level semantic feature maps, and the decoding layer is used to decode the low-level semantic feature maps output by the encoding layer and output a feature object segmentation map with the same size as the target detection object image; a minimal sketch of such an encoder-decoder follows.
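  • A minimal encoder-decoder sketch in PyTorch, assuming a single-channel output probability map; this is an illustrative structure, not the exact network of the patent.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Minimal encoder-decoder sketch: the encoder compresses the input image into
    lower-dimensional feature maps, and the decoder upsamples them back to a
    full-resolution, single-channel segmentation probability map."""
    def __init__(self, in_channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2),
        )

    def forward(self, x):
        features = self.encoder(x)       # low-level semantic feature maps
        logits = self.decoder(features)  # restored to the input's spatial size
        return torch.sigmoid(logits)     # per-pixel probability of "feature object"
```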
  • Step S208 Acquire quantitative feature information of the target detection object according to the target detection object image and the feature object segmentation map.
  • The quantized feature information refers to the quantized information of each feature of the feature objects in the target detection object, such as the number, size, and circularity of the feature objects, and the gray values of their pixels.
  • the quantitative feature information of the target detection object can be calculated according to the image data of the target detection object image and the feature object segmentation map.
  • the area where the feature object is located can be obtained from the feature object segmentation map, and then from the target detection object image, the area corresponding to the area where the feature object is located is determined, and the image of the area is determined as the area image of the feature object. Finally, the quantized feature information is calculated according to the pixel information of the pixels on the area image of the feature object.
  • Step S210 Classify the target detection object image according to the quantized feature information to obtain the category information of the target detection object in the image to be processed.
  • After the quantized feature information of the target detection object image is obtained, the target detection object image can be classified according to that quantized feature information to obtain the target detection object category information corresponding to the target detection object image.
  • Classifying the target detection object image according to the quantized feature information may be done by inputting the quantized feature information of the target detection object image into a trained classifier and using the classifier to classify the target detection object image, where the classifier can be a machine-learning-based classifier, such as a support vector machine (SVM) classifier, or a deep-learning-based classifier, such as a classifier based on a convolutional neural network (CNN).
  • The training of the classifier may specifically be as follows: obtain sample detection object images and the standard category labels corresponding to the sample detection object images; input the sample detection object images into the pre-built classifier to obtain the predicted category labels corresponding to the sample detection object images; calculate the loss value of the classifier by comparing the standard category labels of the sample detection object images with the predicted category labels; and finally adjust the parameters in the classifier according to the loss value of the classifier to obtain the trained classifier. A sketch of the SVM variant follows.
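  • As a hedged illustration (not the patent's specific classifier or features), training an SVM on quantized feature vectors could look like this; the feature names and the toy data are hypothetical.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical quantized features per target detection object:
# [cell_count_ratio, mean_nucleus_stain, circularity, sieve_ratio]
X_train = np.array([
    [0.12, 0.45, 0.80, 0.05],
    [0.40, 0.70, 0.55, 0.30],
    [0.35, 0.65, 0.60, 0.25],
    [0.10, 0.40, 0.85, 0.03],
])
y_train = np.array([0, 1, 1, 0])  # e.g. 0 = benign, 1 = lesion category of interest

classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
classifier.fit(X_train, y_train)

# Classify a new target detection object from its quantized feature information.
print(classifier.predict([[0.33, 0.68, 0.58, 0.28]]))
```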
  • In the above artificial intelligence-based object classification method, the target detection object image is segmented from the image to be processed, so that the image of a single target detection object is separated from the image to be processed, reducing the effect of unnecessary image data on subsequent object classification. Feature object detection is then performed on the single target detection object to obtain a feature object segmentation map, so that the quantized feature information of the feature object is obtained from the target detection object image according to the area of the feature object marked in the feature object segmentation map; that is, the feature information of the target detection object is quantized at the feature object level to obtain the quantized feature information. The target detection object is then classified according to the quantized feature information. This realizes cascade processing of the image to be processed, effectively reduces unnecessary image data in the image to be processed, reduces the impact of unnecessary image data on object classification, and improves the classification accuracy of the target detection object in the image to be processed.
  • step S204 segmenting the target detection object image of the target detection object from the image to be processed includes:
  • Step S402 Input the image to be processed into the target detection object segmentation model to obtain the first target detection object prediction map corresponding to the image to be processed, and obtain the area of the target detection object in the image to be processed according to the first target detection object prediction map.
  • step S404 the image to be processed is segmented according to the area where the target detection object is located to obtain the target detection object image.
  • the target detection object segmentation model is a network model used to determine whether each pixel in the target detection object image belongs to the target detection object, and the output is the first target detection object prediction map.
  • the target detection object segmentation model here is a trained network model, which can be directly used to determine whether each pixel in the image to be processed belongs to the target detection object, and output the first target detection object prediction map.
  • the target detection object segmentation model can adopt neural network structures such as full convolutional network structure FCN, convolutional neural network structure U-net, etc., which are not limited here.
  • The target detection object segmentation model includes, but is not limited to, an encoding layer and a decoding layer.
  • The encoding layer is used to encode and compress the image to be processed and extract lower-dimensional low-level semantic feature maps, and the decoding layer is used to decode the low-level semantic feature maps output by the encoding layer and output a first target detection object prediction map with the same size as the image to be processed.
  • The first target detection object prediction map refers to an image that has the same size as the image to be processed and in which the area where the target detection object is located has been marked. After the first target detection object prediction map is obtained, the area where the target detection object is located is obtained from the first target detection object prediction map, so that the area of the target detection object in the image to be processed is correspondingly determined; the area where the target detection object is located in the image to be processed is then segmented to obtain the target detection object image.
  • the first target detection object prediction map may be a binary image, that is, the first target detection object prediction map presents a visual effect of only black and white.
  • the area where the target detection object is located may be displayed in white, and the area where the non-target detection object is located may be displayed in black.
  • Specifically, the image to be processed is input into the target detection object segmentation model to obtain the first target detection object prediction map corresponding to the image to be processed.
  • The target detection object segmentation model can be used to calculate the probability value that each pixel in the image to be processed belongs to the target detection object.
  • The target detection object segmentation model classifies each pixel in the image to be processed according to the probability value to obtain pixels belonging to the target detection object and pixels not belonging to the target detection object; the model then sets the gray value of the pixels belonging to the target detection object to 0 and the gray value of the pixels not belonging to the target detection object to 255, so that the area where the target detection object is located is displayed as white and the areas of non-target detection objects are displayed as black in the first target detection object prediction map.
  • The target detection object segmentation model includes an encoding layer and a decoding layer; the step of inputting the image to be processed into the target detection object segmentation model to obtain the first target detection object prediction map corresponding to the image to be processed includes:
  • Step S502 Input the image to be processed into the encoding layer of the target detection object segmentation model, and perform encoding processing on the image to be processed through the encoding layer to obtain image feature information of the image to be processed;
  • the coding layer includes multiple convolutional layers.
  • The decoding layer is connected to the encoding layer, and the encoding layer and the decoding layer can be connected by skip connections, which can improve the accuracy of pixel classification.
  • Specifically, the image to be processed is input into the encoding layer of the target detection object segmentation model, and the image to be processed is encoded and compressed through the encoding layer.
  • The encoding layer can encode and compress the image to be processed through its convolutional layers to extract the low-level semantic feature information of the image to be processed.
  • The low-level semantic feature information of the image to be processed may be basic visual information of the image to be processed, such as brightness, color, texture, and so on.
  • step S504 the image feature information is input into the decoding layer of the target detection object segmentation model, and the image feature information is decoded through the decoding layer to obtain the first target detection object prediction map corresponding to the image to be processed.
  • The encoding layer outputs the low-level semantic feature information of the image to be processed.
  • The low-level semantic feature information of the image to be processed is input into the decoding layer of the target detection object segmentation model, and the decoding layer performs the decoding operation on this low-level semantic feature information to finally obtain the first target detection object prediction map, which identifies whether each pixel of the image to be processed belongs to the target detection object.
  • Specifically, the low-level semantic feature information extracted by the encoding layer is input into the decoding layer, where a deconvolution layer and an up-sampling layer can be used to decode the low-level semantic feature information to obtain the corresponding first target detection object prediction map.
  • FIG. 5b shows the principle framework diagram of the target detection object segmentation model in an embodiment.
  • The image to be processed is input into the target detection object segmentation model, and the input image is first encoded and compressed through the encoder (Encoder) to obtain lower-dimensional low-level semantic feature information such as color and brightness.
  • Connected to the encoding layer is the decoding layer (Decoder), and the low-level semantic feature information output by the encoding layer is input into the decoding layer.
  • The decoding layer decodes the low-level semantic feature information and outputs a first target detection object prediction map with the same size as the original image to be processed.
  • For example, the pathological image of breast tissue is input into the target detection object segmentation model, and the first target detection object prediction map output by the model is shown in FIG. 5b. From the first target detection object prediction map it can be known whether each pixel in the image to be processed (the breast tissue pathological image) belongs to the target detection object (a breast duct); the first target detection object prediction map shows the area where the target detection object is located.
  • The step of inputting the image to be processed into the target detection object segmentation model to obtain the first target detection object prediction map corresponding to the image to be processed, and obtaining the area where the target detection object is located in the image to be processed according to the first target detection object prediction map, includes:
  • step S602 the image to be processed is zoomed to obtain a zoomed image
  • Step S604 Input the zoomed image into the target detection object segmentation model to obtain a second target detection object prediction map of the zoomed image;
  • Step S606 Acquire the area information of the area where the target detection object is located in the zoomed image according to the second target detection object prediction map;
  • Step S608 Acquire the area where the target detection object is located in the image to be processed according to the area information.
  • The image to be processed may be scaled first to obtain a zoomed image with a smaller amount of image data.
  • The zoomed image and the image to be processed have the same image content; they differ only in image size and in the amount of image data. Scaling the image to be processed to obtain the corresponding zoomed image can effectively reduce the amount of image data to be processed and speed up the segmentation of the image to be processed.
  • the second target detection object prediction map refers to an image that has the same size as the zoomed image and has marked the area where the target detection object is located.
  • the second target detection object prediction map may be a binary map.
  • The processing of the zoomed image by the target detection object segmentation model is the same as its processing of the image to be processed.
  • the step of inputting the image to be processed into the target detection object segmentation model to obtain the first target detection object prediction map corresponding to the image to be processed includes:
  • Step S702 cutting the to-be-processed image into multiple to-be-processed sub-images according to the cutting rule
  • Step S704 input each sub-image to be processed into the target detection object segmentation model, and obtain the sub-prediction image of the target detection object corresponding to each sub-image to be processed;
  • each target detection object sub-prediction image is spliced according to the cutting rule to obtain a first target detection object prediction map corresponding to the image to be processed.
  • The image to be processed may be cut into multiple tiles to obtain multiple sub-images to be processed. After the image to be processed is cut, each sub-image to be processed is input into the target detection object segmentation model, so that the size and data amount of the image processed by the model at one time are greatly reduced, which can effectively reduce the amount of image data processing and speed up image segmentation.
  • Specifically, the image to be processed is cut into multiple sub-images to be processed according to the cutting rule; the sub-images to be processed are then input into the target detection object segmentation model one by one to obtain the target detection object sub-prediction images corresponding to the sub-images to be processed; finally, the target detection object sub-prediction images are spliced according to the cutting rule to obtain the first target detection object prediction map of the entire image to be processed, as sketched below.
  • For example, the pathological image of breast tissue is cut according to a preset cutting rule, where the cutting rule is as shown in FIG. 7b: the pathological image of breast tissue is cut into 6*6 breast histopathological sub-images. The breast histopathological sub-images are then input into the target detection object segmentation model one by one to obtain the breast duct sub-prediction images corresponding to each breast histopathological sub-image, for example, the breast histopathological sub-image 702 in FIG. 7b.
  • the processing process of the sub-image to be processed by the target detection object segmentation model is the same as the processing process of the image to be processed by the target detection object segmentation model.
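  • A hedged sketch of the cut-and-stitch flow, assuming a fixed grid and a `model` callable that maps an image tile to a same-size prediction tile; the grid size and helper name are hypothetical.

```python
import numpy as np

def predict_by_tiles(image, model, grid=(6, 6)):
    """Cut the image into grid tiles, run the segmentation model on each tile,
    and stitch the per-tile predictions back into a full-size prediction map."""
    h, w = image.shape[:2]
    tile_h, tile_w = h // grid[0], w // grid[1]
    prediction = np.zeros((h, w), dtype=np.float32)
    for i in range(grid[0]):
        for j in range(grid[1]):
            tile = image[i * tile_h:(i + 1) * tile_h, j * tile_w:(j + 1) * tile_w]
            # model(tile) is assumed to return a (tile_h, tile_w) probability map
            prediction[i * tile_h:(i + 1) * tile_h,
                       j * tile_w:(j + 1) * tile_w] = model(tile)
    return prediction
```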
  • The step of inputting the image to be processed into the target detection object segmentation model to obtain the first target detection object prediction map corresponding to the image to be processed, and obtaining the area where the target detection object is located in the image to be processed according to the first target detection object prediction map, includes:
  • step S802 the image to be processed is scaled to obtain a scaled image
  • Step S804 cutting the zoomed image into multiple zoomed image sub-images according to the cutting rule
  • Step S806 input the sub-images of each zoomed image into the target detection object segmentation model, and obtain the sub-images of each zoomed image corresponding to the target detection object sub-predicted image;
  • Step S808 splicing each target detection object sub-prediction image according to the cutting rule to obtain a second target detection object prediction map corresponding to the zoomed image;
  • Step S810 Obtain the area information of the area where the target detection object is located in the zoomed image according to the second target detection object prediction map;
  • Step S812 Acquire the area where the target detection object is located in the image to be processed according to the area information.
  • the image to be processed may be zoomed first to obtain a zoomed image; then the zoomed image is cut into multiple tiles to obtain multiple sub-images of the zoomed image.
  • In this way, the image size and the amount of image data are reduced, which can effectively reduce the amount of image data processing and speed up the segmentation of the image to be processed.
  • Specifically, the image to be processed is zoomed to obtain the zoomed image, and the zoomed image is cut according to the cutting rule to obtain multiple zoomed-image sub-images.
  • The zoomed-image sub-images are then input one by one into the target detection object segmentation model to obtain the target detection object sub-prediction images corresponding to each zoomed-image sub-image, and the target detection object sub-prediction images are spliced according to the cutting rule to obtain the second target detection object prediction map of the entire zoomed image.
  • The area where the target detection object is located is determined from the second target detection object prediction map, that is, the area where the target detection object is located in the zoomed image is obtained. After the area information of the area where the target detection object is located in the zoomed image is obtained, the area where the target detection object is located in the image to be processed is obtained by mapping this area information back to the image to be processed, as sketched below.
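  • A hedged sketch of mapping region information found in the zoomed image back to the coordinates of the original image to be processed; the bounding-box representation of the area information is an assumption.

```python
def map_region_to_original(box, scale):
    """box = (x, y, w, h) found in the zoomed image; scale = zoom factor applied to the
    original image (e.g. 0.25 when the image was shrunk to a quarter of its size).
    Returns the corresponding region in the original image to be processed."""
    x, y, w, h = box
    return (int(x / scale), int(y / scale), int(w / scale), int(h / scale))

# Example: a region found at (120, 80, 200, 150) in a 0.25x zoomed image
# corresponds to (480, 320, 800, 600) in the full-resolution image.
print(map_region_to_original((120, 80, 200, 150), 0.25))
```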
  • The feature object prediction model includes a heat map prediction network, and the step of inputting the target detection object image into the feature object prediction model to obtain the feature object segmentation map of the feature object in the target detection object image includes:
  • step S902 the target detection object image is input to the heat map prediction network, and a characteristic object heat spot map corresponding to the target detection object image is obtained.
  • the heat map prediction network is a network model used to calculate the heat value of each pixel in the target detection object image that belongs to the characteristic object.
  • the heat map prediction network here is a trained network model, which can be directly used to calculate the heat value of each pixel in the target detection object image that belongs to the characteristic object.
  • the thermal value here refers to the probability value of each pixel belonging to the characteristic object in the target detection image.
  • the heat map prediction network can adopt full convolutional network structure FCN, convolutional neural network structure U-net, Linknet network structure and so on.
  • The feature object heat point map describes the thermal value (i.e., probability value) of each pixel belonging to the feature object in the target detection object image, and the area where the feature object is located can be extracted according to the thermal value of each pixel described by the feature object heat point map.
  • Step S904 Acquire the heat value of each pixel point belonging to the characteristic object in the target detection object image according to the characteristic object heat point map.
  • Step S906 according to the thermal value, determine the area where the feature object is located from the feature object thermal spot map to obtain the feature object segmentation map.
  • Specifically, the thermal value of each pixel belonging to the feature object in the target detection object image can be obtained, contour extraction is performed according to the thermal value of each pixel to determine the area where the feature object is located, and the feature object segmentation map is obtained.
  • For example, the watershed algorithm can be used to perform contour extraction on the feature object heat point map to determine the area of the feature object and obtain the feature object segmentation map;
  • the feature object segmentation map can be a binary image.
  • For example, in the feature object heat point map, the pixel value of the pixels in the area where the feature object is located can be set to 0 so that the area is displayed as white, and the pixel value of the pixels in the areas of the target detection object image where no feature object is located can be set to 255 so that those areas are displayed as black.
  • Alternatively, the feature object heat point map can be directly binarized according to a preset thermal-value threshold to determine the region where the feature object is located and obtain the feature object segmentation map, as sketched below.
  • Specifically, the pixel value of the pixels whose thermal value is greater than the preset threshold is set to 0 so that they are displayed as white,
  • and the pixel value of the pixels whose thermal value is less than or equal to the preset threshold is set to 255 so that they are displayed as black.
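  • A hedged sketch of the direct binarization variant, assuming the heat map holds per-pixel probabilities in [0, 1]; the 0/255 assignment follows the display convention described above.

```python
import numpy as np

def binarize_heat_map(heat_map, threshold=0.5):
    """heat_map: float array of per-pixel probabilities of belonging to the feature object.
    Pixels above the threshold become the feature object region (value 0, shown as white
    per the convention above); all other pixels become background (value 255, black)."""
    segmentation = np.full(heat_map.shape, 255, dtype=np.uint8)
    segmentation[heat_map > threshold] = 0
    return segmentation
```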
  • the feature object prediction model is as shown in Figure 9b, where the heat map prediction network is a LinkNet network structure, and the LinkNet network structure is shown in Figure 9c.
  • The LinkNet network structure is composed of encoder blocks (Encoder Block) and decoder blocks (Decoder Block).
  • Connected to each encoder is the corresponding decoder, and the low-level semantic feature information output by the encoder is input into the decoder.
  • the decoder decodes the low-level semantic feature information and outputs the feature object segmentation map that is the same as the original size of the target detection object image.
  • the white area is the area where the cells in the mammary duct are located, and the black area is the background or the interstitium of the mammary duct.
  • the input of the encoder in the LinkNet network structure is connected to the output of the corresponding decoder.
  • Before the decoder outputs the feature object segmentation map, the encoder can integrate its low-level semantic feature information into the decoder, so that the decoder combines the low-level semantic feature information with the high-level semantic feature information; this can effectively reduce the spatial information lost during the down-sampling operation, and the decoder reuses the parameters learned by each layer of the encoder, which can effectively reduce the number of parameters of the decoder, as illustrated in the sketch below.
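  • An illustrative PyTorch sketch of the LinkNet-style skip described above (the decoder block adds the feature map of the corresponding encoder block); the block structure is simplified and not the patent's exact network.

```python
import torch
import torch.nn as nn

class LinkNetStyleBlock(nn.Module):
    """Simplified sketch: the decoder block upsamples its input and adds the feature
    map from the corresponding encoder block, reintroducing spatial detail that was
    lost during downsampling."""
    def __init__(self, channels):
        super().__init__()
        self.up = nn.ConvTranspose2d(channels, channels, kernel_size=2, stride=2)

    def forward(self, decoder_in, encoder_skip):
        x = self.up(decoder_in)   # restore spatial resolution
        return x + encoder_skip   # element-wise addition with the encoder output
```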
  • step S208 obtains the quantitative feature information of the target detection object according to the target detection object image and the feature object segmentation map, including:
  • Step S1002 Determine the area where the feature object is located from the feature object segmentation map
  • Step S1004 According to the area where the characteristic object is located, the area image of the characteristic object is intercepted from the target detection object image.
  • the area image of the characteristic object refers to the image data of the characteristic object intercepted in the target detection object image.
  • Step S1006 Calculate the quantized feature information of the target detection object according to the pixel value of each pixel in the area image of the feature object.
  • the quantized feature information can be the color depth of the feature object.
  • the pixel value of the pixel in the area where the feature object is located can be used for quantitative expression.
  • For example, when the target detection object is a breast duct, the feature object can be a cell in the breast duct, and the feature information of the breast duct includes the nucleus staining value; the nucleus staining value can be quantified from the pixel values of the pixels in the area where the cells in the breast duct are located.
  • The pixel values of the pixels in the region of the feature object in the feature object segmentation map cannot themselves characterize the color depth of the feature object. Therefore, after the feature object segmentation map is obtained, the area of the feature object can be determined from the feature object segmentation map, and the area image of the corresponding location is cropped from the target detection object image according to the area of the feature object in the feature object segmentation map; the pixel value of each pixel is then obtained from the area image of the feature object, and the quantized feature of the target detection object is calculated. Specifically, calculating the quantized feature information of the target detection object according to the pixel value of each pixel in the area image of the feature object can be done by calculating the average of the pixel values of all pixels in the area image, and this average is determined as the quantized feature information of the target detection object, as sketched below.
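  • A hedged sketch of computing a staining-style quantized feature as the mean pixel value inside the feature-object region; the 0-marks-the-feature-object convention follows the segmentation map described above, and the function name is hypothetical.

```python
import numpy as np

def mean_intensity_in_region(object_image_gray, segmentation_map):
    """object_image_gray: grayscale target detection object image.
    segmentation_map: binary map in which 0 marks the feature object region (white per
    the convention above) and 255 marks everything else."""
    feature_pixels = object_image_gray[segmentation_map == 0]
    if feature_pixels.size == 0:
        return 0.0  # no feature object detected in this image
    return float(feature_pixels.mean())
```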
  • step S208 obtains the quantitative feature information of the target detection object according to the target detection object image and the feature object segmentation map, including:
  • Step S1102 Determine the number of pixels in the region where the feature object is located from the feature object segmentation map
  • Step S1104 Obtain the total number of pixels of the target detection object image, calculate the ratio of the number of pixels in the area where the feature object is located to the total number of pixels, and obtain quantitative feature information of the target detection object.
  • the quantized feature information may be the shape and size of the feature object, and in specific applications, the number of pixels in the area where the feature object is located can be used for quantitative expression.
  • For example, when the target detection object is a breast duct, the feature object can be a cell in the breast duct, and the feature information of the breast duct includes the size of the cell nucleus, which can be quantified by the number of pixels. It is understandable that in an image, the number of pixels can be taken as equivalent to the area occupied in the image.
  • Specifically, the area where the feature object is located can be determined according to the feature object segmentation map, and the number of pixels in the area where the feature object is located is counted; finally, the total number of pixels of the target detection object image or of the feature object segmentation map is counted.
  • The ratio of the number of pixels in the area where the feature object is located to the total number of pixels is calculated and determined as a quantized feature of the target detection object.
  • The method for acquiring quantized feature information further includes: determining, from the feature object segmentation map, the area where the feature object is located and the area value of that area, then performing contour extraction on the area where the feature object is located, and calculating the perimeter of the contour of that area.
  • The perimeter of the contour of the area where the feature object is located can be expressed by the number of pixels on that contour. The circularity feature is then obtained as the ratio between the area of a perfect circle with the same perimeter and the area value of the area where the feature object is located, giving quantized feature information about the degree of circularity of the feature object, as sketched below.
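  • A hedged OpenCV sketch of the circularity computation described above (area of a circle with the same perimeter divided by the region's area); the function name and the largest-region choice are assumptions.

```python
import math
import cv2
import numpy as np

def circularity_of_region(segmentation_map):
    """segmentation_map: binary map, 0 = feature object region, 255 = background.
    Returns (area of a perfect circle with the same perimeter) / (region area)."""
    mask = (segmentation_map == 0).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return 0.0
    contour = max(contours, key=cv2.contourArea)  # use the largest feature region
    area = cv2.contourArea(contour)
    if area == 0:
        return 0.0
    perimeter = cv2.arcLength(contour, True)
    same_perimeter_circle_area = perimeter ** 2 / (4 * math.pi)
    return same_perimeter_circle_area / area
```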
  • the training step of the feature object prediction model includes:
  • Step S1202 Obtain a sample detection object image and a contour area annotation map of the sample feature object in the sample detection object image.
  • The sample detection object image is an image used to train the feature object prediction model, where the sample detection object image includes multiple sample feature objects; the contour area annotation map of the sample feature objects in the sample detection object image refers to an image in which the areas where the sample feature objects are located have been labeled, and it corresponds one-to-one with the sample detection object image.
  • the contour area annotation map of the sample feature object can be annotated by professional annotators, or it can be a public sample detection object data set.
  • the sample detection object image may be an image of a single body tissue
  • the contour area annotation map of the sample characteristic object may be an image where the cell (or cell nucleus) is annotated.
  • Step S1204 Acquire a corresponding thermal spot map of the sample feature object according to the contour area annotation map of the sample feature object.
  • the contour area annotation map of the sample feature object is converted into a thermal spot map of the sample feature object.
  • the sample feature object thermal point map describes the probability value of each pixel in the sample detection object image belonging to the sample feature object.
  • the thermal spot map of the sample feature object here can be understood as a standard thermal spot map, which accurately describes the probability value of each pixel in the sample detection object image belonging to the sample feature object.
  • The corresponding sample feature object heat point map is obtained according to the contour area annotation map of the sample feature object; specifically, this can be done by determining the probability value of the pixels in the area where the sample feature object is located in the contour area annotation map as 1, as sketched below.
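  • A hedged sketch of converting a contour area annotation map into a sample feature object heat point map; the nonzero-marks-the-region assumption and the optional smoothing are illustrative, not specified by the text.

```python
import numpy as np

def annotation_to_heat_map(annotation_map):
    """annotation_map: annotation image in which the labeled sample feature object region
    is marked (assumed here as nonzero pixels). Returns a float heat map in which pixels
    inside the annotated region have probability 1 and the remaining pixels have 0."""
    heat_map = (annotation_map > 0).astype(np.float32)
    # Optional (assumption): soften the hard labels, e.g. with a Gaussian blur.
    return heat_map
```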
  • Step S1206 Input the sample detection object image to the pre-built feature object prediction model heat map prediction network to obtain a feature object thermal point prediction map corresponding to the sample detection object image.
  • the obtained sample detection object image is input to the heat map prediction network of the feature object prediction model.
  • the network structure of the heat map prediction network includes but not limited to the encoding layer and the decoding layer.
  • the heat map prediction network encodes and compresses each sample detection object image through the encoding layer to extract lower-dimensional low-level semantic feature information, then decodes the extracted low-level semantic feature information through the decoding layer to calculate the probability that each pixel in the sample detection object image belongs to the feature object, thereby obtaining the feature object heat point prediction map.
  • the feature object thermal point prediction map describes the probability value of each pixel in the sample detection object image belonging to the sample feature object.
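The sketch below shows one possible encoder-decoder heat map prediction network in PyTorch. It is not the patent's exact architecture (the text only requires an encoding layer and a decoding layer, and elsewhere mentions FCN, U-net, and LinkNet as options); the layer sizes and channel counts are assumptions.

```python
import torch
import torch.nn as nn

class HeatMapPredictionNet(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        # Encoding layer: compresses the image into lower-dimensional features.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoding layer: upsamples back to the input size and predicts, for each
        # pixel, the probability of belonging to the feature object.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.encoder(x)
        logits = self.decoder(features)
        return torch.sigmoid(logits)  # per-pixel probability ("heat value")

# Example: a 256x256 RGB sample detection object image -> a 1x1x256x256 heat point map.
if __name__ == "__main__":
    net = HeatMapPredictionNet()
    heat_map = net(torch.randn(1, 3, 256, 256))
    print(heat_map.shape)  # torch.Size([1, 1, 256, 256])
```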
  • Step S1208 Calculate the loss value of the feature object prediction model based on the feature object thermal point prediction map and the sample feature object thermal point map.
  • Step S1210 According to the loss value of the feature object prediction model, the heat map prediction network of the feature object prediction model is trained until the convergence condition is reached, and the trained feature object prediction model is obtained.
  • the feature object heat point prediction map describes the probability that each pixel in the sample detection object image belongs to the sample feature object, while the sample feature object heat point map describes the same per-pixel probabilities and is treated as the ground truth that accurately describes the probability that each pixel belongs to the sample feature object.
  • the loss value can therefore be calculated from the per-pixel probabilities in the feature object heat point prediction map and the per-pixel probabilities in the sample feature object heat point map; for example, a distance value between the two maps can be computed and used as the loss value, or the loss value can be computed from the two sets of probabilities using a softmax-based loss.
  • the heat map prediction network of the feature object prediction model is trained according to the loss value, and its network parameters are adjusted until the convergence condition is satisfied, yielding the trained feature object prediction model.
  • the convergence condition can be set or adjusted according to actual needs. For example, when the training loss value reaches the minimum, it can be considered as meeting the convergence condition, or when the loss value can no longer change, it can be considered as meeting the convergence condition.
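A minimal training-step sketch for the heat map prediction network: a pixel-wise binary cross-entropy between the predicted heat point map and the ground-truth sample feature object heat point map, followed by one gradient update. The patent allows a distance-based or softmax-based loss; BCE is one concrete choice here, not the mandated one, and the stand-in model is only a placeholder.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1), nn.Sigmoid())  # stand-in heat map network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCELoss()  # per-pixel probability loss

def training_step(image: torch.Tensor, gt_heat_map: torch.Tensor) -> float:
    """image: (N,3,H,W) sample detection object images; gt_heat_map: (N,1,H,W) values in [0,1]."""
    pred_heat_map = model(image)
    loss = criterion(pred_heat_map, gt_heat_map)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Training continues until the convergence condition is met,
# e.g. the loss stops decreasing across epochs.
```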
  • the segmentation accuracy of the feature object prediction model for cells can reach 0.6632.
  • the training step of the target detection object segmentation model includes:
  • Step S1302 Obtain a sample image, and an outline area labeling map of the sample detection object in the sample image.
  • the feature object prediction model and the target detection object segmentation model are separately trained.
  • the training process of the target detection object segmentation model may specifically be to first obtain the sample image and the contour area annotation map of the sample detection object in the sample image.
  • the sample image is an image used to train the target detection object segmentation model, where the sample image includes one or more sample detection objects;
  • the contour area annotation map of the sample detection object is a binary map in which the region of the sample detection object has been marked, and it corresponds one-to-one with the sample image.
  • the contour area annotation map of the sample detection object can be marked by professional annotators.
  • the sample image can be a complete pathological slice image
  • the sample detection object can be a body organ, tissue, or cell, etc.
  • the contour area annotation map of the sample detection object in the sample image can be an image annotating the contour position of the region where the sample detection object (for example, a body tissue) is located.
  • the sample image may be a pathological image of breast tissue
  • the contour area annotation map may be a pathological image of breast tissue that has marked the contour position of the region where the breast duct is located.
  • Step S1304 Input the sample image into a pre-built detection object segmentation model to obtain a sample detection object prediction map corresponding to the sample image.
  • the obtained sample image is input to the detection object segmentation model.
  • the network structure of the detection object segmentation model includes, but is not limited to, an encoding layer and a decoding layer; the model encodes and compresses each sample image through the encoding layer to extract lower-dimensional low-level semantic feature information, decodes the extracted feature information through the decoding layer, and calculates the probability that each pixel in the sample image belongs to the sample detection object, so that each pixel can be judged according to this probability and the sample detection object prediction map is obtained.
  • the sample detection object prediction map intuitively describes the result of whether each pixel in the sample image belongs to the sample detection object, and shows the area where the sample detection object is located.
  • Step S1306 Calculate the loss value of the detection object segmentation model based on the sample detection object prediction map and the contour area label map of the sample detection object.
  • the sample detection object prediction map describes the area where the sample detection object is located, and the contour area annotation map of the sample detection object marks the area where the sample detection object is located; therefore, the loss value of the detection object segmentation model can be obtained by taking the region information (such as region position coordinates) of the sample detection object in the prediction map and the region information of the sample detection object in the annotation map, and using the difference between the two sets of region information as the loss value.
  • to strengthen the edge response of the target detection object, edge information of the sample detection object can also be used to calculate the loss value; that is, the contour information of the region where the sample detection object is located in the prediction map (for example, the position coordinates of the contour) and the contour information of the same region in the annotation map can be obtained, and the difference between the two sets of contour information used as the loss value of the detection object segmentation model.
  • alternatively, the region information and contour information of the area where the sample detection object is located can be obtained from both the prediction map and the annotation map, and after calculating the difference between the two sets of region information and the difference between the two sets of contour information, the sum of the two differences is determined as the loss value of the detection object segmentation model.
  • the sample detection object prediction map describes the probability that each pixel in the sample image belongs to the sample detection object, while every pixel inside the labeled sample detection object region of the annotation map belongs to the sample detection object with certainty; the probability value of pixels inside the labeled region can therefore be set to 1 and the probability value of pixels outside it to 0, and the training loss value can also be calculated from the per-pixel probabilities in the prediction map and the per-pixel values in the annotation map (one way of combining a region term and an edge term is sketched below).
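The sketch below shows one possible way, not the patent's prescribed formula, to combine a region term and an edge term in the segmentation loss: a pixel-wise BCE over the whole image plus an extra BCE term restricted to a thin band around the annotated contour. The edge band is derived from the annotation by a dilation-minus-erosion approximation; the weight and kernel size are assumptions.

```python
import torch
import torch.nn.functional as F

def region_and_edge_loss(pred: torch.Tensor, target: torch.Tensor,
                         edge_weight: float = 1.0) -> torch.Tensor:
    """pred, target: (N,1,H,W) float tensors; pred holds probabilities, target holds 0/1 labels."""
    region_loss = F.binary_cross_entropy(pred, target)

    # Approximate the annotated contour: dilation minus erosion of the target mask.
    kernel = torch.ones(1, 1, 3, 3, device=target.device)
    neighborhood = F.conv2d(target, kernel, padding=1)
    dilated = (neighborhood > 0).float()
    eroded = (neighborhood >= kernel.numel()).float()
    edge_mask = dilated - eroded  # 1 on a thin band around the contour, 0 elsewhere

    # Extra penalty only on contour-band pixels.
    edge_loss = F.binary_cross_entropy(pred, target, weight=edge_mask, reduction="sum")
    edge_loss = edge_loss / edge_mask.sum().clamp(min=1.0)

    return region_loss + edge_weight * edge_loss
```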
  • Step S1308 Train the detection object segmentation model according to the loss value until the convergence condition is reached, obtaining the target detection object segmentation model.
  • the detection object segmentation model is trained according to the loss value, and its model parameters are adjusted until the convergence condition is met, yielding the target detection object segmentation model.
  • the convergence condition can be set or adjusted according to actual needs; for example, the convergence condition can be considered met when the loss value reaches its minimum or when the loss value no longer changes.
  • an object classification method based on artificial intelligence including:
  • Step S1402 Obtain a pathological image of the breast tissue, where the pathological image of the breast tissue includes a breast duct.
  • the pathological image of breast tissue refers to an image taken by a medical imaging device, which may be a full-field digital pathological slice image.
  • the medical imaging device includes, but is not limited to, a digital pathological slice scanner, a digital slice microscope, and the like.
  • the specific location of the target detection object can be known through the breast tissue pathological image, and in the actual application scenario, the target detection object in the breast tissue pathological image may be, but is not limited to, a breast duct.
  • Step S1404 Segment the breast duct image of the breast duct from the pathological image of the breast tissue.
  • the region image of the region where the breast duct is located can be segmented from the pathological image of the breast tissue, and the region image is the breast duct image.
  • segmenting the breast duct image from the breast tissue pathological image can specifically be done by inputting the breast tissue pathological image into a deep learning model for image segmentation, predicting the region where the breast duct is located with that model, and then cropping the predicted region from the pathological image so that the region image of the breast duct serves as the target detection object image.
  • the pathological image of breast tissue is usually a digital pathological slice image acquired under a 40x or 20x lens, so the amount of image data is huge.
  • the breast tissue pathological image acquired under the 40x or 20x lens can therefore be scaled down to the size of a 10x or 5x lens image, and the region where the breast duct is located is obtained from this scaled-down pathological image.
  • the pathological image of breast tissue can also be cut into multiple breast tissue pathological sub-images according to preset cutting rules, the region where the breast duct is located obtained for each sub-image, and the per-sub-image results then spliced back together according to the cutting rules to obtain the region where the breast duct is located in the entire breast tissue pathological image. For example, as shown in Figure 7b and Figure 7c, after the pathological image of the breast tissue is obtained, it is cut according to the preset cutting rule shown in Figure 7b into 6*6 breast tissue pathological sub-images.
  • the breast tissue pathological sub-images are input into the target detection object segmentation model one by one to obtain the breast duct sub-prediction image corresponding to each sub-image; for example, as shown in Figure 7b, the breast tissue pathological sub-image 702 is input into the target detection object segmentation model, which outputs the breast duct sub-prediction image 704 shown in Figure 7c.
  • the multiple breast duct sub-prediction images are then spliced according to the cutting rules to obtain the breast duct prediction map of the entire breast tissue pathological image, as shown in Figure 7c (a sketch of this tile-and-stitch procedure follows).
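The following is an illustrative sketch of the cut-and-stitch procedure described above: the whole breast tissue pathological image is cut into a grid of sub-images, each sub-image is passed through the target detection object segmentation model, and the per-tile prediction maps are stitched back in the same grid order. `model` is assumed to map an (H, W, 3) tile to an (H, W) binary prediction; the 6x6 grid mirrors the example in the text.

```python
import numpy as np

def predict_by_tiles(image: np.ndarray, model, grid: int = 6) -> np.ndarray:
    h, w = image.shape[:2]
    tile_h, tile_w = h // grid, w // grid
    prediction = np.zeros((tile_h * grid, tile_w * grid), dtype=np.uint8)

    for i in range(grid):              # cut according to the preset grid rule
        for j in range(grid):
            tile = image[i * tile_h:(i + 1) * tile_h, j * tile_w:(j + 1) * tile_w]
            tile_pred = model(tile)    # per-tile breast duct sub-prediction
            prediction[i * tile_h:(i + 1) * tile_h,
                       j * tile_w:(j + 1) * tile_w] = tile_pred  # stitch back in place
    return prediction
```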
  • Step S1406 Input the breast duct image into the feature object prediction model to obtain a cell segmentation map of the cells in the breast duct image.
  • the cell segmentation map is an image of the same size as the breast duct image in which the regions where the cells are located have been marked.
  • the cell segmentation map may be a binary image, that is, the cell segmentation map presents only black and white visual effects.
  • the area where the cells are located in the breast duct image can be displayed as white, and the area where the cells are not located can be displayed as black.
  • the feature object prediction model is a network model used to determine whether each pixel in the breast duct image belongs to a cell to output a cell segmentation map.
  • the feature object prediction model here is a trained network model, which can be directly used to determine whether each pixel in the breast duct image belongs to a cell, and output a feature object segmentation map.
  • the feature object prediction model can adopt a network structure such as a full convolutional network structure FCN, a convolutional neural network structure U-net, etc., which is not limited here.
  • the feature object prediction model includes, but is not limited to, the coding layer and the decoding layer.
  • the coding layer is used to encode and compress the breast duct image and extract lower-dimensional low-level semantic feature maps, and the decoding layer is used to decode the low-level semantic feature maps output by the coding layer and output a cell segmentation map of the same size as the breast duct image.
  • the feature object prediction model may include a heat map prediction network.
  • a cell heat point map corresponding to the breast duct image is obtained first, the heat value with which each pixel in the breast duct image belongs to a cell is then read from the cell heat point map, and the cell region is determined from the cell heat point map according to these heat values to obtain the cell segmentation map.
  • specifically, as shown in the figure, the breast duct image is input into the feature object prediction model as the target detection object image; the heat map prediction network first outputs a feature object heat point map with the same size as the original breast duct image, i.e. a cell heat point map, in which each pixel carries the heat value with which the corresponding pixel of the breast duct image belongs to a cell; then, according to the heat values on the cell heat point map, the watershed algorithm is used to extract contours from the cell heat point map, determine the cell regions, and obtain the cell segmentation map.
  • the heat map prediction network is a LinkNet network structure.
  • each encoder (Encoder Block) is connected to a decoder (Decoder Block). Further, after the region where the cells are located is obtained from the heat values, the pixel values of pixels in the cell region of the cell heat point map can be set to 0 so that the region is displayed as white, and the pixel values of pixels in the non-cell region can be set to 255 so that they are displayed as black (a watershed-based sketch of this step follows).
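The sketch below shows one way to turn a cell heat point map into a cell segmentation map with a watershed step, as described above. The threshold, the marker-seeding strategy, and the use of scikit-image are illustrative choices, not mandated by the patent; the 0/255 output values follow the convention stated in the text.

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def heat_map_to_segmentation(heat_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """heat_map: (H, W) per-pixel probability of belonging to a cell."""
    foreground = heat_map > threshold                         # candidate cell pixels
    # Local maxima of the heat map act as one marker per cell.
    peaks = peak_local_max(heat_map, labels=foreground, min_distance=5)
    markers = np.zeros(heat_map.shape, dtype=np.int32)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Watershed on the inverted heat map separates touching cells along contour lines.
    labels = watershed(-heat_map, markers, mask=foreground)
    # Binary segmentation map using the patent's stated 0 = cell region / 255 = non-cell convention.
    seg_map = np.where(labels > 0, 0, 255).astype(np.uint8)
    return seg_map
```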
  • Step S1408 Obtain cell feature information and sieve feature information based on the breast duct image and the cell segmentation map.
  • the cell characteristic information and the sieve hole characteristic information refer to the quantified information of the various characteristics of the cells in the breast duct.
  • the cell characteristic information includes, but is not limited to, the number of cells in the breast duct, cell size, cell circularity, and cell staining.
  • sieve hole feature information includes but is not limited to the size of the sieve hole in the breast duct and the cytoplasmic staining value.
  • the cell feature information and sieve feature information of the cells in the breast duct can be calculated based on the image data of the breast duct image and the cell segmentation map.
  • the area where the cells are located can be obtained from the cell segmentation map, the corresponding region is then located in the breast duct image and its image taken as the cell region image, and finally the pixel information of the cell region image is used to calculate the cell feature information expressing the nucleus staining value and the sieve feature information expressing the cytoplasm staining value.
  • the method for acquiring cell feature information further includes: determining, from the cell segmentation map, the area where the cells are located and its area value; extracting the contour of that area and calculating the pixel length of the contour, i.e. the perimeter of the contour of the cell area; and finally calculating the ratio of the area of a perfect circle with the same perimeter to the area value of the cell area to obtain the circularity, giving the cell feature information on circularity (a staining-value sketch follows; the circularity computation was sketched earlier).
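An illustrative sketch of the staining-value features described above: the cell segmentation map selects the cell (nucleus) pixels in the breast duct image, and the mean pixel value over those pixels is used as a quantified nucleus staining value, while the remaining duct pixels give a cytoplasm/sieve-area staining value. The function name and the grayscale assumption are not taken from the patent.

```python
import numpy as np

def staining_features(duct_image: np.ndarray, cell_seg_map: np.ndarray) -> dict:
    """duct_image: (H, W) grayscale duct image; cell_seg_map: 0 where cells are, 255 elsewhere."""
    cell_mask = cell_seg_map == 0
    nucleus_staining = float(duct_image[cell_mask].mean()) if cell_mask.any() else 0.0
    cytoplasm_staining = float(duct_image[~cell_mask].mean()) if (~cell_mask).any() else 0.0
    return {"nucleus_staining": nucleus_staining, "cytoplasm_staining": cytoplasm_staining}
```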
  • Step S1410 Classify the breast duct image according to the cell feature information and the sieve feature information to obtain the lesion category information of the breast duct in the breast tissue pathological image.
  • after the cell feature information and the sieve feature information are obtained, the breast duct image can be classified according to this information to obtain the lesion category information of each breast duct in the breast tissue pathological image.
  • to classify breast duct images according to cell feature information and sieve feature information can be inputting the cell feature information and sieve feature information of the breast duct image into a trained classifier, and using the classifier to classify the breast duct The image is classified, where the classifier can be a classifier based on machine learning, such as an SVM classifier, or a classifier based on deep learning, such as a classifier based on a CNN model.
  • the training of the classifier may specifically be to obtain a sample breast duct image and the standard lesion category label corresponding to it, input the sample breast duct image into a pre-built classifier to obtain the predicted lesion category label, calculate the loss value of the classifier by comparing the standard lesion category label with the predicted lesion category label, and finally adjust the parameters of the classifier according to the loss value to obtain the trained classifier (see the sketch below).
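A minimal sketch of the classifier stage: each breast duct is represented by its cell feature and sieve feature values, and an SVM (one of the classifier options named above) is fitted to labeled sample ducts. The feature layout, the numeric values, and the label meanings are placeholders, not real annotations.

```python
import numpy as np
from sklearn.svm import SVC

# Each row: [cell count, mean cell size, circularity, nucleus staining, sieve size, cytoplasm staining]
sample_features = np.array([
    [120, 35.0, 0.81, 142.0, 20.0, 180.0],
    [300, 18.0, 0.62,  96.0,  5.0, 150.0],
])
sample_lesion_labels = np.array([0, 1])  # e.g. 0 = one lesion category, 1 = another

classifier = SVC(kernel="rbf")
classifier.fit(sample_features, sample_lesion_labels)

new_duct = np.array([[150, 30.0, 0.75, 130.0, 15.0, 170.0]])
print(classifier.predict(new_duct))  # predicted lesion category for a new duct
```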
  • segmenting the breast duct image from the breast tissue pathological image may specifically be inputting the breast tissue pathological image into the target detection object segmentation model to obtain the corresponding breast duct prediction map, obtaining the region where the breast duct is located in the pathological image according to that prediction map, and finally segmenting the pathological image according to that region to obtain the breast duct image.
  • the target detection object segmentation model is a network model used to determine whether each pixel in the breast tissue pathological image belongs to a breast duct, so as to output a breast duct prediction map.
  • the target detection object segmentation model here is a trained network model; specifically, as shown in Figure 5b, it includes, but is not limited to, an encoding layer and a decoding layer.
  • the encoding layer is used to encode and compress the breast tissue pathological image and extract lower-dimensional low-level semantic feature maps.
  • the decoding layer is used to decode the low-level semantic feature map output by the encoding layer, and output a breast duct prediction map with the same size as the pathological image of the breast tissue.
  • the breast duct prediction map may be a binary image, that is, the first breast duct prediction map presents only black and white visual effects.
  • the area where the breast duct is located can be displayed as white, and the area where no breast duct is located (for example, background or stroma) can be displayed as black.
  • the pathological image of the breast tissue is input into the target detection object segmentation model to obtain the breast duct prediction map corresponding to the breast tissue pathological image.
  • the probability value of each pixel in the pathological image of the breast tissue belonging to the breast duct is calculated through the target detection object segmentation model.
  • each pixel in the breast tissue pathological image is classified into pixels belonging to the breast duct and pixels not belonging to the breast duct; the gray value of pixels belonging to the breast duct is set to 0 and the gray value of pixels not belonging to the breast duct is set to 255, yielding a first target detection object prediction map in which the region of the target detection object is displayed as white and the non-target region as black (a minimal binarization sketch follows).
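A tiny sketch of the binarization step described above: per-pixel duct probabilities from the segmentation model are turned into a two-valued prediction map using the 0/255 gray-value convention stated in the text. The 0.5 threshold is an assumption.

```python
import numpy as np

def probabilities_to_prediction_map(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    pred = np.full(prob_map.shape, 255, dtype=np.uint8)  # pixels not belonging to the duct -> 255
    pred[prob_map > threshold] = 0                        # pixels belonging to the duct -> 0
    return pred
```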
  • a medical imaging device includes:
  • Microscope scanner 1502 used to obtain pathological images of breast tissue
  • a memory 1504 in which computer-readable instructions are stored
  • the processor 1506: when the computer-readable instructions are executed by the processor, the processor performs the following steps: segment the breast duct image of the breast duct from the breast tissue pathological image; input the breast duct image into the feature object prediction model to obtain the cell segmentation map of the cells in the breast duct image; obtain cell feature information and sieve feature information according to the breast duct image and the cell segmentation map; and classify the breast duct image according to the cell feature information and the sieve feature information to obtain the lesion category information of the breast duct in the breast tissue pathological image;
  • the display 1508 is used to display the breast tissue pathological image and the lesion type information of the breast duct in the breast tissue pathological image.
  • the medical imaging equipment may include a microscope scanner 1502, a memory 1504, a processor 1506, and a display 1508.
  • the microscope scanner 1502 sends the collected pathological images of breast tissue to the memory 1504.
  • the memory stores computer-readable instructions.
  • when the computer-readable instructions are executed, the processor 1506 performs the following steps: segment the breast duct image of the breast duct from the breast tissue pathological image; input the breast duct image into the feature object prediction model to obtain the cell segmentation map of the cells in the breast duct image; obtain cell feature information and sieve feature information according to the breast duct image and the cell segmentation map; and classify the breast duct image according to the cell feature information and the sieve feature information to obtain the lesion category information of the breast duct in the breast tissue pathological image.
  • the breast tissue pathological image and the lesion category information of the breast duct can be displayed on the display 1508; that is, the region of the breast duct is marked in the breast tissue pathological image shown on the display 1508, and the lesion category information of the marked breast duct is displayed correspondingly.
  • an artificial intelligence-based object classification device 1600 is provided, and the device includes:
  • the image acquisition module 1602 is used to acquire an image to be processed, where the image to be processed includes a target detection object;
  • the image segmentation module 1604 is used to segment the target detection object image of the target detection object from the image to be processed
  • the feature image acquisition module 1606 is configured to input the target detection object image into the feature object prediction model to obtain a feature object segmentation map of the feature object in the target detection object image;
  • the feature information obtaining module 1608 is configured to obtain the quantitative feature information of the target detection object according to the target detection object image and the feature object segmentation map;
  • the object classification module 1610 is configured to classify the target detection object image according to the quantized feature information to obtain the category information of the target detection object in the image to be processed.
  • the image segmentation module includes:
  • the object region determining unit is configured to input the image to be processed into the target detection object segmentation model to obtain the first target detection object prediction map corresponding to the image to be processed, and to obtain the region where the target detection object is located in the image to be processed according to the first target detection object prediction map.
  • the object area segmentation unit is configured to segment the image to be processed according to the area where the target detection object is located to obtain the target detection object image.
  • the image segmentation module further includes an image scaling unit
  • the image scaling unit is used to scale the image to be processed and obtain the scaled image
  • the object area determining unit is used to input the scaled image into the target detection object segmentation model to obtain a second target detection object prediction map of the scaled image, obtain the region information of the area where the target detection object is located in the scaled image according to the second target detection object prediction map, and obtain the area where the target detection object is located in the image to be processed according to that region information.
  • the object region determining unit is specifically configured to cut the to-be-processed image into a plurality of to-be-processed sub-images according to the cutting rule; input each to-be-processed sub-image to the target detection object segmentation model to obtain each to-be-processed sub-image Corresponding to the target detection object sub-prediction image; according to the cutting rule, the target detection object sub-prediction images are spliced to obtain the first target detection object prediction map corresponding to the image to be processed.
  • the target detection object segmentation model includes an encoding layer and a decoding layer; the object region determining unit is specifically used to input the image to be processed into the encoding layer of the target detection object segmentation model, and the image to be processed is encoded through the encoding layer , Obtain the image feature information of the image to be processed; input the image feature information into the decoding layer of the target detection object segmentation model, and decode the image feature information through the decoding layer to obtain the first target detection object prediction map corresponding to the image to be processed .
  • the characteristic image acquisition module is used to input the target detection object image into the heat map prediction network to obtain the feature object heat point map corresponding to the target detection object image, obtain from the heat point map the heat value with which each pixel of the target detection object image belongs to the feature object, and determine the area of the feature object from the heat point map according to these heat values to obtain the feature object segmentation map.
  • the feature information acquisition module is used to determine the area where the feature object is located from the feature object segmentation map, crop the region image of the feature object from the target detection object image according to that area, and calculate the quantitative feature information of the target detection object from the pixel values of the pixels in the region image.
  • the feature information acquisition module is also used to determine the number of pixels in the area where the feature object is located from the feature object segmentation map, obtain the total number of pixels in the target detection object image, and calculate the ratio of the two to obtain the quantitative feature information of the target detection object.
  • the object classification device based on artificial intelligence further includes a feature object prediction model training module, which is used to: obtain the sample detection object image and the contour area annotation map of the sample feature object in the sample detection object image; Obtain the corresponding thermal point map of the sample feature object from the contour area annotation map; input the sample detection object image to the pre-built feature object prediction model thermal map prediction network to obtain the feature object thermal point prediction map corresponding to the sample detection object image; Calculate the loss value of the feature object prediction model according to the feature object's thermal point prediction map and the sample feature object's thermal point map; according to the loss value of the feature object prediction model, train the feature object prediction model's thermal map prediction network until the convergence condition is reached , Get the feature object prediction model after training.
  • the artificial intelligence-based object classification device further includes a target detection object segmentation model training module, which is used to: obtain a sample image, and an outline area label map of the sample detection object in the sample image; and input the sample image into the pre-built In the detection object segmentation model, the sample detection object prediction map corresponding to the sample image is obtained; the loss value of the detection object segmentation model is calculated according to the sample detection object prediction map and the contour area annotation map of the sample detection object; the detection object is calculated according to the loss value The segmentation model is trained until the convergence condition is reached, and the target detection object segmentation model is obtained.
  • an apparatus 1700 for object classification based on artificial intelligence includes:
  • the pathological image acquisition module 1702 is used to acquire a pathological image of breast tissue, where the pathological image of breast tissue includes a breast duct;
  • the duct image acquisition module 1704 is used to segment the mammary duct image of the mammary duct from the pathological image of the breast tissue;
  • the cell area map acquisition module 1706 is used to input the breast duct image into the feature object prediction model to obtain a cell segmentation map of the cells in the breast duct image;
  • the duct feature acquisition module 1708 is used to acquire cell feature information and sieve feature information according to the breast duct image and cell segmentation map;
  • the duct classification module 1710 is used to classify the breast duct image according to the cell feature information and the sieve feature information, and obtain the lesion type information of the breast duct in the breast tissue pathological image.
  • Fig. 18 shows an internal structure diagram of a computer device in an embodiment.
  • the computer device may specifically be the terminal 110 (or the server 120) in FIG. 1.
  • the computer device includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus.
  • the memory includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium of the computer device stores an operating system, and may also store computer-readable instructions.
  • when executed by the processor, the computer-readable instructions stored in the non-volatile storage medium can cause the processor to implement an artificial intelligence-based object classification method.
  • the internal memory may also store computer-readable instructions, and when these computer-readable instructions are executed by the processor, they can cause the processor to execute an artificial intelligence-based object classification method.
  • the display screen of the computer device can be a liquid crystal display screen or an electronic ink display screen, and the input device can be, for example, an external keyboard, touchpad, or mouse.
  • FIG. 18 is only a block diagram of part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution of the present application is applied.
  • the specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • the artificial intelligence-based object classification apparatus provided in the present application can be implemented in a form of computer-readable instructions, and the computer-readable instructions can run on the computer device as shown in FIG. 18.
  • the memory of the computer equipment can store various program modules that make up the artificial intelligence-based object classification device, such as the image acquisition module, image segmentation module, characteristic image acquisition module, characteristic information acquisition module, and object classification module shown in FIG. 16.
  • the computer-readable instructions formed by the various program modules cause the processor to execute the steps in the artificial intelligence-based object classification method described in the various embodiments of the present application described in this specification.
  • the computer device shown in FIG. 18 may execute step S202 through the image acquisition module in the artificial intelligence-based object classification apparatus shown in FIG. 16.
  • the computer device may execute step S204 through the image segmentation module.
  • the computer device may execute step S206 through the feature image acquisition module.
  • the computer device may execute step S208 through the feature information acquisition module.
  • the computer device may execute step S210 through the object classification module.
  • a computer device including a memory and a processor, where the memory stores computer-readable instructions.
  • when the computer-readable instructions are executed by the processor, the processor performs the steps of the above-mentioned artificial intelligence-based object classification method.
  • the steps of the artificial intelligence-based object classification method may be the steps in the artificial intelligence-based object classification method of each of the foregoing embodiments.
  • a computer-readable storage medium which stores computer-readable instructions, and when the computer-readable instructions are executed by a processor, the processor executes the steps of the above artificial intelligence-based object classification method.
  • the steps of the artificial intelligence-based object classification method may be the steps in the artificial intelligence-based object classification method of each of the foregoing embodiments.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Image Analysis (AREA)

Abstract

An artificial intelligence-based object classification method and apparatus, a computer-readable storage medium, and a computer device. The method includes: acquiring an image to be processed, where the image to be processed includes a target detection object (S202); segmenting, from the image to be processed, a target detection object image of the target detection object (S204); inputting the target detection object image into a feature object prediction model to obtain a feature object segmentation map of the feature object in the target detection object image (S206); acquiring quantitative feature information of the target detection object according to the target detection object image and the feature object segmentation map (S208); and classifying the target detection object image according to the quantitative feature information to obtain category information of the target detection object in the image to be processed (S210).

Description

基于人工智能的对象分类方法以及装置、医学影像设备
本申请要求于2020年02月17日提交中国专利局,申请号为2020100961860,申请名称为“基于人工智能的对象分类方法以及装置、医学影像设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及图像处理技术领域,特别是涉及一种基于人工智能的对象分类方法、装置、计算机可读存储介质和计算机设备,以及一种医学影像设备。
背景技术
随着图像处理技术的发展,将图像处理技术应用于医学领域,以实现对医学图像中不同对象的识别或分类越来越普遍。传统对象分类技术中,通常直接对医学影像图像进行分类,但是由于传统对象分类技术的分类方式比较粗糙,容易造成误分类,导致对分类准确率较低。
发明内容
根据本申请提供的各种实施例,提供一种基于人工智能的对象分类方法、装置、计算机可读存储介质和计算机设备,以及一种医学影像设备。
一种基于人工智能的对象分类方法,包括:
获取待处理图像,其中所述待处理图像包括目标检测对象;
从所述待处理图像中,分割出所述目标检测对象的目标检测对象图像;
将所述目标检测对象图像输入至特征对象预测模型中,得到所述目标检测对象图像中特征对象的特征对象分割图;
根据所述目标检测对象图像以及所述特征对象分割图,获取所述目标检测对象的量化特征信息;
根据所述量化特征信息对所述目标检测对象图像进行分类,得到所述待处理图像中所述目标检测对象的类别信息。
一种基于人工智能的对象分类方法,包括:
获取乳腺组织病理图像,其中所述乳腺组织病理图像包括乳腺导管;
从所述乳腺组织病理图像中,分割出所述乳腺导管的乳腺导管图像;
将所述乳腺导管图像输入至特征对象预测模型中,得到所述乳腺导管图像中细胞的细胞分割图;
根据所述乳腺导管图像以及所述细胞分割图,获取所述细胞特征信息以及筛孔特征信息;
根据所述细胞特征信息以及筛孔特征信息,对所述乳腺导管图像进行分类,得到所述乳腺组织病理图像中所述乳腺导管的病灶类别信息。
一种医学影像设备,包括:
显微镜扫描仪,用于获取乳腺组织病理图像;
存储器,所述存储器中存储有计算机可读指令;
处理器,所述计算机可读指令被所述处理器执行是,使得处理器执行以下步骤:从所述乳腺组织病理图像中,分割出所述乳腺导管的乳腺导管图像;将所述乳腺导管图像输入至特征对象预测模型中,得到所述乳腺导管图像中细胞的细胞分割图;根据所述乳腺导管图像以及所述细胞分割图,获取所述细胞特征信息以及筛孔特征信息;根据所述细胞特征信息以及筛孔特征信息,对所述乳腺导管图像进行分类,得到所述乳腺组织病理图像中所述乳腺导管的病灶类别信息;
显示器,用于显示所述乳腺组织病理图像以及所述乳腺组织病理图像中所述乳腺导管的病灶类别信息。
一种基于人工智能的对象分类装置,其特征在于,所述装置包括:
图像获取模块,用于获取待处理图像,其中所述待处理图像包括目标检测 对象;
图像分割模块,用于从所述待处理图像中,分割出所述目标检测对象的目标检测对象图像;
特征图像获取模块,用于将所述目标检测对象图像输入至特征对象预测模型中,得到所述目标检测对象图像中特征对象的特征对象分割图;
特征信息获取模块,用于根据所述目标检测对象图像以及所述特征对象分割图,获取所述目标检测对象的量化特征信息;
对象分类模块,用于根据所述量化特征信息对所述目标检测对象图像进行分类,得到所述待处理图像中所述目标检测对象的类别信息。
一种基于人工智能的对象分类装置,包括:
病理图像获取模块,用于获取乳腺组织病理图像,其中所述乳腺组织病理图像包括乳腺导管;
导管图像获取模块,用于从所述乳腺组织病理图像中,分割出所述乳腺导管的乳腺导管图像;
细胞区域图获取模块,用于将所述乳腺导管图像输入至特征对象预测模型中,得到所述乳腺导管图像中细胞的细胞分割图;
导管特征获取模块,用于根据所述乳腺导管图像以及所述细胞分割图,获取所述细胞特征信息以及筛孔特征信息;
导管分类模块,用于根据所述细胞特征信息以及筛孔特征信息,对所述乳腺导管图像进行分类,得到所述乳腺组织病理图像中所述乳腺导管的病灶类别信息。
一个或多个存储有计算机可读指令的非易失性存储介质,所述计算机可读指令被一个或多个处理器执行时,,使得处理器执行以下步骤:
获取待处理图像,其中所述待处理图像包括目标检测对象;
从所述待处理图像中,分割出所述目标检测对象的目标检测对象图像;
将所述目标检测对象图像输入至特征对象预测模型中,得到所述目标检测 对象图像中特征对象的特征对象分割图;
根据所述目标检测对象图像以及所述特征对象分割图,获取所述目标检测对象的量化特征信息;
根据所述量化特征信息对所述目标检测对象图像进行分类,得到所述待处理图像中所述目标检测对象的类别信息。
一种计算机设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机可读指令,该处理器执行所述计算机可读指令时实现以下步骤:
获取待处理图像,其中所述待处理图像包括目标检测对象;
从所述待处理图像中,分割出所述目标检测对象的目标检测对象图像;
将所述目标检测对象图像输入至特征对象预测模型中,得到所述目标检测对象图像中特征对象的特征对象分割图;
根据所述目标检测对象图像以及所述特征对象分割图,获取所述目标检测对象的量化特征信息;
根据所述量化特征信息对所述目标检测对象图像进行分类,得到所述待处理图像中所述目标检测对象的类别信息。
本申请的一个或多个实施例的细节在下面的附图和描述中提出。本申请的其它特征、目的和优点将从说明书、附图以及权利要求书变得明显。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为一个实施例中基于人工智能的对象分类方法的应用环境图;
图2为一个实施例中基于人工智能的对象分类方法的流程示意图;
图3为一个实施例中乳腺组织病理图像的示意图;
图4为一个实施例中目标检测对象图像切割步骤的流程示意图;
图5a为一个实施例中第一目标检测对象预测图获取步骤的流程示意图;
图5b为一个实施例中目标检测对象分割模型的原理框架图;
图6为一个实施例中在待处理图像中目标检测对象所在区域获取步骤的流程示意图;
图7a为另一个实施例中第一目标检测对象预测图获取步骤的流程示意图;
图7b为一个实施例中将待处理图像切割为多个待处理子图像的示意图;
图7c为一个实施例中将各目标检测对象子预测图像进行拼接的示意图;
图8为另一个实施例中在待处理图像中目标检测对象所在区域获取步骤的流程示意图;
图9a为一个实施例中特征对象分割图获取步骤的流程示意图;
图9b为一个实施例中特征对象预测模型的原理框架图;
图9c为一个实施例中LinkNet网络的网络结构示意图;
图10为一个实施例中量化特征信息获取步骤的流程示意图;
图11为另一个实施例中量化特征信息获取步骤的流程示意图
图12为一个实施例中特征对象预测模型训练步骤的流程示意图;
图13为一个实施例中目标检测对象分割模型训练步骤的流程示意图;
图14为另一个实施中基于人工智能的对象分类方法的流程示意图;
图15为一个实施例中医学影像设备的结构框图;
图16为一个实施例中基于人工智能的对象分类装置的结构框图;
图17为另一个实施例中基于人工智能的对象分类装置的结构框图;
图18为一个实施例中计算机设备的结构框图。
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
人工智能(Artificial Intelligence,AI)是利用数字计算机或者数字计算机控制的机器模拟、延伸和扩展人的智能,感知环境、获取知识并使用知识获 得最佳结果的理论、方法、技术及应用系统。换句话说,人工智能是计算机科学的一个综合技术,它企图了解智能的实质,并生产出一种新的能以人类智能相似的方式做出反应的智能机器。人工智能也就是研究各种智能机器的设计原理与实现方法,使机器具有感知、推理与决策的功能。
人工智能技术是一门综合学科,涉及领域广泛,既有硬件层面的技术也有软件层面的技术。人工智能基础技术一般包括如传感器、专用人工智能芯片、云计算、分布式存储、大数据处理技术、操作/交互系统、机电一体化等技术。人工智能软件技术主要包括计算机视觉技术、语音处理技术、自然语言处理技术以及机器学习/深度学习等几大方向。
计算机视觉技术(Computer Vision,CV)计算机视觉是一门研究如何使机器“看”的科学,更进一步的说,就是指用摄影机和电脑代替人眼对目标进行识别、跟踪和测量等机器视觉,并进一步做图形处理,使电脑处理成为更适合人眼观察或传送给仪器检测的图像。作为一个科学学科,计算机视觉研究相关的理论和技术,试图建立能够从图像或者多维数据中获取信息的人工智能系统。计算机视觉技术通常包括图像处理、图像识别、图像语义理解、图像检索、OCR、视频处理、视频语义理解、视频内容/行为识别、三维物体重建、3D技术、虚拟现实、增强现实、同步定位与地图构建等技术,还包括常见的人脸识别、指纹识别等生物特征识别技术。
机器学习(Machine Learning,ML)是一门多领域交叉学科,涉及概率论、统计学、逼近论、凸分析、算法复杂度理论等多门学科。专门研究计算机怎样模拟或实现人类的学习行为,以获取新的知识或技能,重新组织已有的知识结构使之不断改善自身的性能。机器学习是人工智能的核心,是使计算机具有智能的根本途径,其应用遍及人工智能的各个领域。机器学习和深度学习通常包括人工神经网络、置信网络、强化学习、迁移学习、归纳学习、式教学习等技术。
图1为一个实施例中基于人工智能的对象分类方法的应用环境图。参照图1,该对象检测方法应用于对象分类系统。该对象分类系统包括终端110和服务器120。终端110和服务器120通过网络连接。终端110具体可以是台式终端或移 动终端,移动终端具体可以手机、平板电脑、笔记本电脑等中的至少一种。服务器120可以用独立的服务器或者是多个服务器组成的服务器集群来实现。
具体地,终端110获取待处理图像,将待处理图像发送至服务器120。服务器120从所述待处理图像中,分割出所述目标检测对象的目标检测对象图像;将所述目标检测对象图像输入至特征对象预测模型中,得到所述目标检测对象图像中特征对象的特征对象分割图;根据所述目标检测对象图像以及所述特征对象分割图,获取所述目标检测对象的量化特征信息;根据所述量化特征信息对所述目标检测对象图像进行分类,得到所述待处理图像中所述目标检测对象的类别信息。进一步地,服务器120将待处理图像、待处理图像中的目标检测对象的目标检测对象图像、以及待处理图像中的目标检测对象的类别信息返回终端110,由终端110对这些图像以及类别信息进行显示。
如图2所示,在一个实施例中,提供了一种基于人工智能的对象分类方法。本实施例主要以该方法应用于上述图1中的服务器120来举例说明。参照图2,该基于人工智能的对象分类方法具体包括如下步骤:
步骤S202,获取待处理图像,其中待处理图像包括目标检测对象。
其中,待处理图像包括但不限于图片、影片等,具体可以是通过相机、扫描仪等装置获取的图像、通过截屏获取的图像或者是通过可上传图像的应用程序上传的图像等等;其中,待处理图像中包括一个或多个目标检测对象,目标检测对象是指待处理图像中需要检测、分类的对象。
在实际应用场景中,待处理图像可以但不限于是病理切片图像,目标检测对象可以但不限于是病理切片图像中的机体器官、组织或细胞。其中,病理切片图像具体可以是通过医学影像设备(例如数字病理切片扫描仪、数字切片显微镜等)拍摄的图像,例如,可以是全视野数字病理切片图像(Whole Slide Image,WSI)。
在一个实施例中,待处理图像是乳腺组织病理图像,目标检测对象是乳腺导管。如图3所示,图3为一个实施例中乳腺组织病理图像的示意图,图中标注区域310为一个目标检测对象,即乳腺导管。
步骤S204,从待处理图像中,分割出目标检测对象的目标检测对象图像。
其中,目标检测对象图像是指在待处理图像中,目标检测对象所在区域的区域图像。在获取到待处理图像以后,可从待处理图像中将目标检测对象所在区域的区域图像分割出来,该区域图像即为目标检测对象图像。
具体地,从待处理图像中分割出目标检测对象的目标检测对象图像,可以是利用图像分割算法从待处理图像中确定目标检测对象所在区域,然后截取目标检测对象所在区域的区域图像,作为目标检测对象的目标检测对象图像,其中,图像分割算法可以采用基于阈值的分割算法、基于边缘检测的分割方法等;也可以是将待处理图像输入至用于图像分割的深度学习模型中,通过深度学习模型预测目标检测对象所在区域,然后根据预测得到的目标检测对象所在区域从待处理图像中截取目标检测对象所在区域的区域图像,作为目标检测对象的目标检测对象图像。
步骤S206,将目标检测对象图像输入至特征对象预测模型中,得到目标检测对象图像中特征对象的特征对象分割图。
其中,特征对象是指在目标检测对象中包含目标检测对象特征信息的对象。在实际应用场景中,以待处理图像为病理切片图像、目标检测对象为机体组织为例,目标检测对象的特征对象可以为机体组织的细胞。
其中,特征对象分割图是指与目标检测对象图像尺寸一样的、已标注出特征对象所在区域的图像。在一个实施例中,特征对象分割图可以是二值图,也就是特征对象分割图呈现出只有黑和白的视觉效果。例如,在特征对象分割图中,目标检测对象图像中特征对象所在区域可显示为白色,非特征对象所在区域可显示为黑色。
其中,特征对象预测模型是用于判断目标检测对象图像中各个像素点是否属于特征对象,以输出特征对象分割图的网络模型。这里的特征对象预测模型是已训练好的网络模型,可直接用于判断目标检测对象图像中各个像素点是否属于特征对象,输出特征对象分割图。其中,特征对象预测模型可采用全卷积网络结构FCN、卷积神经网络结构U-net等神经网络结构,在此不做限定。具体地,特征对象预测模型中包括但不限于编码层和解码层,编码层是用来对目标检测对象图像进行编码压缩的,提取维度更低的低层语义特征图,而解码层是 用来对编码层输出的低层语义特征图进行解码运算,输出与目标检测对象图像尺寸一样的特征对象分割图。
步骤S208,根据目标检测对象图像以及特征对象分割图,获取目标检测对象的量化特征信息。
其中,量化特征信息是指目标检测对象中特征对象的各项特征实现量化后的信息,例如目标检测对象中特征对象的数量、大小、圆形度、像素的灰度值等。
其中,在获取到目标检测对象图像,以及与目标检测对象图像对应的特征对象分割图,可根据目标检测对象图像以及特征对象分割图的图像数据,计算目标检测对象的量化特征信息。
具体地,可以是从特征对象分割图中获取特征对象所在区域,然后从目标检测对象图像中,确定与特征对象所在区域相对应的区域,并将该区域的图像确定为特征对象的区域图像,最后根据特征对象的区域图像上像素点的像素信息计算量化特征信息。
步骤S210,根据量化特征信息对目标检测对象图像进行分类,得到待处理图像中目标检测对象的类别信息。
其中,在获取到目标检测对象图像的量化特征信息后,可根据目标检测对象图像的量化特征信息,对目标检测对象图像进行分类,以获取目标检测对象图像对应的目标检测对象的类别信息。
具体地,根据量化特征信息对目标检测对象图像进行分类,可以是将目标检测对象图像的量化特征信息输入至已经过训练的分类器中,利用分类器对目标检测对象图像进行分类,其中,分类器可以采用基于机器学习的分类器,例如SVM(support vector machine,支持向量机)分类器,也可以是基于深度学习的分类器中,例如基于CNN(Convolutional Neural Networks,卷积神经网络)构建的分类器。
进一步地,在一个实施例中,分类器的训练具体可以是获取样本检测对象图像,以及样本检测对象图像对应的标准类别标签,通过将样本检测对象图像输入至预先构建的分类器中,得到与样本检测对象图像对应的预测类别标签, 然后通过对比样本检测对象图像的标注类别标签与预测类别标签,计算分类器的损失值,最后根据分类器的损失值对分类器中的参数进行调整,以获取训练后的分类器。
上述基于人工智能的对象分类方法,在获取到待处理图像后,通过将目标检测对象图像从待处理图像中分割出来,实现从待处理图像中分离出单个目标检测对象的图像,减少不必要的图像数据对后续对象分类的影响,然后对单个目标检测对象进行特征对象检测得到特征对象分割图,从而根据特征对象分割图中标记的特征对象所在区域,从目标检测对象图像中获取特征对象的量化特征信息,实现对目标检测对象在特征对象级的特征信息进行量化,得到量化特征信息,进而根据量化特征信息对目标检测对象进行分类,实现将待处理图像进行级联处理,有效减少待处理图像中不必要的图像数据,降低待处理图像中不必要的图像数据对对象分类的影响,提高对待处理图像中目标检测对象的分类准确性。
在一个实施例中,如图4所示,步骤S204从待处理图像中,分割出目标检测对象的目标检测对象图像,包括:
步骤S402,将待处理图像输入至目标检测对象分割模型,得到待处理图像对应的第一目标检测对象预测图,根据第一目标检测对象预测图获取在待处理图像中目标检测对象所在区域。
步骤S404,根据目标检测对象所在区域对待处理图像进行分割,得到目标检测对象图像。
其中,目标检测对象分割模型是用于判断目标检测对象图像中各个像素点是否属于目标检测对象的网络模型,输出的是第一目标检测对象预测图。这里的目标检测对象分割模型是已训练好的网络模型,可直接用于判断待处理图像中各个像素点是否属于目标检测对象,输出第一目标检测对象预测图。其中,目标检测对象分割模型可采用全卷积网络结构FCN、卷积神经网络结构U-net等神经网络结构,在此不做限定。具体地,目标检测对象分割模型中包括但不限于编码层和解码层,编码层是用来对待处理图像进行编码压缩的,提取维度更低的低层语义特征图,而解码层是用来对编码层输出的低层语义特征图进行解 码运算,输出与待处理图像尺寸一样的第一目标检测对象预测图。
其中,第一目标检测对象预测图是指与待处理图像尺寸一样的、已标注出目标检测对象所在区域的图像。在得到第一目标检测对象预测图后,从第一目标检测对象预测图中获取目标检测对象所在区域,从而对应确定在待处理图像中目标检测对象所在区域,并将待处理图像中目标检测对象所在区域进行分割,得到目标检测对象图像。
在一个实施例中,第一目标检测对象预测图可以是二值图,也就是第一目标检测对象预测图呈现出只有黑和白的视觉效果。例如,在第一目标检测对象预测图中,目标检测对象所在区域可显示为白色,非目标检测对象所在区域可显示为黑色。
具体地,将待处理图像输入至目标检测对象分割模型,得到待处理图像对应的第一目标检测对象预测图,可以是通过目标检测对象分割模型计算待处理图像中各个像素点属于目标检测对象的概率值,目标检测对象分割模型根据概率值将待处理图像中各个像素点进行分类,得到属于目标检测对象的像素点以及不属于目标检测对象像素点,然后目标检测对象分割模型将属于目标检测对象的像素点的灰度值设置为0,将不属于目标检测对象的像素点的灰度至设置为255,得到目标检测对象所在区域可显示为白色、非目标检测对象所在区域可显示为黑色的第一目标检测对象预测图。
进一步地,在一个实施例中,如图5a所示,目标检测对象分割模型包括编码层以及解码层;将待处理图像输入至目标检测对象分割模型,得到待处理图像对应的第一目标检测对象预测图的步骤,包括:
步骤S502,将待处理图像输入至目标检测对象分割模型的编码层,通过编码层对待处理图像进行编码处理,得到待处理图像的图像特征信息;
其中,编码层包括多个卷积层。与编码层连接的是解码层,其中编码层与解码层连接可以使用跳跃连接的连接方式,能够提高像素点分类的准确性。
具体地,将待处理图像输入至目标检测对象分割模型的编码层,通过编码层对待处理图像进行编码压缩,具体编码层可以通过卷积层对待检测图像进行编码压缩,提取待处理图像的低层语义特征信息,待处理图像的低层语义特征 信息可以是待处理图像的基本视觉信息,比如亮度、颜色、纹理等等。
步骤S504,将图像特征信息输入至目标检测对象分割模型的解码层中,通过解码层对图像特征信息进行解码运算,得到待处理图像对应的第一目标检测对象预测图。
其中,在编码层输出得到待处理图像的低层语义特征信息后,将待处理图像的低层语义特征信息输入至目标对象检测模型的解码层,由解码层对待处理图像的低层语义特征信息进行解码运算,最后得到标识了待处理图像的各个像素点是否属于目标检测对象的第一目标检测对象预测图。
具体地,将编码层提取出的待处理图像的低层语义特征信息输入至解码层,在解码层可以使用反卷积层和上采样层对低层语义特征信息进行解码运算,得到对应的第一目标检测对象预测图。其中,解码层输出的第一目标检测对象预测图过程中,可以恢复成与待处理图像相同尺寸大小的图像。进一步地,解码层输出的第一目标检测对象预测图,直观地描述了待处理图像中各个像素点是否属于目标检测对象的结果,展现了目标检测对象所在的区域。
在一个实施例中,如图5b所示,图5b示出一个实施例中目标检测对象分割模型的原理框架图。如图5b的目标对象检测模型的框架中所示,将待处理图像输入至目标对象检测模型中,首先通过编码层(Encoder)对输入的待处理图像进行编码压缩,得到维度较低的低层语义特征信息,如颜色、亮度等。与编码层连接的是解码层(Decoder),将编码层输出的低层语义特征信息输入至解码层中,解码层对低层语义特征信息行解码运算,输出与待处理图像原尺寸一样的第一目标检测对象预测图。以待处理图像为如图3所示的乳腺组织病理图像为例,将乳腺组织病理图像输入至目标对象检测模型,目标对象检测模型输出的第一目标检测对象预测图如图5b中所示,可从第一目标检测对象预测图中得知待处理图像(乳腺组织病理图像)中各个像素点是否属于目标检测对象(乳腺导管),第一目标检测对象预测图展现了目标检测对象所在的区域。
在一个实施例中,如图6所示,将待处理图像输入至目标检测对象分割模型,得到待处理图像对应的第一目标检测对象预测图,根据第一目标检测对象预测图获取在待处理图像中目标检测对象所在区域的步骤,包括:
步骤S602,对待处理图像进行缩放,获取缩放图像;
步骤S604,将缩放图像输入至目标检测对象分割模型,得到缩放图像的第二目标检测对象预测图;
步骤S606,根据第二目标检测对象预测图,获取在缩放图像中的目标检测对象所在区域的区域信息;
步骤S608,根据区域信息,获取在待处理图像中的目标检测对象所在区域。
其中,在待处理图像的图像数据量较大时,可先对待处理图像进行缩放,以获取图像数据量较小的缩放图像。应该理解的是,缩放图像与待处理图像中的图像内容是一致的,只是在图像的尺寸大小、图像数据量多少上的差异。对待处理图像进行缩放以获取与之对应的缩放图像,可有效降低图像数据的处理量,加快对待处理图像分割的速度。
其中,第二目标检测对象预测图是指与缩放图像尺寸一样的、已标注出目标检测对象所在区域的图像。在获取到缩放图像以后,将缩放图像输入至目标检测对象分割模型中,以获取与缩放图像对应的第二目标检测对象预测图,然后从第二目标检测对象预测图确定目标检测对象所在区域,即得到目标检测对象在缩放图像中的目标检测对象所在区域,在获取在缩放图像中的目标检测对象所在区域的区域信息后,对应在待处理图像中获取在待处理图像中的目标检测对象所在的区域。
同样的,在一个实施例中,第二目标检测对象预测图可以是二值图。
应该理解的是,将缩放图像输入至目标检测对象分割模型后,目标检测对象分割模型对缩放图像的处理过程,与目标检测对象分割模型对待处理图像的处理过程是一样的。
在一个实施例中,如图7a所示,将待处理图像输入至目标检测对象分割模型,得到待处理图像对应的第一目标检测对象预测图的步骤,包括:
步骤S702,将待处理图像按照切割规则切割为多个待处理子图像;
步骤S704,将各待处理子图像输入至目标检测对象分割模型,得到各待处理子图像对应目标检测对象子预测图像;
步骤S706,根据切割规则将各目标检测对象子预测图像进行拼接,得到与 待处理图像对应的第一目标检测对象预测图。
其中,在待处理图像的图像数据量较大时,可将待处理图像切割为多个图块,得到多个待处理子图像。在将待处理图像进行切割后,分别将各个待处理子图像输入至目标检测图像分割模型中,使得目标检测图像分割模型所处理的图像在尺寸大小、图像数据量多少上大大缩减,可有效降低图像数据的处理量,加快对图像分割的速度。
具体地,在获取到待处理图像以后,将待处理图像按照切割规则,切割为多个待处理子图像;然后逐一将待处理子图像将像输入至目标检测对象分割模型中,以获取与各个待处理子图像对应的多个目标检测对象子预测图像,最后再按切割规则,将多个目标检测对象子预测图像进行拼接,得到整图待处理图像的第一目标检测对象预测图。
以待处理图像为如图3所示的乳腺组织病理图像为例,在获取到乳腺组织病理图像以后,对乳腺组织病理图像按照预先设定的切割规则进行切割,其中,切割规则如图7b所示,将乳腺组织病理图像切割为6*6块乳腺组织病理子图像。然后,逐一将乳腺组织病理子图像将像输入至目标检测对象分割模型中,以获取与各个乳腺组织病理子图像对应的乳腺导管子预测图像,例如,将图7b中的乳腺组织病理子图像702输入至目标检测对象分割模型中,目标检测对象分割模型输出如图7c所示的乳腺导管子预测图像704。最后再按切割规则,将多个乳腺导管子预测图像进行拼接,得到整图的乳腺组织病理图像的乳腺导管预测图,如图7c所示。
应该理解的是,将待处理子图像输入至目标检测对象分割模型后,目标检测对象分割模型对待处理子图像的处理过程,与目标检测对象分割模型对待处理图像的处理过程是一样的。
在一个实施例中,如图8所示,将待处理图像输入至目标检测对象分割模型,得到待处理图像对应的第一目标检测对象预测图,根据第一目标检测对象预测图获取在待处理图像中目标检测对象所在区域的步骤,包括:
步骤S802,对待处理图像进行缩放,获取缩放图像;
步骤S804,将缩放图像按照切割规则切割为多个缩放图像的子图像;
步骤S806,将各缩放图像的子图像输入至目标检测对象分割模型,得到各缩放图像的子图像对应目标检测对象子预测图像;
步骤S808,根据切割规则将各目标检测对象子预测图像进行拼接,得到与缩放图像对应的第二目标检测对象预测图;
步骤S810,根据第二目标检测对象预测图,获取目标检测对象在缩放图像中所在区域的区域信息;
步骤S812,根据区域信息对应在待处理图像中获取目标检测对象所在区域。
其中,在待处理图像的图像数据量较大时,可先对待处理图像进行缩放,得到缩放图像;然后将缩放图像切割为多个图块,得到多个缩放图像的子图像。通过对待处理图像进行缩放、切割处理,实现缩小图像的尺寸大小,降低图像的数据量,可有效降低图像数据的处理量,加快对待处理图像分割的速度。
其中,在获取到待处理图像后,对待处理图像进行缩放,获取缩放图像,并按照切割规则对缩放后的缩放图像进行切割,得到多个缩放图像的子图像;然后,逐一将缩放图像的子图像将像输入至目标检测对象分割模型中,以获取与各个缩放图像的子图像对应的目标检测对象子预测图像,再按切割规则将各个目标检测对象子预测图像进行拼接,得到整图缩放图像的第二目标检测对象预测图。最后,从第二目标检测对象预测图确定目标检测对象所在区域,即得到目标检测对象在缩放图像中的目标检测对象所在区域,在获取在缩放图像中的目标检测对象所在区域的区域信息后,对应在待处理图像中,获取在待处理图像中的目标检测对象所在的区域。
在一个实施例中,如图9a所示,特征对象预测模型包括热力图预测网络,将目标检测对象图像输入至特征对象预测模型中,得到目标检测对象图像中特征对象的特征对象分割图的步骤,包括:
步骤S902,将目标检测对象图像输入至热力图预测网络,得到与目标检测对象图像对应的特征对象热力点图。
其中,热力图预测网络是用于计算目标检测对象图像中属于特征对象的各个像素点的热力值的网络模型。这里的热力图预测网络是已训练好的网络模型,可直接用于计算目标检测对象图像中属于特征对象的各个像素点的热力值。这 里的热力值是指目标检测图像中属于特征对象的各个像素点的概率值。其中,热力图预测网络可以采用全卷积网络结构FCN、卷积神经网络结构U-net、Linknet网络结构等。
其中,特征对象热力点图是描述目标检测对象图像中属于特征对象的各个像素点的热力值(即概率值),可根据特征对象热力点图描述的各个像素点的热力值进行轮廓提取得到特征对象所在区域。
步骤S904,根据特征对象热力点图获取目标检测对象图像中属于特征对象的各个像素点的热力值。
步骤S906,根据热力值,从特征对象热力点图中确定特征对象所在区域,得到特征对象分割图。
其中,在获取到特征对象热力点图后,可获取目标检测对象图像中属于特征对象的各个像素点的热力值,并根据各个像素点的热力值进行轮廓提取确定特征对象所在区域,得到特征对象分割图。
具体地,从特征对象热力点图中获取属于特征对象的各个像素点的热力值后,具体可以利用分水岭算法,对特征对象热力点图进行轮廓提取确定特征对象的所在区域,得到特征对象分割图;其中特征对象分割图可以是二值图,在利用分水岭算法对特征对象热力点图进行轮廓提取确定特征对象的所在区域后,可以将特征对象热力点图中特征对象所在区域的像素点的像素值设置为0,实现在视觉上显示为白色,将目标检测对象图像中非特征对象所在区域的像素点的像素值设置为255,实现在视觉上显示为黑色。
进一步地,从特征对象热力点图中获取属于特征对象的各个像素点的热力值后,具体还可以对根据预设的热力值阈值,对特征对象热力点图直接进行二值化处理,以确定特征对象的所在区域,得到特征对象分割图。具体可以是将将热力值大于预设热力值阈值的像素点的像素值设置为0,实现在视觉上显示为白色,将热力值小于或者等于预设热力值阈值的像素点的像素值设置为255,实现在视觉上显示为黑色。
在一个实施例中,特征对象预测模型如图9b所述,其中,热力图预测网络为LinkNet网络结构,该LinkNet网络结构如图9c所示,LinkNet网络结构中, 每个编码器(Encoder Block)与解码器(Decoder Block)相连接。以目标检测对象为乳腺导管为例,如图9c所示,将乳腺导管对应的目标检测对象图像输入至目标对象检测模型中,首先通过编码器对输入的目标检测对象图像进行编码压缩,得到维度较低的低层语义特征信息,如颜色、亮度等。与编码器连接的是解码器,将编码器输出的低层语义特征信息输入至解码器中,解码器对低层语义特征信息行解码运算,输出与目标检测对象图像原尺寸一样的特征对象分割图,在特征对象分割图中,白色区域为乳腺导管内细胞所在区域,黑色区域为背景或乳腺导管的间质。其中,LinkNet网络结构中编码器的输入连接到对应解码器的输出上,在解码器输出特征对象分割图之前,编码器可以将低层语义特征信息融入到解码器中,使得解码器融合了低层语义特征信息和高层语义特征信息,可有效减少降采样操作时丢失的空间信息,而且解码器是共享从编码器的每一层学习到的参数,可有效减少解码器的参数。
在一个实施例中,如图10所示,步骤S208根据目标检测对象图像以及特征对象分割图,获取目标检测对象的量化特征信息,包括:
步骤S1002,从特征对象分割图中确定特征对象所在区域;
步骤S1004,根据特征对象所在区域,在目标检测对象图像中截取特征对象的区域图像。
其中,特征对象的区域图像是指在目标检测对象图像中截取到的特征对象的图像数据。
步骤S1006,根据特征对象的区域图像中各个像素点的像素值,计算目标检测对象的量化特征信息。
其中,量化特征信息可以是特征对象的颜色深浅度,在具体应用中,可以利用特征对象所在区域的像素点的像素值进行量化表示。例如,在医学应用场景中,假设目标检测对象为乳腺导管,特征对象可以为乳腺导管内的细胞,其中乳腺导管的特征信息包括细胞核染色值,细胞核染色值可通过获取乳腺导管内的细胞所在区域的像素点的像素值进行量化表示。
由于特征对象分割图一般为二值图像,在特征对象分割图中获取的特征对象所在区域的像素点的像素值难以表征特征对象的颜色深浅度。因此,在获取 到特征对象分割图后,可以从特征对象分割图中确定特征对象所在区域,从而根据在特征对象分割图中的特征对象所在区域,对应的在目标检测对象图像中截取特征对象所在区域的区域图像;然后在特征对象的区域图像中获取各个像素点的像素值,计算目标检测对象的量化特征。其中,根据特征对象的区域图像中各个像素点的像素值,计算目标检测对象的量化特征信息,具体可以是计算区域图像中所有像素点的像素值的平均值,将平均值确定为目标检测对象的量化特征信息。
在一个实施例中,如图11所示,步骤S208根据目标检测对象图像以及特征对象分割图,获取目标检测对象的量化特征信息,包括:
步骤S1102,从特征对象分割图中确定特征对象所在区域的像素点数量;
步骤S1104,获取目标检测对象图像的总像素点数量,计算特征对象所在区域的像素点数量与总像素点数量的比值,得到目标检测对象的量化特征信息。
其中,量化特征信息可以是特征对象的形状大小,在具体应用中,可以利用特征对象所在区域的像素点的数量进行量化表示。例如,在医学应用场景中,目标检测对象为乳腺导管,特征对象可以为乳腺导管内的细胞,其中乳腺导管的特征信息包括细胞核的大小,细胞核的大小可通过获取乳腺导管内的细胞所在区域的像素点的数量进行量化表示。可以理解的是,在一张图像中,像素点的数量可等价于在图像中所占面积大小。
具体地,在获取到特征对象分割图后,可以根据特征对象分割图确定特征对象所在区域,并计算特征对象所在区域中的像素点数量;最后,统计目标检测对象图像或特征对象分割图的总像素点数量,通过计算特征对象所在区域的像素数量与所述总像素点数量的比值,确定为目标检测对象的量化特征。
进一步地,在一个实施例中,量化特征信息的获取方式,还包括:从特征对象分割图中确定特征对象所在区域以及特征对象所在区域的面积值,然后对特征对象所在区域进行轮廓提取,并计算特征对象所在区域的轮廓的周长,其中特征对象所在区域的轮廓的周长可以利用特征对象所在区域的轮廓的像素数量表示,进而算出等周长的纯圆形面积值,通过获取等周长的纯圆形面积值和特征对象所在区域的面积的比例,以获得特征对象在圆形度上的量化特征信息。
在一个实施例中,如图12所示,特征对象预测模型的训练步骤,包括:
步骤S1202,获取样本检测对象图像,以及样本检测对象图像中样本特征对象的轮廓区域标注图。
其中,样本检测对象图像是用来训练特征对象预测模型的图像,其中样本检测对象图像中包括多个样本特征对象;样本检测对象图像中样本特征对象的轮廓区域标注图是指已标注出样本特征对象所在区域,与样本检测对象图像是一一对应的。进一步地,样本特征对象的轮廓区域标注图可通过专业的标注人员进行标注,也可以是公开的样本检测对象数据集。在医学应用场景中,样本检测对象图像可以是单个机体组织的图像,样本特征对象的轮廓区域标注图可以是标注了细胞(或细胞核)所在的区域的图像。
步骤S1204,根据样本特征对象的轮廓区域标注图获取对应的样本特征对象热力点图。
其中,在获取到样本特征对象的轮廓区域标注图后,将样本特征对象的轮廓区域标注图转换为样本特征对象热力点图。其中,样本特征对象热力点图描述了在样本检测对象图像中各个像素点属于样本特征对象的概率值。此处的样本特征对象热力点图可以理解为标准的热力点图,准确描述了在样本检测对象图像中各个像素点属于样本特征对象的概率值。
具体地,根据样本特征对象的轮廓区域标注图获取对应的样本特征对象热力点图,具体可以是将样本特征对象的轮廓区域标注图中样本特征对象所在区域的像素点上的概率值确定为1,将非样本特征对象所在区域的像素点上的概率值确定为0;也可以是确定样本特征对象的轮廓区域标注图中样本特征对象所在区域的中心像素点,将该中心像素点上的概率值设置为1,同时将样本特征对象轮廓上的轮廓像素点的概率值设置为预定值,例如可以设置为0.5,然后利用插值法设置样本特征对象所在区域内、中心像素点与轮廓像素点之间的各个像素点上的概率值。可以理解的是,以中心像素点为起点,样本特征对象所在区域内像素点上的概率值逐渐降低,到达样本特征对象轮廓上的轮廓像素点时,轮廓像素点的概率值为该预定值。
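上述两种由轮廓区域标注图生成样本特征对象热力点图的方式,可参考如下示意性代码(假设region_mask为布尔标注图,插值部分利用距离变换做线性近似,对多个对象场景为简化处理,并非限定实现):

```python
import numpy as np
from scipy import ndimage as ndi

def mask_to_heatmap_label(region_mask, contour_value=0.5):
    # 方式一: 区域内概率为1, 区域外为0
    hard_label = region_mask.astype(np.float32)
    # 方式二: 中心像素概率为1, 轮廓像素为预定值(如0.5), 区域内部按到边界的距离线性插值
    dist = ndi.distance_transform_edt(region_mask)     # 各像素到区域边界的距离
    soft_label = np.zeros_like(hard_label)
    if dist.max() > 0:
        soft_label[region_mask] = contour_value + (1.0 - contour_value) * (
            dist[region_mask] / dist.max())
    return hard_label, soft_label
```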
步骤S1206,将样本检测对象图像输入至预先构建的特征对象预测模型的热力图预测网络,得到与样本检测对象图像对应的特征对象热力点预测图。
其中,将获取到的样本检测对象图像输入至特征对象预测模型的热力图预测网络,热力图预测网络的网络结构包括但不限于编码层和解码层,热力图预测网络通过编码层将各个样本检测对象图像进行编码压缩,提取各个样本检测对象图像中维度更低的低层语义特征信息,再将提取出的各个低层语义特征信息通过解码层进行解码运算,计算样本检测对象图像中各个像素点属于特征对象的概率值,从而得到特征对象热力点预测图。其中,特征对象热力点预测图中描述了样本检测对象图像中各个像素点属于样本特征对象的概率值。
步骤S1208,根据特征对象热力点预测图以及样本特征对象热力点图,计算特征对象预测模型的损失值。
步骤S1210,根据特征对象预测模型的损失值,对特征对象预测模型的热力图预测网络进行训练,直到达到收敛条件,得到训练后的特征对象预测模型。
具体地,特征对象热力点预测图中描述了样本检测对象图像中各个像素点属于样本特征对象的概率值;而样本特征对象热力点图中同样描述了各个像素点属于样本特征对象的概率值,且认为样本特征对象热力点图中标注的概率值准确描述了样本检测对象图像中各个像素点属于样本特征对象的概率。因此,可根据特征对象热力点预测图中各个像素点属于样本特征对象的概率值,和样本特征对象热力点图中各个像素点的概率值计算得到损失值,例如,可以计算特征对象热力点预测图和样本特征对象热力点图之间的距离值,将该距离值确定为损失值;也可以根据两者中各个像素点的概率值,利用基于softmax的交叉熵损失函数计算得到损失值。
进一步地,在计算得到特征对象预测模型的损失值后,根据损失值对特征对象预测模型的热力图预测网络进行训练,即对热力图预测网络的网络参数进行调整,直至满足收敛条件,得到训练后的特征对象预测模型。其中,收敛条件可根据实际需求进行设置或者调整,例如当训练损失值达到最小时,则可认为满足收敛条件;或者当损失值不再发生变化时,则可认为满足收敛条件。
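以"两张热力点图之间的距离作为损失值"为例,一次训练迭代可参考如下PyTorch示意(heatmap_net、optimizer等名称为假设性示例,损失此处取均方误差,并非限定实现):

```python
import torch
import torch.nn.functional as F

def train_step(heatmap_net, optimizer, sample_images, sample_heatmaps):
    # sample_images: [N,3,H,W]的样本检测对象图像; sample_heatmaps: [N,1,H,W]的样本特征对象热力点图
    heatmap_net.train()
    pred = heatmap_net(sample_images)            # 特征对象热力点预测图
    loss = F.mse_loss(pred, sample_heatmaps)     # 两张热力点图的距离作为损失值
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```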
具体地,在医学应用场景中,基于网上公开的样本检测对象数据集(Multi-organ Nucleus Segmentation Challenge)对特征对象预测模型进行训练后,特征对象预测模型对细胞的分割准确度可达到0.6632。
在一个实施例中,如图13所示,目标检测对象分割模型的训练步骤,包括:
步骤S1302,获取样本图像,以及样本图像中样本检测对象的轮廓区域标注图。
其中,特征对象预测模型与目标检测对象分割模型是分别单独训练的。其中,目标检测对象分割模型的训练过程具体可以是,先获取样本图像以及样本图像中样本检测对象的轮廓区域标注图。其中,样本图像是用来训练目标检测对象分割模型的图像,样本图像中包括一个或多个样本检测对象;样本检测对象的轮廓区域标注图是指已标注出样本检测对象所在区域的二值图,与样本图像是一一对应的。进一步地,样本检测对象的轮廓区域标注图可通过专业的标注人员进行标注。
在一个实施例中,在医学应用场景中,样本图像可以是完整的病理切片图像,样本检测对象可以是机体器官、组织或细胞等,样本图像中样本检测对象的轮廓区域标注图可以是标注了样本检测对象(例如机体组织)所在区域的轮廓位置的图像,例如样本图像可以是乳腺组织病理图像,轮廓区域标注图可以是已标注出乳腺导管所在区域的轮廓位置的乳腺组织病理图像。
步骤S1304,将样本图像输入至预先构建的检测对象分割模型中,得到与样本图像对应的样本检测对象预测图。
其中,在获取到样本图像以及与样本图像对应的样本检测对象的轮廓区域标注图后,将获取到的样本图像输入至检测对象分割模型,检测对象分割模型的网络结构包括但不限于编码层和解码层,检测对象分割模型通过编码层将各个样本图像进行编码压缩,提取各个样本图像中维度更低的低层语义特征信息,再将提取出的各个低层语义特征信息通过解码层进行解码运算,计算样本图像中各个像素点属于样本检测对象的概率值,从而根据概率值判断各个像素点是否属于样本检测对象,得到样本检测对象预测图。其中,样本检测对象预测图中直观地描述了样本图像中各个像素点是否属于样本检测对象的结果,展现了样本检测对象所在的区域。
步骤S1306,根据样本检测对象预测图与样本检测对象的轮廓区域标注图,计算检测对象分割模型的损失值。
其中,样本检测对象预测图中描述了样本检测对象所在区域;而样本检测对象的轮廓区域标注图标注了样本检测对象所在区域;因此,检测对象分割模型的损失值的计算,具体可以是获取样本检测对象预测图中样本检测对象所在区域的区域信息(例如区域位置坐标),以及样本检测对象的轮廓区域标注图中样本检测对象所在区域的区域信息(例如区域位置坐标),将两者区域信息的差值作为检测对象分割模型的损失值。
进一步地,为了加强对目标检测对象的边缘响应,可以构建样本检测对象的边缘信息来计算损失值。也就是说,也可以是获取样本检测对象预测图中样本检测对象所在区域的轮廓信息(例如轮廓所在的位置坐标),以及样本检测对象的轮廓区域标注图中样本检测对象所在区域的轮廓信息(例如轮廓所在的位置坐标),然后通过计算两个轮廓信息的差值作为检测对象分割模型的损失值。
进一步地,还可以是先获取样本检测对象预测图中样本检测对象所在区域的区域信息和轮廓信息,以及样本检测对象的轮廓区域标注图中样本检测对象所在区域的区域信息和轮廓信息,在计算到两个区域信息的差值以及两个轮廓信息的差值以后,将区域信息的差值以及轮廓信息的差值的和值确定为检测对象分割模型的损失值。
进一步地,样本检测对象预测图中描述了样本图像中各个像素点属于样本检测对象的概率值;而样本检测对象的轮廓区域标注图所标注的样本检测对象所在区域的各个像素点百分之百属于样本检测对象,可将标注的样本检测对象所在区域的各个像素点的概率值确定为1,标注的非样本检测对象所在区域的各个像素点的概率值确定为0。因此,还可根据样本检测对象预测图中各个像素点属于样本检测对象的概率值,和样本检测对象的轮廓区域标注图中各个像素点的概率值计算得到训练损失值。
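将"区域差值"与"轮廓差值"之和作为损失值的做法,可参考如下示意性PyTorch代码(边缘此处用形态学梯度的近似方式提取,函数与变量名均为假设性示例,并非限定实现):

```python
import torch
import torch.nn.functional as F

def region_and_edge_loss(pred_prob, target_mask):
    # pred_prob: [N,1,H,W], 各像素属于样本检测对象的概率; target_mask: [N,1,H,W], 0/1浮点标注
    region_loss = F.binary_cross_entropy(pred_prob, target_mask)
    # 用"最大池化结果减原图"近似形态学梯度, 提取边缘, 加强对目标检测对象的边缘响应
    pred_edge = F.max_pool2d(pred_prob, 3, stride=1, padding=1) - pred_prob
    target_edge = F.max_pool2d(target_mask, 3, stride=1, padding=1) - target_mask
    edge_loss = F.l1_loss(pred_edge, target_edge)
    return region_loss + edge_loss      # 区域差值与轮廓差值之和作为损失值
```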
步骤S1308,根据损失值对检测对象分割模型进行训练,直到达到收敛条件,得到目标检测对象分割模型。
其中,在计算得到检测对象分割模型的损失值后,根据损失值对检测对象分割模型进行训练,对检测对象分割模型的模型参数进行调整,直至满足收敛条件,得到目标检测对象分割模型。其中,收敛条件可根据实际需求进行设置或者调整,例如当损失值达到最小时,则可认为满足收敛条件;或者当损失值不再发生变化时,则可认为满足收敛条件。
在一个实施例中,如图14所示,提供了一种基于人工智能的对象分类方法,包括:
步骤S1402,获取乳腺组织病理图像,其中乳腺组织病理图像包括乳腺导管。
其中,乳腺组织病理图像是指医学影像设备拍摄的图像,可以是全视野数字病理切片图像,医学影像设备包括但不限于数字病理切片扫描仪、数字切片显微镜等。在实际应用场景中,可通过乳腺组织病理图像得知目标检测对象的具体位置,乳腺组织病理图像中的目标检测对象可以是但不限于乳腺导管。
步骤S1404,从乳腺组织病理图像中,分割出乳腺导管的乳腺导管图像。
其中,在获取到乳腺组织病理图像以后,可从乳腺组织病理图像中将乳腺导管所在区域的区域图像分割出来,该区域图像即为乳腺导管图像。
具体地,从乳腺组织病理图像中分割出乳腺导管的乳腺导管图像,具体可以是将乳腺组织病理图像输入至用于图像分割的深度学习模型中,通过深度学习模型预测乳腺导管所在区域,然后根据预测得到的乳腺导管所在区域从乳腺组织病理图像中截取乳腺导管所在区域的区域图像,作为目标检测对象的目标检测对象图像。
进一步地,在实际应用过程中,乳腺组织病理图像通常是在40倍镜或20倍镜下获取的数字病理切片图像,数据量巨大,为了减少数据处理量、提高数据处理速率,可以将40倍镜或20倍镜下获取的乳腺组织病理图像缩放为10倍镜大小或5倍镜大小的乳腺组织病理图像,并通过10倍镜大小或5倍镜大小的乳腺组织病理图像获取乳腺导管所在区域,从而对应地在原始的40倍镜大小或20倍镜大小的乳腺组织病理图像中切割出乳腺导管所在区域的区域图像,作为目标检测对象的目标检测对象图像。
进一步地,还可以将乳腺组织病理图像按照预设的切割规则切割为多个乳腺组织病理子图像,然后分别获取各个乳腺组织病理子图像中乳腺导管所在区域,最后,将各个乳腺组织病理子图像中乳腺导管所在区域的结果按照切割规则进行拼接,获取整幅乳腺组织病理图像中乳腺导管所在区域的结果。例如,如图7b以及图7c所示,在获取到乳腺组织病理图像以后,将乳腺组织病理图像按照预先设定的如图7b所示的切割规则进行切割,得到6*6块乳腺组织病理子图像。然后,对于各个乳腺组织病理子图像,逐一将乳腺组织病理子图像输入至目标检测对象分割模型中,以获取与各个乳腺组织病理子图像对应的乳腺导管子预测图像,例如,将图7b中的乳腺组织病理子图像702输入至目标检测对象分割模型中,目标检测对象分割模型输出如图7c中的乳腺导管子预测图像704。在获得各个乳腺组织病理子图像对应的乳腺导管子预测图像后,再按切割规则将多个乳腺导管子预测图像进行拼接,得到整幅乳腺组织病理图像的乳腺导管预测图,乳腺导管预测图如图7c所示。
步骤S1406,将乳腺导管图像输入至特征对象预测模型中,得到乳腺导管图像中细胞的细胞分割图。
其中,细胞分割图是指与乳腺导管图像尺寸一样的、已标注出细胞所在区域的图像。在一个实施例中,细胞分割图可以是二值图,也就是细胞分割图呈现出只有黑和白的视觉效果。例如,在细胞分割图中,乳腺导管图像中细胞所在区域可显示为白色,非细胞所在区域可显示为黑色。
其中,特征对象预测模型是用于判断乳腺导管图像中各个像素点是否属于细胞,以输出细胞分割图的网络模型。这里的特征对象预测模型是已训练好的网络模型,可直接用于判断乳腺导管图像中各个像素点是否属于细胞,输出细胞分割图。其中,特征对象预测模型可以采用全卷积网络结构FCN、卷积神经网络结构U-net等网络结构,在此不做限定。具体地,特征对象预测模型中包括但不限于编码层和解码层,编码层是用来对乳腺导管图像进行编码压缩的,提取维度更低的低层语义特征图,而解码层是用来对编码层输出的低层语义特征图进行解码运算,输出与乳腺导管图像尺寸一样的细胞分割图。
进一步地,在一个实施例中,特征对象预测模型可以包括热力图预测网络,通过将乳腺导管图像输入至所述热力图预测网络,得到与乳腺导管图像对应的细胞热力点图,然后根据细胞热力点图获取在乳腺导管图像中各个像素点属于细胞的热力值,最后根据乳腺导管图像中各个像素点属于细胞的热力值,从细胞热力点图确定细胞所在区域,得到细胞分割图。具体地,如图9b所示,将乳腺导管图像作为目标检测对象图像输入至特征对象预测模型中,首先通过热力图预测网络输出与乳腺导管图像原尺寸一样的特征对象热力点图,即细胞热力点图,其中细胞热力点图中各个像素点标识了乳腺导管图像中对应像素点属于细胞的热力值;然后,根据细胞热力点图上各个像素点的热力值,利用分水岭算法对细胞热力点图进行轮廓提取,确定细胞的所在区域,得到细胞分割图。其中,热力图预测网络为LinkNet网络结构,LinkNet网络结构中,每个编码器(Encoder Block)与解码器(Decoder Block)相连接。进一步地,在根据该热力值获取到细胞的所在区域后,可将细胞热力点图中细胞所在区域的像素点的像素值设置为255,实现在视觉上显示为白色,将细胞热力点图中非细胞所在区域的像素点的像素值设置为0,实现在视觉上显示为黑色。
步骤S1408,根据乳腺导管图像以及细胞分割图,获取细胞特征信息以及筛孔特征信息。
其中,细胞特征信息以及筛孔特征信息是指乳腺导管中细胞的各项特征实现量化后的信息,例如细胞特征信息包括但不限于乳腺导管中细胞数量、细胞大小、细胞圆形度以及细胞染色值,筛孔特征信息包括但不限于乳腺导管内筛孔大小以及细胞质染色值。
其中,在获取到乳腺导管图像以及与乳腺导管图像对应的细胞分割图后,可根据乳腺导管图像以及细胞分割图的图像数据,计算乳腺导管中细胞的细胞特征信息以及筛孔特征信息。
具体地,可以是从细胞分割图中获取细胞所在区域,然后从乳腺导管图像中确定与细胞所在区域相对应的区域,并将该区域的图像确定为细胞区域图像,最后根据细胞区域图像上像素点的信息计算用于表示细胞核染色值的细胞特征信息以及用于表示细胞质染色值的筛孔特征信息。还可以是从细胞分割图中确定细胞所在区域的像素点数量,然后获取乳腺导管图像的总像素点数量,计算细胞所在区域的像素点数量与总像素点数量的比值,得到表示细胞大小的细胞特征信息以及用于表示筛孔大小的筛孔特征信息。
进一步地,细胞特征信息的获取方式,还包括:从细胞分割图中确定细胞所在区域以及细胞所在区域的面积值;然后对细胞所在区域进行轮廓提取,计算细胞所在区域的轮廓的像素长度,即得到细胞所在区域的轮廓的周长;最后计算等周长的纯圆形面积值和细胞所在区域的面积值的比例,得到圆形度,以获得细胞在圆形度上的细胞特征信息。
步骤S1410,根据细胞特征信息以及筛孔特征信息,对乳腺导管图像进行分类,得到乳腺组织病理图像中乳腺导管的病灶类别信息。
其中,在获取到细胞特征信息以及筛孔特征信息后,可根据乳腺导管图像的细胞特征信息以及筛孔特征信息,对乳腺导管图像进行分类,以获取乳腺组织病理图像中各个乳腺导管的病灶类别信息。
具体地,根据细胞特征信息以及筛孔特征信息对乳腺导管图像进行分类,可以是将乳腺导管图像的细胞特征信息以及筛孔特征信息输入至已经过训练的分类器中,利用分类器对乳腺导管图像进行分类,其中,分类器可以采用基于机器学习的分类器,例如SVM分类器,也可以采用基于深度学习的分类器,例如基于CNN模型构建的分类器。
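以SVM分类器为例,基于量化特征向量进行训练与分类可参考如下示意性代码(基于scikit-learn,特征向量的组成与名称均为假设性示例,并非限定实现):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# features: 每根乳腺导管的量化特征向量, 例如[细胞数量, 细胞大小, 细胞圆形度, 细胞核染色值, 筛孔大小, 细胞质染色值]
# labels: 对应的标准病灶类别标签(由标注数据给出)
def train_duct_classifier(features, labels):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(np.asarray(features), np.asarray(labels))
    return clf

def classify_duct(clf, feature_vector):
    # 返回单根乳腺导管的病灶类别信息
    return clf.predict(np.asarray(feature_vector).reshape(1, -1))[0]
```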
进一步地,在一个实施例中,分类器的训练具体可以是:获取样本乳腺导管图像以及样本乳腺导管图像对应的标准病灶类别标签;将样本乳腺导管图像输入至预先构建的分类器中,得到与样本乳腺导管图像对应的预测病灶类别标签;然后通过对比样本乳腺导管图像的标准病灶类别标签与预测病灶类别标签之间的差异,计算分类器的损失值;最后根据分类器的损失值对分类器中的参数进行调整,以获取训练后的分类器。
在一个实施例中,从所述乳腺组织病理图像中,分割出所述乳腺导管的乳腺导管图像,具体可以是先将乳腺组织病理图像输入至目标检测对象分割模型,得到乳腺组织病理图像对应的乳腺导管预测图,然后根据乳腺导管预测图获取在乳腺组织病理图像中乳腺导管所在区域,最后根据乳腺导管所在区域对乳腺组织病理图像进行分割,得到乳腺导管图像。其中,目标检测对象分割模型是用于判断乳腺组织病理图像中各个像素点是否属于乳腺导管,以输出乳腺导管预测图的网络模型。这里的目标检测对象分割模型是已训练好的网络模型。具体地,目标检测对象分割模型如图5b所示,包括但不限于编码层和解码层,编码层是用来对乳腺组织病理图像进行编码压缩的,提取维度更低的低层语义特征图,而解码层是用来对编码层输出的低层语义特征图进行解码运算,输出与乳腺组织病理图像尺寸一样的乳腺导管预测图。
进一步地,在一个实施例中,乳腺导管预测图可以是二值图,也就是乳腺导管预测图呈现出只有黑和白的视觉效果。例如,在乳腺导管预测图中,乳腺导管所在区域可显示为白色,非乳腺导管所在区域(例如背景、间质)可显示为黑色。具体地,将乳腺组织病理图像输入至目标检测对象分割模型,得到乳腺组织病理图像对应的乳腺导管预测图,可以是通过目标检测对象分割模型计算乳腺组织病理图像中各个像素点属于乳腺导管的概率值,从而根据概率值将乳腺组织病理图像中各个像素点进行分类,得到属于乳腺导管的像素点以及不属于乳腺导管的像素点,然后将属于乳腺导管的像素点的灰度值设置为255,将不属于乳腺导管的像素点的灰度值设置为0,得到乳腺导管所在区域显示为白色、非乳腺导管所在区域显示为黑色的乳腺导管预测图。
在一个实施例中,如图15所示,一种医学影像设备,包括:
显微镜扫描仪1502,用于获取乳腺组织病理图像;
存储器1504,存储器中存储有计算机可读指令;
处理器1506,计算机可读指令被处理器执行时,使得处理器执行以下步骤:从乳腺组织病理图像中,分割出乳腺导管的乳腺导管图像;将乳腺导管图像输入至特征对象预测模型中,得到乳腺导管图像中细胞的细胞分割图;根据乳腺导管图像以及细胞分割图,获取细胞特征信息以及筛孔特征信息;根据细胞特征信息以及筛孔特征信息,对乳腺导管图像进行分类,得到乳腺组织病理图像中乳腺导管的病灶类别信息;
显示器1508,用于显示乳腺组织病理图像以及乳腺组织病理图像中乳腺导管的病灶类别信息。
具体地,医学影像设备可以包括显微镜扫描仪1502、存储器1504、处理器1506和显示器1508。显微镜扫描仪1502将采集到的乳腺组织病理图像发送至存储器1504,存储器中存储有计算机可读指令,计算机可读指令被处理器1506执行时,使得处理器1506执行以下步骤:从乳腺组织病理图像中,分割出乳腺导管的乳腺导管图像;将乳腺导管图像输入至特征对象预测模型中,得到乳腺导管图像中细胞的细胞分割图;根据乳腺导管图像以及细胞分割图,获取细胞特征信息以及筛孔特征信息;根据细胞特征信息以及筛孔特征信息,对乳腺导管图像进行分类,得到乳腺组织病理图像中乳腺导管的病灶类别信息。最后,乳腺组织病理图像以及乳腺组织病理图像中的乳腺导管的病灶类别信息可以在显示器1508上显示,即在显示器1508上的乳腺组织病理图像中标注出乳腺导管所在区域,并对应显示标注出乳腺导管的病灶类别信息。
应该理解的是,虽然上述流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,上述中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
在一个实施例中,如图16所示,提供了一种基于人工智能的对象分类装置1600,该装置包括:
图像获取模块1602,用于获取待处理图像,其中待处理图像包括目标检测对象;
图像分割模块1604,用于从待处理图像中,分割出目标检测对象的目标检测对象图像;
特征图像获取模块1606,用于将目标检测对象图像输入至特征对象预测模型中,得到目标检测对象图像中特征对象的特征对象分割图;
特征信息获取模块1608,用于根据目标检测对象图像以及特征对象分割图,获取目标检测对象的量化特征信息;
对象分类模块1610,用于根据量化特征信息对目标检测对象图像进行分类,得到待处理图像中目标检测对象的类别信息。
在一个实施例中,图像分割模块,包括:
对象区域确定单元,用于将所述待处理图像输入至目标检测对象分割模型,得到所述待处理图像对应的第一目标检测对象预测图,根据所述第一目标检测对象预测图获取在所述待处理图像中所述目标检测对象所在区域;
对象区域分割单元,用于根据所述目标检测对象所在区域对所述待处理图像进行分割,得到目标检测对象图像。
在一个实施例中,图像分割模块,还包括图像缩放单元;
图像缩放单元,用于对待处理图像进行缩放,获取缩放图像;
对象区域确定单元,用于将缩放图像输入至目标检测对象分割模型,得到缩放图像的第二目标检测对象预测图;根据第二目标检测对象预测图,获取在缩放图像中的目标检测对象所在区域的区域信息;根据区域信息,获取在待处理图像中的目标检测对象所在区域。
在一个实施例中,对象区域确定单元,具体用于将待处理图像按照切割规则切割为多个待处理子图像;将各待处理子图像输入至目标检测对象分割模型,得到各待处理子图像对应目标检测对象子预测图像;根据切割规则将各目标检测对象子预测图像进行拼接,得到与待处理图像对应的第一目标检测对象预测图。
在一个实施例中,目标检测对象分割模型包括编码层以及解码层;对象区域确定单元,具体用于将待处理图像输入至目标检测对象分割模型的编码层,通过编码层对待处理图像进行编码处理,得到待处理图像的图像特征信息;将图像特征信息输入至目标检测对象分割模型的解码层中,通过解码层对图像特征信息进行解码运算,得到待处理图像对应的第一目标检测对象预测图。
在一个实施例中,特征图像获取模块,用于将目标检测对象图像输入至热力图预测网络,得到与目标检测对象图像对应的特征对象热力点图;根据特征对象热力点图获取目标检测对象图像中属于特征对象的各个像素点的热力值;根据热力值,从特征对象热力点图确定特征对象所在区域,得到特征对象分割图。
在一个实施例中,特征信息获取模块,用于从特征对象分割图中确定特征对象所在区域;根据特征对象所在区域,在目标检测对象图像中截取特征对象的区域图像;根据特征对象的区域图像中各个像素点的像素值,计算目标检测对象的量化特征信息。
在一个实施例中,特征信息获取模块,用于从特征对象分割图中确定特征对象所在区域的像素点数量;获取目标检测对象图像的总像素点数量,计算特征对象所在区域的像素点数量与总像素点数量的比值,得到目标检测对象的量化特征信息。
在一个实施例中,基于人工智能的对象分类装置还包括特征对象预测模型训练模块,用于:获取样本检测对象图像,以及样本检测对象图像中样本特征对象的轮廓区域标注图;根据样本特征对象的轮廓区域标注图获取对应的样本特征对象热力点图;将样本检测对象图像输入至预先构建的特征对象预测模型的热力图预测网络,得到与样本检测对象图像对应的特征对象热力点预测图;根据特征对象热力点预测图以及样本特征对象热力点图,计算特征对象预测模型的损失值;根据特征对象预测模型的损失值,对特征对象预测模型的热力图预测网络进行训练,直到达到收敛条件,得到训练后的特征对象预测模型。
在一个实施例中,基于人工智能的对象分类装置还包括目标检测对象分割模型训练模块,用于:获取样本图像,以及样本图像中样本检测对象的轮廓区域标注图;将样本图像输入至预先构建的检测对象分割模型中,得到与样本图像对应的样本检测对象预测图;根据样本检测对象预测图与样本检测对象的轮廓区域标注图,计算检测对象分割模型的损失值;根据损失值对检测对象分割模型进行训练,直到达到收敛条件,得到目标检测对象分割模型。
在一个实施例中,如图17所示,提供了一种基于人工智能的对象分类装置1700,该装置包括:
病理图像获取模块1702,用于获取乳腺组织病理图像,其中乳腺组织病理图像包括乳腺导管;
导管图像获取模块1704,用于从乳腺组织病理图像中,分割出乳腺导管的乳腺导管图像;
细胞区域图获取模块1706,用于将乳腺导管图像输入至特征对象预测模型中,得到乳腺导管图像中细胞的细胞分割图;
导管特征获取模块1708,用于根据乳腺导管图像以及细胞分割图,获取细胞特征信息以及筛孔特征信息;
导管分类模块1710,用于根据细胞特征信息以及筛孔特征信息,对乳腺导管图像进行分类,得到乳腺组织病理图像中乳腺导管的病灶类别信息。
图18示出了一个实施例中计算机设备的内部结构图。该计算机设备具体可以是图1中的终端110(或服务器120)。如图18所示,该计算机设备包括通过系统总线连接的处理器、存储器、网络接口、输入装置和显示屏。其中,存储器包括非易失性存储介质和内存储器。该计算机设备的非易失性存储介质存储有操作系统,还可存储有计算机可读指令,该计算机可读指令被处理器执行时,可使得处理器实现基于人工智能的对象分类方法。该内存储器中也可储存有计算机可读指令,该计算机可读指令被处理器执行时,可使得处理器执行基于人工智能的对象分类方法。计算机设备的显示屏可以是液晶显示屏或者电子墨水显示屏,计算机设备的输入装置可以是显示屏上覆盖的触摸层,也可以是计算机设备外壳上设置的按键、轨迹球或触控板,还可以是外接的键盘、触控板或鼠标等。
本领域技术人员可以理解,图18中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中,本申请提供的基于人工智能的对象分类装置可以实现为一种计算机可读指令的形式,计算机可读指令可在如图18所示的计算机设备上运行。计算机设备的存储器中可存储组成该基于人工智能的对象分类装置的各个程序模块,比如,图16所示的图像获取模块、图像分割模块、特征图像获取模块、特征信息获取模块和对象分类模块。各个程序模块构成的计算机可读指令使得处理器执行本说明书中描述的本申请各个实施例的基于人工智能的对象分类方法中的步骤。
例如,图18所示的计算机设备可以通过如图16所示的基于人工智能的对象分类装置中的图像获取模块执行步骤S202。计算机设备可通过图像分割模块执行步骤S204。计算机设备可通过特征图像获取模块执行步骤S206。计算机设备可通过特征信息获取模块执行步骤S208。计算机设备可通过对象分类模块执行步骤S210。
在一个实施例中,提供了一种计算机设备,包括存储器和处理器,存储器存储有计算机可读指令,计算机可读指令被处理器执行时,使得处理器执行上述基于人工智能的对象分类方法的步骤。此处基于人工智能的对象分类方法的步骤可以是上述各个实施例的基于人工智能的对象分类方法中的步骤。
在一个实施例中,提供了一种计算机可读存储介质,存储有计算机可读指令,计算机可读指令被处理器执行时,使得处理器执行上述基于人工智能的对象分类方法的步骤。此处基于人工智能的对象分类方法的步骤可以是上述各个实施例的基于人工智能的对象分类方法中的步骤。
本领域普通技术人员可以理解,实现上述实施例方法中的全部或部分流程,可以通过计算机可读指令来指令相关的硬件完成,所述的计算机可读指令可存储于一非易失性计算机可读取存储介质中,该计算机可读指令在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (15)

  1. 一种基于人工智能的对象分类方法,由计算机设备执行,包括:
    获取待处理图像,其中所述待处理图像包括目标检测对象;
    从所述待处理图像中,分割出所述目标检测对象的目标检测对象图像;
    将所述目标检测对象图像输入至特征对象预测模型中,得到所述目标检测对象图像中特征对象的特征对象分割图;
    根据所述目标检测对象图像以及所述特征对象分割图,获取所述目标检测对象的量化特征信息;及
    根据所述量化特征信息对所述目标检测对象图像进行分类,得到所述待处理图像中所述目标检测对象的类别信息。
  2. 根据权利要求1所述的方法,其特征在于,所述从所述待处理图像中,分割出目标检测对象的目标检测对象图像的步骤,包括:
    将所述待处理图像输入至目标检测对象分割模型,得到所述待处理图像对应的第一目标检测对象预测图,根据所述第一目标检测对象预测图获取在所述待处理图像中所述目标检测对象所在区域;及
    根据所述目标检测对象所在区域对所述待处理图像进行分割,得到目标检测对象图像。
  3. 根据权利要求2所述的方法,其特征在于,所述将所述待处理图像输入至目标检测对象分割模型,得到所述待处理图像对应的第一目标检测对象预测图,根据所述第一目标检测对象预测图获取在所述待处理图像中所述目标检测对象所在区域的步骤,包括:
    对所述待处理图像进行缩放,获取缩放图像;
    将所述缩放图像输入至目标检测对象分割模型,得到所述缩放图像的第二目标检测对象预测图;
    根据所述第二目标检测对象预测图,获取在所述缩放图像中的目标检测对象所在区域的区域信息;及
    根据所述区域信息,获取在所述待处理图像中的目标检测对象所在区域。
  4. 根据权利要求2所述的方法,其特征在于,所述将所述待处理图像输入至目标检测对象分割模型,得到所述待处理图像对应的第一目标检测对象预测图的步骤,包括:
    将所述待处理图像按照切割规则切割为多个待处理子图像;
    将各所述待处理子图像输入至所述目标检测对象分割模型,得到各所述待处理子图像对应目标检测对象子预测图像;及
    根据所述切割规则将各所述目标检测对象子预测图像进行拼接,得到与所述待处理图像对应的第一目标检测对象预测图。
  5. 根据权利要求2所述的方法,其特征在于,所述目标检测对象分割模型包括编码层以及解码层;
    所述将所述待处理图像输入至目标检测对象分割模型,得到所述待处理图像对应的第一目标检测对象预测图的步骤,包括:
    将所述待处理图像输入至所述目标检测对象分割模型的编码层,通过所述编码层对所述待处理图像进行编码处理,得到所述待处理图像的图像特征信息;及
    将所述图像特征信息输入至所述目标检测对象分割模型的解码层中,通过所述解码层对所述图像特征信息进行解码运算,得到所述待处理图像对应的第一目标检测对象预测图。
  6. 根据权利要求1至5任意一项所述的方法,其特征在于,所述特征对象预测模型包括热力图预测网络;所述将所述目标检测对象图像输入至特征对象预测模型中,得到所述目标检测对象图像中特征对象的特征对象分割图的步骤,包括:
    将所述目标检测对象图像输入至所述热力图预测网络,得到与所述目标检测对象图像对应的特征对象热力点图;
    根据所述特征对象热力点图获取所述目标检测对象图像中属于所述特征对象的各个像素点的热力值;及
    根据所述热力值,从所述特征对象热力点图确定所述特征对象所在区域,得到特征对象分割图。
  7. 根据权利要求1至5任意一项所述的方法,其特征在于,所述根据所述目标检测对象图像以及所述特征对象分割图,获取所述目标检测对象的量化特征信息的步骤,包括:
    从所述特征对象分割图中确定所述特征对象所在区域;
    根据所述特征对象所在区域,在所述目标检测对象图像中截取所述特征对象的区域图像;及
    根据所述特征对象的区域图像中各个像素点的像素值,计算所述目标检测对象的量化特征信息。
  8. 根据权利要求1至5任意一项所述的方法,其特征在于,根据所述目标检测对象图像以及所述特征对象分割图,获取所述目标检测对象的量化特征信息的步骤,包括:
    从所述特征对象分割图中确定所述特征对象所在区域的像素点数量;及
    获取所述目标检测对象图像的总像素点数量,计算所述特征对象所在区域的像素点数量与所述总像素点数量的比值,得到所述目标检测对象的量化特征信息。
  9. 根据权利要求1至5任意一项所述的方法,其特征在于,所述特征对象预测模型的训练步骤,包括:
    获取样本检测对象图像,以及所述样本检测对象图像中样本特征对象的轮廓区域标注图;
    根据所述样本特征对象的轮廓区域标注图获取对应的样本特征对象热力点图;
    将所述样本检测对象图像输入至预先构建的特征对象预测模型的热力图预测网络,得到与所述样本检测对象图像对应的特征对象热力点预测图;
    根据所述特征对象热力点预测图以及所述样本特征对象热力点图,计算所述特征对象预测模型的损失值;及
    根据所述特征对象预测模型的损失值,对所述特征对象预测模型的热力图预测网络进行训练,直到达到收敛条件,得到训练后的特征对象预测模型。
  10. 根据权利要求2所述的方法,其特征在于,所述目标检测对象分割模型的训练步骤,包括:
    获取样本图像,以及所述样本图像中样本检测对象的轮廓区域标注图;
    将所述样本图像输入至预先构建的检测对象分割模型中,得到与所述样本图像对应的样本检测对象预测图;
    根据所述样本检测对象预测图与所述样本检测对象的轮廓区域标注图,计算所述检测对象分割模型的损失值;及
    根据所述损失值对所述检测对象分割模型进行训练,直到达到收敛条件,得到目标检测对象分割模型。
  11. 一种基于人工智能的对象分类方法,由计算机设备执行,包括:
    获取乳腺组织病理图像,其中所述乳腺组织病理图像包括乳腺导管;
    从所述乳腺组织病理图像中,分割出所述乳腺导管的乳腺导管图像;
    将所述乳腺导管图像输入至特征对象预测模型中,得到所述乳腺导管图像中细胞的细胞分割图;
    根据所述乳腺导管图像以及所述细胞分割图,获取所述细胞特征信息以及筛孔特征信息;及
    根据所述细胞特征信息以及筛孔特征信息,对所述乳腺导管图像进行分类,得到所述乳腺组织病理图像中所述乳腺导管的病灶类别信息。
  12. 一种医学影像设备,包括:
    显微镜扫描仪,用于获取乳腺组织病理图像;
    存储器,所述存储器中存储有计算机可读指令;
    处理器,所述计算机可读指令被所述处理器执行时,使得处理器执行以下步骤:从所述乳腺组织病理图像中,分割出所述乳腺导管的乳腺导管图像;将所述乳腺导管图像输入至特征对象预测模型中,得到所述乳腺导管图像中细胞的细胞分割图;根据所述乳腺导管图像以及所述细胞分割图,获取所述细胞特征信息以及筛孔特征信息;根据所述细胞特征信息以及筛孔特征信息,对所述乳腺导管图像进行分类,得到所述乳腺组织病理图像中所述乳腺导管的病灶类别信息;及
    显示器,用于显示所述乳腺组织病理图像以及所述乳腺组织病理图像中所述乳腺导管的病灶类别信息。
  13. 一种基于人工智能的对象分类装置,所述装置包括:
    图像获取模块,用于获取待处理图像,其中所述待处理图像包括目标检测对象;
    图像分割模块,用于从所述待处理图像中,分割出所述目标检测对象的目标检测对象图像;
    特征图像获取模块,用于将所述目标检测对象图像输入至特征对象预测模型中,得到所述目标检测对象图像中特征对象的特征对象分割图;
    特征信息获取模块,用于根据所述目标检测对象图像以及所述特征对象分割图,获取所述目标检测对象的量化特征信息;及
    对象分类模块,用于根据所述量化特征信息对所述目标检测对象图像进行分类,得到所述待处理图像中所述目标检测对象的类别信息。
  14. 一个或多个存储有计算机可读指令的非易失性存储介质,所述计算机可读指令被一个或多个处理器执行时,使得所述处理器执行如权利要求1至11中任一项所述方法的步骤。
  15. 一种计算机设备,包括存储器和处理器,所述存储器存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行如权利要求1至11中任一项所述方法的步骤。
PCT/CN2020/126620 2020-02-17 2020-11-05 基于人工智能的对象分类方法以及装置、医学影像设备 WO2021164322A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/686,950 US20220189142A1 (en) 2020-02-17 2022-03-04 Ai-based object classification method and apparatus, and medical imaging device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010096186.0A CN111311578B (zh) 2020-02-17 2020-02-17 基于人工智能的对象分类方法以及装置、医学影像设备
CN202010096186.0 2020-02-17

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/686,950 Continuation US20220189142A1 (en) 2020-02-17 2022-03-04 Ai-based object classification method and apparatus, and medical imaging device and storage medium

Publications (1)

Publication Number Publication Date
WO2021164322A1 true WO2021164322A1 (zh) 2021-08-26

Family

ID=71148408

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/126620 WO2021164322A1 (zh) 2020-02-17 2020-11-05 基于人工智能的对象分类方法以及装置、医学影像设备

Country Status (3)

Country Link
US (1) US20220189142A1 (zh)
CN (1) CN111311578B (zh)
WO (1) WO2021164322A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549296A (zh) * 2022-04-21 2022-05-27 北京世纪好未来教育科技有限公司 图像处理模型的训练方法、图像处理方法及电子设备
CN115661815A (zh) * 2022-12-07 2023-01-31 赛维森(广州)医疗科技服务有限公司 基于全局特征映射的病理图像分类方法、图像分类装置
CN116703929A (zh) * 2023-08-08 2023-09-05 武汉楚精灵医疗科技有限公司 腺管极性紊乱程度参数的确定方法及装置

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458830B (zh) * 2019-03-08 2021-02-09 腾讯科技(深圳)有限公司 图像处理方法、装置、服务器及存储介质
CN110689038B (zh) * 2019-06-25 2024-02-02 深圳市腾讯计算机系统有限公司 神经网络模型的训练方法、装置和医学图像处理系统
CN111311578B (zh) * 2020-02-17 2024-05-03 腾讯科技(深圳)有限公司 基于人工智能的对象分类方法以及装置、医学影像设备
CN111461165A (zh) * 2020-02-26 2020-07-28 上海商汤智能科技有限公司 图像识别方法、识别模型的训练方法及相关装置、设备
CN111768382B (zh) * 2020-06-30 2023-08-15 重庆大学 一种基于肺结节生长形态的交互式分割方法
CN111862044A (zh) * 2020-07-21 2020-10-30 长沙大端信息科技有限公司 超声图像处理方法、装置、计算机设备和存储介质
CN112184635A (zh) * 2020-09-10 2021-01-05 上海商汤智能科技有限公司 目标检测方法、装置、存储介质及设备
CN112464706A (zh) * 2020-10-14 2021-03-09 鲁班嫡系机器人(深圳)有限公司 果实筛选、分拣方法、装置、系统、存储介质及设备
CN112541917B (zh) * 2020-12-10 2022-06-10 清华大学 一种针对脑出血疾病的ct图像处理方法
CN112686908B (zh) * 2020-12-25 2024-02-06 北京达佳互联信息技术有限公司 图像处理方法、信息展示方法、电子设备及存储介质
CN112614573A (zh) * 2021-01-27 2021-04-06 北京小白世纪网络科技有限公司 基于病理图像标注工具的深度学习模型训练方法及装置
CN113192056A (zh) * 2021-05-21 2021-07-30 北京市商汤科技开发有限公司 图像检测方法和相关装置、设备、存储介质
CN114708362B (zh) * 2022-03-02 2023-01-06 北京透彻未来科技有限公司 一种基于web的人工智能预测结果的展示方法
CN115147458B (zh) * 2022-07-21 2023-04-07 北京远度互联科技有限公司 目标跟踪方法、装置、电子设备及存储介质
CN115171217B (zh) * 2022-07-27 2023-03-03 北京拙河科技有限公司 一种动态背景下的动作识别方法及系统
WO2024057084A1 (en) * 2022-09-12 2024-03-21 L&T Technology Services Limited Method and system for image processing and classifying target entities within image

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108304841A (zh) * 2018-01-26 2018-07-20 腾讯科技(深圳)有限公司 乳头定位方法、装置及存储介质
CN109697460A (zh) * 2018-12-05 2019-04-30 华中科技大学 对象检测模型训练方法、目标对象检测方法
CN110276411A (zh) * 2019-06-28 2019-09-24 腾讯科技(深圳)有限公司 图像分类方法、装置、设备、存储介质和医疗电子设备
CN110610181A (zh) * 2019-09-06 2019-12-24 腾讯科技(深圳)有限公司 医学影像识别方法及装置、电子设备及存储介质
US20200005460A1 (en) * 2018-06-28 2020-01-02 Shenzhen Imsight Medical Technology Co. Ltd. Method and device for detecting pulmonary nodule in computed tomography image, and computer-readable storage medium
CN111311578A (zh) * 2020-02-17 2020-06-19 腾讯科技(深圳)有限公司 基于人工智能的对象分类方法以及装置、医学影像设备

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101517614A (zh) * 2006-09-22 2009-08-26 皇家飞利浦电子股份有限公司 肺结节的高级计算机辅助诊断
US10438050B2 (en) * 2013-02-27 2019-10-08 Hitachi, Ltd. Image analysis device, image analysis system, and image analysis method
JP6546271B2 (ja) * 2015-04-02 2019-07-17 株式会社日立製作所 画像処理装置、物体検知装置、画像処理方法
US9858496B2 (en) * 2016-01-20 2018-01-02 Microsoft Technology Licensing, Llc Object detection and classification in images
CN108319953B (zh) * 2017-07-27 2019-07-16 腾讯科技(深圳)有限公司 目标对象的遮挡检测方法及装置、电子设备及存储介质
CN107748889A (zh) * 2017-10-16 2018-03-02 高东平 一种乳腺肿瘤超声图像自动分类方法
CN109903278B (zh) * 2019-02-25 2020-10-27 南京工程学院 基于形状直方图的超声乳腺肿瘤形态量化特征提取方法
CN110490212B (zh) * 2019-02-26 2022-11-08 腾讯科技(深圳)有限公司 钼靶影像处理设备、方法和装置
CN109978037B (zh) * 2019-03-18 2021-08-06 腾讯科技(深圳)有限公司 图像处理方法、模型训练方法、装置、和存储介质
CN110570421B (zh) * 2019-09-18 2022-03-22 北京鹰瞳科技发展股份有限公司 多任务的眼底图像分类方法和设备
CN110796656A (zh) * 2019-11-01 2020-02-14 上海联影智能医疗科技有限公司 图像检测方法、装置、计算机设备和存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108304841A (zh) * 2018-01-26 2018-07-20 腾讯科技(深圳)有限公司 乳头定位方法、装置及存储介质
US20200005460A1 (en) * 2018-06-28 2020-01-02 Shenzhen Imsight Medical Technology Co. Ltd. Method and device for detecting pulmonary nodule in computed tomography image, and computer-readable storage medium
CN109697460A (zh) * 2018-12-05 2019-04-30 华中科技大学 对象检测模型训练方法、目标对象检测方法
CN110276411A (zh) * 2019-06-28 2019-09-24 腾讯科技(深圳)有限公司 图像分类方法、装置、设备、存储介质和医疗电子设备
CN110610181A (zh) * 2019-09-06 2019-12-24 腾讯科技(深圳)有限公司 医学影像识别方法及装置、电子设备及存储介质
CN111311578A (zh) * 2020-02-17 2020-06-19 腾讯科技(深圳)有限公司 基于人工智能的对象分类方法以及装置、医学影像设备

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549296A (zh) * 2022-04-21 2022-05-27 北京世纪好未来教育科技有限公司 图像处理模型的训练方法、图像处理方法及电子设备
CN114549296B (zh) * 2022-04-21 2022-07-12 北京世纪好未来教育科技有限公司 图像处理模型的训练方法、图像处理方法及电子设备
CN115661815A (zh) * 2022-12-07 2023-01-31 赛维森(广州)医疗科技服务有限公司 基于全局特征映射的病理图像分类方法、图像分类装置
CN115661815B (zh) * 2022-12-07 2023-09-12 赛维森(广州)医疗科技服务有限公司 基于全局特征映射的病理图像分类方法、图像分类装置
CN116703929A (zh) * 2023-08-08 2023-09-05 武汉楚精灵医疗科技有限公司 腺管极性紊乱程度参数的确定方法及装置
CN116703929B (zh) * 2023-08-08 2023-10-27 武汉楚精灵医疗科技有限公司 腺管极性紊乱程度参数的确定方法及装置

Also Published As

Publication number Publication date
CN111311578B (zh) 2024-05-03
CN111311578A (zh) 2020-06-19
US20220189142A1 (en) 2022-06-16

Similar Documents

Publication Publication Date Title
WO2021164322A1 (zh) 基于人工智能的对象分类方法以及装置、医学影像设备
CN112017189B (zh) 图像分割方法、装置、计算机设备和存储介质
EP3933688B1 (en) Point cloud segmentation method, computer-readable storage medium and computer device
CN110490212B (zh) 钼靶影像处理设备、方法和装置
JP7026826B2 (ja) 画像処理方法、電子機器および記憶媒体
Setlur et al. Retargeting images and video for preserving information saliency
WO2021139324A1 (zh) 图像识别方法、装置、计算机可读存储介质及电子设备
TW202014984A (zh) 一種圖像處理方法、電子設備及存儲介質
Ge et al. Co-saliency detection via inter and intra saliency propagation
WO2022156626A1 (zh) 一种图像的视线矫正方法、装置、电子设备、计算机可读存储介质及计算机程序产品
Wei et al. Deep group-wise fully convolutional network for co-saliency detection with graph propagation
WO2019071976A1 (zh) 基于区域增长和眼动模型的全景图像显著性检测方法
CN109299303B (zh) 基于可变形卷积与深度网络的手绘草图检索方法
WO2021164280A1 (zh) 三维边缘检测方法、装置、存储介质和计算机设备
CN110942456B (zh) 篡改图像检测方法、装置、设备及存储介质
Singh et al. A novel position prior using fusion of rule of thirds and image center for salient object detection
Qin et al. Face inpainting network for large missing regions based on weighted facial similarity
WO2021159778A1 (zh) 图像处理方法、装置、智能显微镜、可读存储介质和设备
CN115546361A (zh) 三维卡通形象处理方法、装置、计算机设备和存储介质
Bao et al. Video saliency detection using 3D shearlet transform
CN115471901A (zh) 基于生成对抗网络的多姿态人脸正面化方法及系统
CN115497092A (zh) 图像处理方法、装置及设备
Ivamoto et al. Occluded Face In-painting Using Generative Adversarial Networks—A Review
CN113763313A (zh) 文本图像的质量检测方法、装置、介质及电子设备
Kalboussi et al. A spatiotemporal model for video saliency detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20920661

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20920661

Country of ref document: EP

Kind code of ref document: A1