CN115063637A - Image classification method, storage medium, and program product

Info

Publication number: CN115063637A
Application number: CN202210840875.7A
Authority: CN (China)
Prior art keywords: target, image, target segmentation, segmentation image, classification
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 陈磊 (Chen Lei), 刘爱娥 (Liu Ai'e), 薛忠 (Xue Zhong)
Current assignee: Shanghai United Imaging Intelligent Healthcare Co Ltd
Original assignee: Shanghai United Imaging Intelligent Healthcare Co Ltd
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd

Classifications

    • G06V10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06N3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06T7/0012: Image analysis; inspection of images; biomedical image inspection
    • G06V10/25: Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/26: Image preprocessing; segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region
    • G06V10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/806: Fusion of extracted features, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level
    • G06V10/82: Arrangements for image or video recognition or understanding using neural networks
    • G06T2207/20081: Indexing scheme for image analysis or enhancement; training; learning
    • G06T2207/20084: Indexing scheme for image analysis or enhancement; artificial neural networks [ANN]
    • G06T2207/30068: Subject of image; mammography; breast
    • G06T2207/30096: Subject of image; tumor; lesion


Abstract

The present application relates to an image classification method, a storage medium, and a program product. The method comprises: determining, according to acquired medical images of a part to be examined in different body positions, a first target segmentation image and a second target segmentation image corresponding to a lesion region in the medical image of each body position, where the lesion region included in the first target segmentation image is a lesion region of a first shape and the lesion region included in the second target segmentation image is a lesion region of a second shape; and identifying the category of the lesion region according to the first target segmentation image and the second target segmentation image in each body position and a preset neural network model, and determining the target category of the lesion region. By adopting this method, labor cost and classification time can be saved.

Description

Image classification method, storage medium, and program product
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image classification method, a storage medium, and a program product.
Background
With the incidence of breast disease in women rising, women today pay far greater attention to their breast health. Many women now go to the hospital regularly to have their breasts examined, so that any breast problems can be identified and treated early.
In the related art, when a patient visits a hospital for a breast examination, breast images are usually acquired first; a doctor then delineates the lesion in those images based on experience and repeatedly compares it against the current standard breast signs to classify it, finally obtaining the category of the lesion in the patient's breast images.
However, this way of classifying lesions in breast images is time-consuming and labor-intensive.
Disclosure of Invention
In view of the above, it is desirable to provide an image classification method, a storage medium, and a program product that can save labor cost and classification time.
In a first aspect, the present application provides an image classification method, including:
determining, according to acquired medical images of a part to be examined in different body positions, a first target segmentation image and a second target segmentation image corresponding to a lesion region in the medical image of each body position, where the lesion region included in the first target segmentation image is a lesion region of a first shape and the lesion region included in the second target segmentation image is a lesion region of a second shape; and
identifying the category of the lesion region according to the first target segmentation image and the second target segmentation image in each body position and a preset neural network model, and determining the target category of the lesion region.
In one embodiment, the neural network model includes a first classification network and a second classification network, and identifying the category of the lesion region according to the first target segmentation image and the second target segmentation image in each body position and the preset neural network model, and determining the target category of the lesion region, includes:
inputting the first target segmentation image and the second target segmentation image in each body position into the first classification network for classification, and determining a feature map and an initial category corresponding to the lesion region in each target segmentation image; and
determining the target category of the lesion region according to the feature map and the initial category corresponding to the lesion region in each target segmentation image and the second classification network.
In one embodiment, determining the target category of the lesion region according to the feature map and the initial category corresponding to the lesion region in each target segmentation image and the second classification network includes:
determining a quantitative feature corresponding to the lesion region in each target segmentation image according to the first target segmentation image and the second target segmentation image in each body position, the quantitative feature being used to characterize the distribution of the lesion region; and
determining the target category of the lesion region according to the quantitative feature corresponding to the lesion region in each target segmentation image, the feature map and the initial category corresponding to the lesion region in each target segmentation image, and the second classification network.
In one embodiment, determining the target category of the lesion region according to the quantitative feature corresponding to the lesion region in each target segmentation image, the feature map and the initial category corresponding to the lesion region in each target segmentation image, and the second classification network includes:
acquiring clinical characteristic information of a subject to be examined; and
determining the target category of the lesion region according to the clinical characteristic information, the quantitative feature corresponding to the lesion region in each target segmentation image, the feature map and the initial category corresponding to the lesion region in each target segmentation image, and the second classification network.
In one embodiment, determining the target category of the lesion region according to the clinical characteristic information, the quantitative feature corresponding to the lesion region in each target segmentation image, the feature map and the initial category corresponding to the lesion region in each target segmentation image, and the second classification network includes:
performing feature fusion on the clinical characteristic information, the quantitative features corresponding to the lesion regions in the target segmentation images, and the feature maps and initial categories corresponding to the lesion regions in the target segmentation images, then inputting the fused features into the second classification network, and determining the target category of the lesion region;
where the second classification network is obtained by training on a sample feature information set corresponding to a plurality of sample objects, and the sample feature information of each sample object includes sample clinical characteristic information, sample quantitative features, a sample feature map, a sample initial category, and an annotated category of the lesion region.
In one embodiment, determining, according to the acquired medical images of the part to be examined in different body positions, the first target segmentation image and the second target segmentation image corresponding to the lesion region in the medical image of each body position includes:
segmenting the lesion region of the medical image of the part to be examined in each body position according to a preset first segmentation model and a preset second segmentation model, and determining the first target segmentation image and the second target segmentation image corresponding to the medical image in each body position;
where the first segmentation model is obtained by training on a plurality of first sample medical images, each annotated with a lesion region of the first shape, and the second segmentation model is obtained by training on a plurality of second sample medical images, each annotated with a lesion region of the second shape.
In one embodiment, the first classification network is a classification network using an attention mechanism.
In one embodiment, the part to be examined is a breast, and the different body positions include a craniocaudal (CC) view and a mediolateral oblique (MLO) view.
In a second aspect, the present application further provides an image classification apparatus, including:
a determining module, configured to determine, according to acquired medical images of a part to be examined in different body positions, a first target segmentation image and a second target segmentation image corresponding to a lesion region in the medical image of each body position, where the lesion region included in the first target segmentation image is a lesion region of a first shape and the lesion region included in the second target segmentation image is a lesion region of a second shape; and
a classification module, configured to identify the category of the lesion region according to the first target segmentation image and the second target segmentation image in each body position and a preset neural network model, and determine the target category of the lesion region.
In a third aspect, the present application further provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the following steps when executing the computer program:
determining, according to acquired medical images of a part to be examined in different body positions, a first target segmentation image and a second target segmentation image corresponding to a lesion region in the medical image of each body position, where the lesion region included in the first target segmentation image is a lesion region of a first shape and the lesion region included in the second target segmentation image is a lesion region of a second shape; and
identifying the category of the lesion region according to the first target segmentation image and the second target segmentation image in each body position and a preset neural network model, and determining the target category of the lesion region.
In a fourth aspect, the present application also provides a computer readable storage medium having a computer program stored thereon, the computer program when executed by a processor implementing the steps of:
determining, according to acquired medical images of a part to be examined in different body positions, a first target segmentation image and a second target segmentation image corresponding to a lesion region in the medical image of each body position, where the lesion region included in the first target segmentation image is a lesion region of a first shape and the lesion region included in the second target segmentation image is a lesion region of a second shape; and
identifying the category of the lesion region according to the first target segmentation image and the second target segmentation image in each body position and a preset neural network model, and determining the target category of the lesion region.
In a fifth aspect, the present application also provides a computer program product comprising a computer program which, when executed by a processor, performs the steps of:
determining, according to acquired medical images of a part to be examined in different body positions, a first target segmentation image and a second target segmentation image corresponding to a lesion region in the medical image of each body position, where the lesion region included in the first target segmentation image is a lesion region of a first shape and the lesion region included in the second target segmentation image is a lesion region of a second shape; and
identifying the category of the lesion region according to the first target segmentation image and the second target segmentation image in each body position and a preset neural network model, and determining the target category of the lesion region.
According to the image classification method, storage medium, and program product above, a first target segmentation image and a second target segmentation image corresponding to the lesion region in the medical image of each body position are determined from the medical images of the part to be examined in different body positions; the category of the lesion region is identified according to the first and second target segmentation images in the different body positions and a preset neural network model, and the target category of the lesion region is determined. The lesion region included in the first target segmentation image is a lesion region of a first shape, and the lesion region included in the second target segmentation image is a lesion region of a second shape. Because the category of the lesion region can be identified by the preset neural network, without manual classification based on experience, labor cost and classification time are saved, the large errors that manual classification can introduce are avoided, and the accuracy of the classification result is improved. In addition, because two different kinds of lesion information are combined, namely the second-shaped lesion and the first-shaped lesion, and both are lesions in images of multiple body positions, the lesion region is classified by combining images of different body positions with lesions of different annotation types; the combined information is richer, so the resulting classification is more accurate.
Drawings
FIG. 1 is a diagram of the application environment of an image classification method in one embodiment;
FIG. 2 is a flow diagram illustrating a method for image classification in one embodiment;
FIG. 3 is a flowchart illustrating an image classification method according to another embodiment;
FIG. 4 is an exemplary diagram of a first classification network used for classification in another embodiment;
FIG. 5 is a flowchart illustrating an image classification method according to another embodiment;
FIG. 6 is a diagram illustrating an example of obtaining quantified characteristics of a lesion area in another embodiment;
FIG. 7 is a flowchart illustrating an image classification method according to another embodiment;
FIG. 8 is a diagram illustrating a detailed structure for classifying a lesion region in another embodiment;
FIG. 9 is a diagram illustrating an example of a structure for segmenting a lesion region in another embodiment;
FIG. 10 is a block diagram showing the structure of an image classification apparatus according to an embodiment;
FIG. 11 is a diagram illustrating an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
At present, breast images of a patient are interpreted mainly by hand. A doctor generally writes the report with reference to the existing Breast Imaging Reporting and Data System (BI-RADS), which involves judging and classifying many imaging signs; the workload is heavy, and diagnostic differences between doctors exist. Furthermore, when the BI-RADS assessment reaches category 4A, 4B, 4C, or 5, a pathological examination of the breast may be required to determine whether the lesion is malignant. It can thus be seen that the prior-art approach to classifying lesions in breast images is time-consuming and labor-intensive. Accordingly, embodiments of the present application provide an image classification method, a storage medium, and a program product that can solve the above technical problems.
The image classification method provided by the embodiments of the present application can be applied in the application environment shown in fig. 1, where the scanning device 102 is coupled to and communicates with the computer device 104. The scanning device 102 may transmit data obtained by scanning the subject to be examined to the computer device 104 for processing. A data storage system may store the data that the computer device 104 needs to process; it may be integrated on the computer device 104, or placed on the cloud or another network server. The scanning device 102 may be one in which the subject stands during scanning, one in which the subject lies during scanning, or another kind of scanning device. The computer device 104 may be a terminal or a server; a server can be implemented as an independent server or as a server cluster composed of multiple servers. In addition, the scanning device 102 and the computer device 104 may be integrated on one device or may be separate, independent devices.
In one embodiment, as shown in fig. 2, an image classification method is provided. The method is described here as applied to the computer device in fig. 1 and may include the following steps:
S202: determine, according to the acquired medical images of the part to be examined in different body positions, a first target segmentation image and a second target segmentation image corresponding to the lesion region in the medical image of each body position.
Optionally, the part to be examined is a breast, which may be either breast of the subject to be examined or both breasts, and the lesion region may be a tumor, a calcified mass, a lump, or the like in the breast. The different body positions include the craniocaudal (CC) view and the mediolateral oblique (MLO) view, but other body positions may also be included. The medical image acquired in each body position may be a two-dimensional or a three-dimensional image. In other words, medical images in different body positions, i.e. in different views, are obtained.
Specifically, the part to be examined can be placed in the scanning device according to one body-position requirement and scanned to obtain the medical image in that body position; the part can then be placed and scanned according to the other body-position requirements in turn, so that a medical image of the part to be examined in each body position is obtained.
Each medical image may then be segmented, using a segmentation model, a segmentation algorithm, or manual segmentation, into two target segmentation images, recorded respectively as the first target segmentation image and the second target segmentation image. The lesion region included in the first target segmentation image is a lesion region of a first shape, meaning a lesion that appears cloud-like (sheet-like) in the segmentation image. The lesion region included in the second target segmentation image is a lesion region of a second shape, meaning a lesion that appears as star points in the segmentation image, i.e. punctate, individually distributed lesions that are not connected into a sheet.
Generally, the lesion region of the first shape and the lesion region of the second shape can be divided by a threshold, which may be an area, a volume, or another measure; for example, a lesion region whose area is greater than the threshold is a first-shaped lesion region, and a lesion region whose area is at most the threshold is a second-shaped lesion region. The first-shaped lesion region is generally larger than the second-shaped lesion region, where size may be measured by area, volume, and so on; in some cases the first-shaped lesion region may even be formed by several connected second-shaped lesion regions.
Note that the lesion region in the first target segmentation image and the lesion region in the second target segmentation image are the same lesion in the medical image; they merely appear in different forms in the respective target segmentation images.
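As an informal illustration of the area-threshold division just described (not part of the claimed method), a minimal Python sketch is given below; the binary-mask input and the threshold of 50 pixels are assumptions chosen for the example:

```python
import numpy as np
from scipy import ndimage

def split_lesion_mask(mask, area_threshold=50):
    """Split a binary lesion mask into a first-shape (cloud-like, large) mask
    and a second-shape (punctate, small) mask by connected-component area.
    The threshold value is a hypothetical example, not taken from the patent."""
    labeled, num = ndimage.label(mask)
    first_shape = np.zeros_like(mask, dtype=bool)
    second_shape = np.zeros_like(mask, dtype=bool)
    for i in range(1, num + 1):
        component = labeled == i
        if component.sum() > area_threshold:
            first_shape |= component   # large, sheet-like lesion region
        else:
            second_shape |= component  # small, star-point lesion region
    return first_shape, second_shape
```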
S204: identify the category of the lesion region according to the first target segmentation image and the second target segmentation image in each body position and a preset neural network model, and determine the target category of the lesion region.
In this step, the neural network model may be a model built from classification networks: it may include one classification network or several, and may further include other networks such as a segmentation network or a feature extraction network.
After the first target segmentation image and the second target segmentation image in each body position are obtained, the two target segmentation images in each body position can be combined and input into the neural network model to classify the lesion region and obtain its target category. Alternatively, relevant feature information of the lesion region can be derived from the two target segmentation images in each body position, fused, and then input into the neural network model for classification, obtaining the target category of the lesion region; other data may of course also be input into the neural network model together with these.
In addition, when determining the target category of the lesion region, in one possible embodiment the neural network model may output the probability that the lesion region belongs to each category; the maximum of these probabilities is then selected, and the category corresponding to it is taken as the target category of the lesion region.
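For instance, this maximum-probability selection can be sketched as follows (the category names and probability values are purely illustrative, not output by any disclosed model):

```python
def select_target_category(class_probabilities):
    # Pick the category whose predicted probability is highest.
    return max(class_probabilities, key=class_probabilities.get)

# Hypothetical output of the neural network model for one lesion region:
probs = {"benign": 0.18, "malignant": 0.82}
print(select_target_category(probs))  # -> "malignant"
```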
From the above description it can be seen why lesions of different body positions and different shapes are used to identify the lesion category: the shape and other features that the lesion region of the part to be examined forms in the medical image differ markedly between body positions, and lesions of different shapes characterize different lesion features. The combined feature information is therefore rich, more relevant information can be consulted during classification, and the classification result is more accurate.
In this image classification method, a first target segmentation image and a second target segmentation image corresponding to the lesion region in the medical image of each body position are determined from medical images of the part to be examined in different body positions; the category of the lesion region is identified according to the first and second target segmentation images in the different body positions and a preset neural network model, and the target category of the lesion region is determined. The lesion region included in the first target segmentation image is a lesion region of a first shape, and the lesion region included in the second target segmentation image is a lesion region of a second shape. Because the category of the lesion region can be identified by the preset neural network without manual classification based on experience, labor cost and classification time are saved, the large errors that manual classification can introduce are avoided, and the accuracy of the classification result is improved. Moreover, because the information of the second-shaped lesion and the first-shaped lesion is combined, and both are lesions in images of multiple body positions, classifying the lesion region with images of different body positions and lesions of different annotation types makes the combined information richer and the classification result more accurate.
The above embodiments identify the category of the lesion region from the two target segmentation images in each body position by means of the neural network model. The following embodiment describes the identification process in detail for the case where the neural network model includes two classification networks, namely a first classification network and a second classification network.
In another embodiment, as shown in fig. 3, another image classification method is provided, and on the basis of the above embodiment, the above S204 may include the following steps:
S302: input the first target segmentation image and the second target segmentation image in each body position into the first classification network for classification, and determine a feature map and an initial category corresponding to the lesion region in each target segmentation image.
Specifically, as shown in fig. 4, after the two target segmentation images in each body position are obtained, they can be input into the first classification network in turn and processed by convolution, pooling, dense convolution blocks, a fully connected layer, and so on, to obtain the feature map corresponding to each target segmentation image in each body position and a category of the lesion region. The category obtained here is not the final one and is therefore called the initial category; it may be determined by selecting the category with the maximum probability among the category probabilities obtained.
The first classification network is itself a neural network model; optionally, it is a classification network using an attention mechanism. For a breast as the part to be examined, the lesion regions are generally widely distributed over the breast, so target segmentation images in multiple body positions with multiple different lesion annotations are selected (for example, the first-shaped lesion in the first target segmentation image, i.e. the cloud-annotation view in the figure, and the second-shaped lesion in the second target segmentation image, i.e. the star-annotation view in the figure), and an attention-mechanism classification network is used to learn the lesion feature information across body positions and annotation types, improving the stability and accuracy of classifying the lesion region.
Further, the first classification network may be trained using a mean-square-error loss, which can improve the accuracy of the trained classification network and thus the accuracy of classifying the lesion region.
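For illustration only, a minimal PyTorch-style sketch of such a first classification network follows. The layer sizes, the squeeze-and-excitation-style channel attention, and the two-class head are assumptions for the example, not the architecture actually disclosed in the patent:

```python
import torch
import torch.nn as nn

class FirstClassificationNet(nn.Module):
    """Illustrative network returning a feature map and initial class scores,
    mirroring the convolution -> pooling -> dense-block -> attention -> FC
    pipeline described above (all hyperparameters are assumptions)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Channel-attention block (squeeze-and-excitation style).
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(64, 8, 1), nn.ReLU(),
            nn.Conv2d(8, 64, 1), nn.Sigmoid(),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes),
        )

    def forward(self, x):
        feat = self.backbone(x)
        feat = feat * self.attention(feat)  # re-weight channels by attention
        logits = self.head(feat)
        return feat, logits                 # feature map + initial class scores
```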
S304: determine the target category of the lesion region according to the feature map and the initial category corresponding to the lesion region in each target segmentation image and the second classification network.
In this step, after the feature map corresponding to the lesion region in each target segmentation image in each body position and the initial category of the lesion region are obtained, the feature maps may be input into the second classification network for classification, with the target category of the lesion region determined comprehensively in combination with the initial categories; alternatively, each target segmentation image in each body position, each feature map, and each initial category may all be input into the second classification network for classification to obtain the target category of the lesion region. Other variants are also possible and are not specifically limited here.
In this embodiment, each target segmentation image in each body position is input into the first classification network to obtain a feature map and an initial category of the lesion region, and the target category of the lesion region is then determined in combination with the second classification network. Determining the target category with two cascaded classification networks, classifying progressively layer by layer, improves the accuracy of the determined category. Furthermore, because the first classification network uses an attention mechanism, the stability and accuracy of the classification can be further improved.
The above embodiment explained how the first classification network and the second classification network can be combined to identify the category of the lesion region, and described the identification process of the first classification network in detail. The following embodiment details how the second classification network identifies the category of the lesion region.
In another embodiment, as shown in fig. 5, another image classification method is provided, and on the basis of the above embodiment, the above S304 may include the following steps:
S402: determine the quantitative feature corresponding to the lesion region in each target segmentation image according to the first target segmentation image and the second target segmentation image in each body position; the quantitative feature is used to characterize the distribution of the lesion region.
In this step, the first target segmentation image is an image including a lesion region of the first shape, and the second target segmentation image is an image including a lesion region of the second shape.
After the target segmentation images in each body position are obtained, as shown in fig. 6, omics features can be extracted from the lesion region in each first target segmentation image using a feature extraction model, manual extraction, or other methods; the result is the omics features corresponding to the lesion region in each first target segmentation image, which can also be recorded as the quantitative features corresponding to that lesion region. The lesion region in each first target segmentation image generally yields several quantitative features; taking each quantitative feature as one dimension, each first target segmentation image yields an N-dimensional feature, with N greater than or equal to 1.
Illustratively, each first target segmentation image may yield more than 100 omics features, for example shape features (such as flatness and elongation), pixel-level statistical features (such as entropy and the 10th-percentile gray value), texture features (such as the gray-level co-occurrence matrix), and so on.
Similarly, a feature extraction model or manual extraction can be used to extract statistical features of the lesion region in each second target segmentation image in each body position; these are the statistical features corresponding to the lesion region in each second target segmentation image, which can likewise be recorded as quantitative features of the lesion region. The lesion region in each second target segmentation image also yields several quantitative features; taking each as one dimension, each second target segmentation image yields an M-dimensional feature, with M greater than or equal to 1, and M and N may or may not be equal.
Illustratively, each second target segmentation image may yield more than 50 omics features, for example distribution features (such as minimum distance values and the number of outliers), pixel-level statistical features (such as entropy and the 10th-percentile gray value), and so on.
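As an informal sketch of such quantitative-feature extraction, a few simple features are computed below; these are common radiomics examples chosen for illustration, not necessarily the features used in the patent (a library such as pyradiomics would supply a full omics feature set):

```python
import numpy as np
from scipy import ndimage

def quantitative_features(image, mask):
    """Return a small feature vector for the lesion region: area, the
    10th-percentile gray value, intensity entropy, and the number of
    connected components (a simple distribution feature)."""
    pixels = image[mask > 0]
    hist, _ = np.histogram(pixels, bins=32)
    p = hist[hist > 0] / hist.sum()
    entropy = -np.sum(p * np.log2(p))
    _, num_components = ndimage.label(mask)
    return np.array([
        float(mask.sum()),          # lesion area
        np.percentile(pixels, 10),  # 10th-percentile gray value
        entropy,                    # pixel-intensity entropy
        float(num_components),      # count of isolated lesion points
    ])
```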
S404: determine the target category of the lesion region according to the quantitative feature corresponding to the lesion region in each target segmentation image, the feature map and the initial category corresponding to the lesion region in each target segmentation image, and the second classification network.
In this step, after the multi-dimensional quantitative features corresponding to the lesion region in each target segmentation image are obtained, they can be input into the second classification network for classification together with the feature map of each target segmentation image and the initial category of the lesion region, or combined with other information as well, to obtain the target category of the lesion region.
In this embodiment, the quantitative features corresponding to the lesion region in each target segmentation image are obtained from the target segmentation images in each body position, and the target category of the lesion region is obtained by combining them with the feature map of each target segmentation image, the initial categories of the lesion region, and the second classification network.
To take the subject's actual sensations into account and further improve accuracy when classifying the lesion region of the target site, the analysis can additionally draw on relevant clinical information. The following embodiment describes in detail how the category of the lesion region is determined in combination with clinical information.
In another embodiment, as shown in fig. 7, another image classification method is provided, and on the basis of the above embodiment, the above S404 may include the following steps:
S502: acquire clinical characteristic information of the subject to be examined.
The clinical characteristic information includes at least perception information about the part to be examined and/or medical-history information of the subject. The perception information may be, for example, the subject's sensation of pain at the part to be examined, and the clinical characteristic information may also include other information such as the subject's age, sex, occupation, and the purpose of imaging (e.g. physical examination or a medical visit).
Specifically, before the subject is examined, the clinical characteristic information is obtained, for example through interaction with the subject, and entered into the computer device for storage.
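A minimal sketch of turning such clinical characteristic information into a numeric vector is shown below; the fields, codes, and one-hot categories are hypothetical illustrations, not an encoding scheme specified by the patent:

```python
def encode_clinical_info(age, sex, pain_score, purpose):
    """Hypothetical encoding of clinical characteristic information into
    a flat numeric vector usable for feature fusion."""
    purposes = ["physical_examination", "medical_visit"]  # one-hot categories
    one_hot = [1.0 if purpose == p else 0.0 for p in purposes]
    return [float(age), 1.0 if sex == "female" else 0.0,
            float(pain_score)] + one_hot

print(encode_clinical_info(45, "female", 2, "medical_visit"))
# -> [45.0, 1.0, 2.0, 0.0, 1.0]
```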
S504: determine the target category of the lesion region according to the clinical characteristic information, the quantitative feature corresponding to the lesion region in each target segmentation image, the feature map and initial category corresponding to the lesion region in each target segmentation image, and the second classification network.
In this step, after the clinical characteristic information of the subject, the quantitative features corresponding to the lesion region in each target segmentation image, and the feature map and initial category corresponding to the lesion region in each target segmentation image are obtained, these can optionally be feature-fused and then input into the second classification network to determine the target category of the lesion region.
Referring to the overall structure example in fig. 8: with two target segmentation images in each of two body positions, i.e. four target segmentation images in total, four feature maps (Feature map 1/2/3/4 in the figure) and four initial categories (classification probability 1/2/3/4 in the figure) are obtained. The two first target segmentation images yield two N-dimensional features and the two second target segmentation images yield two M-dimensional features, i.e. M x 2-dimensional and N x 2-dimensional features in total.
Then the four feature maps, the four initial categories, the M x 2-dimensional features, the N x 2-dimensional features, and the clinical characteristic information of the subject can be fused and input into the second classification network to identify the category of the lesion and obtain the target category of the lesion region. Taking binary classification as an example, the second classification network may output the probabilities of the two classes, e.g. Probability 1 and Probability 0 in the figure, and the class with the highest probability is selected as the target category.
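By way of illustration, a hedged sketch of this fusion step follows; the concatenation-based fusion, the pooled feature-map dimensions, and the small MLP standing in for the DenseNet-based second classification network are all assumptions made for the example:

```python
import torch
import torch.nn as nn

def fuse_features(feature_maps, initial_probs, quant_feats, clinical_vec):
    """Concatenate pooled feature maps, initial category probabilities,
    quantitative features, and the clinical-information vector into one
    flat input for the second classification network."""
    pooled = [fm.mean(dim=(2, 3)) for fm in feature_maps]  # global average pool
    return torch.cat(pooled + [initial_probs, quant_feats, clinical_vec], dim=1)

# Stand-in for the DenseNet-based second classification network; the input
# width assumes 4 feature maps of 64 channels, 4 initial probabilities,
# M = 50, N = 100, and a 5-element clinical vector (all hypothetical).
second_classifier = nn.Sequential(
    nn.Linear(4 * 64 + 4 + 2 * 50 + 2 * 100 + 5, 128), nn.ReLU(),
    nn.Linear(128, 2),  # outputs scores for Probability 1 vs Probability 0
)
```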
Here, the second classification network may be a network using DenseNet (a densely connected network), and it is generally trained before use. The second classification network is obtained by training on a sample feature information set corresponding to a plurality of sample objects, where the sample feature information of each sample object includes sample clinical characteristic information, sample quantitative features, a sample feature map, a sample initial category, and the annotated category of the lesion region.
That is, during training, target segmentation images of a plurality of sample objects in multiple body positions may first be obtained, and the corresponding sample quantitative features, sample feature maps, and sample initial categories derived from them; at the same time an annotated category is set for each lesion region, and the sample clinical characteristic information of each sample object is also obtained. The sample clinical characteristic information, sample quantitative features, sample feature map, sample initial category, and annotated lesion category of each sample object then form the sample feature information set, on which the second classification network is trained to yield the trained second classification network.
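Continuing the sketch above, a minimal training loop for the stand-in second classification network might look as follows; the cross-entropy loss, the Adam optimizer, and the `sample_feature_loader` DataLoader are illustrative assumptions, not choices stated in the patent:

```python
import torch

optimizer = torch.optim.Adam(second_classifier.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

# sample_feature_loader is a hypothetical DataLoader yielding fused sample
# feature vectors and their annotated lesion categories.
for fused_batch, label_batch in sample_feature_loader:
    optimizer.zero_grad()
    logits = second_classifier(fused_batch)
    loss = loss_fn(logits, label_batch)  # compare against annotated categories
    loss.backward()
    optimizer.step()
```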
In this embodiment, the target category of the lesion region is obtained from the clinical characteristic information of the subject combined with the quantitative features corresponding to the lesion region in each target segmentation image, the feature map of each target segmentation image, the initial categories of the lesion region, and the second classification network. Because the subject's clinical characteristic information is taken into account, the finally determined category of the lesion region relates directly to the individual and better matches the individual's actual situation; that is, it is more accurate.
The following embodiment describes in detail how to obtain two corresponding target segmentation images from medical images in each body position.
In another embodiment, another image classification method is provided, and on the basis of the above embodiment, the step S202 may include the following steps:
Segment the lesion region of the medical image of the part to be examined in each body position according to a preset first segmentation model and a preset second segmentation model, and determine the first target segmentation image and the second target segmentation image corresponding to the medical image in each body position.
Specifically, referring to fig. 9, a detection model may first perform lesion-region detection on the medical image in each body position to obtain a detection result, i.e. the lesion region is first located in the medical image of each body position. The first segmentation model then performs lesion segmentation on each located medical image to obtain the first target segmentation image in each body position, where the lesion region in each first target segmentation image is a lesion region of the first shape; likewise, the second segmentation model performs lesion segmentation on each located medical image to obtain the second target segmentation image in each body position, where the lesion region in each second target segmentation image is a lesion region of the second shape.
That is, the detection model and the first segmentation model may be cascaded: the detection model first locates the lesion region in the medical image (coarse segmentation), and the detection result is then input into the first segmentation model for fine segmentation of the lesion edges, yielding the first target segmentation image. Similarly, the detection model and the second segmentation model may be cascaded: the detection model first locates the lesion region (coarse segmentation), and the detection result is then input into the second segmentation model for fine segmentation of the lesion edges, yielding the second target segmentation image.
Segmenting the lesion region with a cascaded detection network and segmentation model lets the detection model locate the lesion region quickly, which improves both the accuracy and the efficiency of the subsequent model-based segmentation.
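An informal sketch of this coarse-to-fine cascade is shown below, assuming a detector that returns a bounding box and two segmentation models that operate on the cropped region; all function interfaces here are hypothetical placeholders:

```python
import numpy as np

def cascade_segment(medical_image, detector, first_seg_model, second_seg_model):
    """Coarse-to-fine pipeline: detect (coarse localization), crop, then
    finely segment the lesion edges with each segmentation model."""
    x0, y0, x1, y1 = detector(medical_image)           # coarse lesion bounding box
    roi = medical_image[y0:y1, x0:x1]
    first_mask = np.zeros(medical_image.shape, dtype=bool)
    second_mask = np.zeros(medical_image.shape, dtype=bool)
    first_mask[y0:y1, x0:x1] = first_seg_model(roi)    # cloud-like (first shape)
    second_mask[y0:y1, x0:x1] = second_seg_model(roi)  # punctate (second shape)
    return first_mask, second_mask
```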
Before the lesion region is segmented using the detection model, the first segmentation model, and the second segmentation model, these models may be trained. The detection model may be trained on pre-collected sample images and their annotation data, where the annotation data include the detection-frame information of the lesion region. The first segmentation model may be trained on a plurality of first sample medical images, each annotated with a lesion region of the first shape; the second segmentation model may be trained on a plurality of second sample medical images, each annotated with a lesion region of the second shape.
In this embodiment, the medical image in each body position is segmented by the pre-trained first and second segmentation models to obtain the first and second target segmentation images in each body position. Because the first segmentation model is trained on sample images annotated with lesion regions of the first shape and the second segmentation model is trained on sample images annotated with lesion regions of the second shape, the trained segmentation models are accurate, improving the accuracy of segmenting the medical image. In addition, using segmentation models improves segmentation efficiency when there are many medical images.
The lines and the like in fig. 4, 6, 8, and 9 do not affect the essence of the embodiments of the present application.
It should be understood that, although the steps in the flowcharts of the above embodiments are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly ordered and may be performed in other orders. Moreover, at least some of the steps in those flowcharts may comprise multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the application also provides an image classification device for realizing the image classification method. The implementation scheme for solving the problem provided by the apparatus is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the image classification apparatus provided below can be referred to as limitations on the image classification method in the foregoing, and details are not described herein again.
In one embodiment, as shown in fig. 10, there is provided an image classification apparatus including a determination module 11 and a classification module 12, wherein:
the determining module 11 is configured to determine, according to the acquired medical images of the part to be examined in different body positions, a first target segmentation image and a second target segmentation image corresponding to the lesion region in the medical image of each body position, where the lesion region included in the first target segmentation image is a lesion region of a first shape and the lesion region included in the second target segmentation image is a lesion region of a second shape;
the classification module 12 is configured to identify the category of the lesion region according to the first target segmentation image and the second target segmentation image in each body position and a preset neural network model, and determine the target category of the lesion region.
Optionally, the part to be examined is a breast, and the different body positions include the craniocaudal (CC) view and the mediolateral oblique (MLO) view.
In another embodiment, another image classification apparatus is provided, on the basis of the above embodiment, the neural network model includes a first classification network and a second classification network; the classification module 12 may include:
a first classification unit, configured to input the first target segmentation image and the second target segmentation image in each body position into the first classification network for classification, and determine a feature map and an initial category corresponding to the lesion region in each target segmentation image;
a second classification unit, configured to determine the target category of the lesion region according to the feature map and the initial category corresponding to the lesion region in each target segmentation image and the second classification network.
Optionally, the first classification network is a classification network using an attention mechanism.
In another embodiment, there is provided another image classification apparatus, and on the basis of the above embodiment, the second classification unit may include:
a quantitative-feature determining subunit, configured to determine the quantitative feature corresponding to the lesion region in each target segmentation image according to the first target segmentation image and the second target segmentation image in each body position, the quantitative feature being used to characterize the distribution of the lesion region;
a classification subunit, configured to determine the target category of the lesion region according to the quantitative feature corresponding to the lesion region in each target segmentation image, the feature map and the initial category corresponding to the lesion region in each target segmentation image, and the second classification network.
In another embodiment, another image classification apparatus is provided; on the basis of the above embodiment, the classification subunit is specifically configured to acquire clinical characteristic information of the subject to be examined, and to determine the target category of the lesion region according to the clinical characteristic information, the quantitative feature corresponding to the lesion region in each target segmentation image, the feature map and the initial category corresponding to the lesion region in each target segmentation image, and the second classification network.
In another embodiment, on the basis of the above embodiment, the classification subunit is specifically configured to perform feature fusion on the clinical characteristic information, the quantitative features corresponding to the lesion regions in the target segmentation images, and the feature maps and initial categories corresponding to the lesion regions in the target segmentation images, and then input the result into the second classification network to determine the target category of the lesion region; the second classification network is obtained by training on a sample feature information set corresponding to a plurality of sample objects, and the sample feature information of each sample object includes sample clinical characteristic information, sample quantitative features, a sample feature map, a sample initial category, and the annotated category of the lesion region.
In another embodiment, another image classification apparatus is provided. On the basis of the above embodiment, the determining module 11 may include:
a segmentation unit, configured to segment the lesion region of the part to be detected in the medical images in different body positions according to a preset first segmentation model and a preset second segmentation model, respectively, and to determine the first target segmentation image and the second target segmentation image corresponding to the medical image in each body position. The first segmentation model is trained on a plurality of first sample medical images, each annotated with a lesion region of the first shape; the second segmentation model is trained on a plurality of second sample medical images, each annotated with a lesion region of the second shape.
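For illustration, inference with the two preset segmentation models might look like the sketch below, where first_model and second_model are assumed to be trained networks that output per-pixel lesion probabilities for the first-shape and second-shape lesion regions respectively; the 0.5 threshold and the image-times-mask composition are likewise assumptions.

```python
# Illustrative sketch only: inference with two preset segmentation models.
# first_model and second_model are assumed trained per-pixel predictors.
import torch

@torch.no_grad()
def segment_lesions(images_by_position, first_model, second_model):
    """images_by_position: dict mapping a body position name (e.g. 'CC',
    'MLO') to an image tensor of shape (1, 1, H, W)."""
    results = {}
    for position, image in images_by_position.items():
        first_mask = (first_model(image) > 0.5).float()    # first-shape lesions
        second_mask = (second_model(image) > 0.5).float()  # second-shape lesions
        # Each target segmentation image pairs the source image with a mask.
        results[position] = (image * first_mask, image * second_mask)
    return results
```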
Each module in the above image classification apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke them and execute the operations corresponding to the modules.
In one embodiment, a computer device such as a terminal is provided, whose internal structure may be as shown in fig. 11. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication may be implemented through Wi-Fi, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements an image classification method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a key, a trackball, or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 11 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
determining, according to the acquired medical images of the part to be detected in different body positions, a first target segmentation image and a second target segmentation image corresponding to the lesion region in the medical image of each body position, where the lesion region included in the first target segmentation image is a lesion region of a first shape and the lesion region included in the second target segmentation image is a lesion region of a second shape; and identifying the category of the lesion region according to the first target segmentation image and the second target segmentation image in each body position and a preset neural network model, and determining the target category of the lesion region.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
inputting the first target segmentation image and the second target segmentation image in each body position into the first classification network for classification, and determining the feature map and the initial category corresponding to the lesion region in each target segmentation image; and determining the target category of the lesion region according to the feature map and the initial category corresponding to the lesion region in each target segmentation image and the second classification network.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining the quantitative feature corresponding to the lesion region in each target segmentation image according to the first target segmentation image and the second target segmentation image in each body position, the quantitative feature being used to characterize the distribution of the lesion region; and determining the target category of the lesion region according to the quantitative feature corresponding to the lesion region in each target segmentation image, the feature map and the initial category corresponding to the lesion region in each target segmentation image, and the second classification network.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring clinical feature information of a subject to be detected; and determining the target category of the lesion region according to the clinical feature information, the quantitative feature corresponding to the lesion region in each target segmentation image, the feature map and the initial category corresponding to the lesion region in each target segmentation image, and the second classification network.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
performing feature fusion on the clinical feature information, the quantitative feature corresponding to the lesion region in each target segmentation image, and the feature map and the initial category corresponding to the lesion region in each target segmentation image, inputting the fused result into the second classification network, and determining the target category of the lesion region; the second classification network is trained on a sample feature information set corresponding to a plurality of sample objects, where the sample feature information of each sample object includes sample clinical feature information, sample quantitative features, a sample feature map, a sample initial category, and a labeled category of the lesion region.
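As a sketch only, training such a second classification network from a sample feature information set could follow a standard supervised loop, with the labeled category of the lesion region as the training target. The data loader layout, loss, and optimizer settings below are assumptions.

```python
# Illustrative sketch only: supervised training of the second classification
# network. The loader layout, loss, and optimizer settings are assumptions.
import torch
import torch.nn as nn

def train_second_network(model, loader, epochs: int = 10, lr: float = 1e-3):
    """loader yields (pooled_maps, initial_probs, quant_feats,
    clinical_feats, label) batches built from the sample objects, where
    label is the labeled category of the lesion region."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for maps, probs, quants, clinical, label in loader:
            logits = model(maps, probs, quants, clinical)
            loss = criterion(logits, label)  # compare with labeled category
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```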
In one embodiment, the processor, when executing the computer program, further performs the steps of:
segmenting, according to a preset first segmentation model and a preset second segmentation model, the lesion region of the part to be detected in the medical images in different body positions, respectively, and determining the first target segmentation image and the second target segmentation image corresponding to the medical image in each body position; the first segmentation model is trained on a plurality of first sample medical images, each annotated with a lesion region of the first shape, and the second segmentation model is trained on a plurality of second sample medical images, each annotated with a lesion region of the second shape.
In one embodiment, the first classification network is a classification network using an attention mechanism.
In one embodiment, the part to be detected is a breast, and the different body positions include a CC (craniocaudal) position and an MLO (mediolateral oblique) position.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, performs the steps of:
determining, according to the acquired medical images of the part to be detected in different body positions, a first target segmentation image and a second target segmentation image corresponding to the lesion region in the medical image of each body position, where the lesion region included in the first target segmentation image is a lesion region of a first shape and the lesion region included in the second target segmentation image is a lesion region of a second shape; and identifying the category of the lesion region according to the first target segmentation image and the second target segmentation image in each body position and a preset neural network model, and determining the target category of the lesion region.
In one embodiment, the computer program when executed by the processor further performs the steps of:
inputting the first target segmentation image and the second target segmentation image in each body position into the first classification network for classification, and determining the feature map and the initial category corresponding to the lesion region in each target segmentation image; and determining the target category of the lesion region according to the feature map and the initial category corresponding to the lesion region in each target segmentation image and the second classification network.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining the quantitative feature corresponding to the lesion region in each target segmentation image according to the first target segmentation image and the second target segmentation image in each body position, the quantitative feature being used to characterize the distribution of the lesion region; and determining the target category of the lesion region according to the quantitative feature corresponding to the lesion region in each target segmentation image, the feature map and the initial category corresponding to the lesion region in each target segmentation image, and the second classification network.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring clinical feature information of a subject to be detected; and determining the target category of the lesion region according to the clinical feature information, the quantitative feature corresponding to the lesion region in each target segmentation image, the feature map and the initial category corresponding to the lesion region in each target segmentation image, and the second classification network.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing feature fusion on the clinical feature information, the quantitative feature corresponding to the lesion region in each target segmentation image, and the feature map and the initial category corresponding to the lesion region in each target segmentation image, inputting the fused result into the second classification network, and determining the target category of the lesion region; the second classification network is trained on a sample feature information set corresponding to a plurality of sample objects, where the sample feature information of each sample object includes sample clinical feature information, sample quantitative features, a sample feature map, a sample initial category, and a labeled category of the lesion region.
In one embodiment, the computer program when executed by the processor further performs the steps of:
segmenting, according to a preset first segmentation model and a preset second segmentation model, the lesion region of the part to be detected in the medical images in different body positions, respectively, and determining the first target segmentation image and the second target segmentation image corresponding to the medical image in each body position; the first segmentation model is trained on a plurality of first sample medical images, each annotated with a lesion region of the first shape, and the second segmentation model is trained on a plurality of second sample medical images, each annotated with a lesion region of the second shape.
In one embodiment, the first classification network is a classification network using an attention mechanism.
In one embodiment, the part to be detected is a breast, and the different body positions include a CC (craniocaudal) position and an MLO (mediolateral oblique) position.
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of:
determining, according to the acquired medical images of the part to be detected in different body positions, a first target segmentation image and a second target segmentation image corresponding to the lesion region in the medical image of each body position, where the lesion region included in the first target segmentation image is a lesion region of a first shape and the lesion region included in the second target segmentation image is a lesion region of a second shape; and identifying the category of the lesion region according to the first target segmentation image and the second target segmentation image in each body position and a preset neural network model, and determining the target category of the lesion region.
In one embodiment, the computer program when executed by the processor further performs the steps of:
inputting the first target segmentation image and the second target segmentation image in each body position into the first classification network for classification, and determining the feature map and the initial category corresponding to the lesion region in each target segmentation image; and determining the target category of the lesion region according to the feature map and the initial category corresponding to the lesion region in each target segmentation image and the second classification network.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining the quantitative feature corresponding to the lesion region in each target segmentation image according to the first target segmentation image and the second target segmentation image in each body position, the quantitative feature being used to characterize the distribution of the lesion region; and determining the target category of the lesion region according to the quantitative feature corresponding to the lesion region in each target segmentation image, the feature map and the initial category corresponding to the lesion region in each target segmentation image, and the second classification network.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring clinical feature information of a subject to be detected; and determining the target category of the lesion region according to the clinical feature information, the quantitative feature corresponding to the lesion region in each target segmentation image, the feature map and the initial category corresponding to the lesion region in each target segmentation image, and the second classification network.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing feature fusion on the clinical feature information, the quantitative feature corresponding to the lesion region in each target segmentation image, and the feature map and the initial category corresponding to the lesion region in each target segmentation image, inputting the fused result into the second classification network, and determining the target category of the lesion region; the second classification network is trained on a sample feature information set corresponding to a plurality of sample objects, where the sample feature information of each sample object includes sample clinical feature information, sample quantitative features, a sample feature map, a sample initial category, and a labeled category of the lesion region.
In one embodiment, the computer program when executed by the processor further performs the steps of:
segmenting, according to a preset first segmentation model and a preset second segmentation model, the lesion region of the part to be detected in the medical images in different body positions, respectively, and determining the first target segmentation image and the second target segmentation image corresponding to the medical image in each body position; the first segmentation model is trained on a plurality of first sample medical images, each annotated with a lesion region of the first shape, and the second segmentation model is trained on a plurality of second sample medical images, each annotated with a lesion region of the second shape.
In one embodiment, the first classification network is a classification network using an attention mechanism.
In one embodiment, the part to be detected is a breast, and the different body positions include a CC (craniocaudal) position and an MLO (mediolateral oblique) position.
It should be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or fully authorized by all parties.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, databases, or other media used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that those skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A method of image classification, the method comprising:
determining, according to the acquired medical images of a part to be detected in different body positions, a first target segmentation image and a second target segmentation image corresponding to a lesion region in the medical image of each body position, wherein the lesion region included in the first target segmentation image is a lesion region of a first shape, and the lesion region included in the second target segmentation image is a lesion region of a second shape;
and identifying the category of the lesion region according to the first target segmentation image and the second target segmentation image in each body position and a preset neural network model, and determining the target category of the lesion region.
2. The method of claim 1, wherein the neural network model comprises a first classification network and a second classification network, and wherein identifying the category of the lesion region according to the first target segmentation image and the second target segmentation image in each body position and the preset neural network model and determining the target category of the lesion region comprises:
inputting the first target segmentation image and the second target segmentation image in each body position into the first classification network for classification, and determining a feature map and an initial category corresponding to the lesion region in each target segmentation image;
and determining the target category of the lesion region according to the feature map and the initial category corresponding to the lesion region in each target segmentation image and the second classification network.
3. The method according to claim 2, wherein determining the target category of the lesion region according to the feature map and the initial category corresponding to the lesion region in each target segmentation image and the second classification network comprises:
determining a quantitative feature corresponding to the lesion region in each target segmentation image according to the first target segmentation image and the second target segmentation image in each body position, wherein the quantitative feature is used to characterize the distribution of the lesion region;
and determining the target category of the lesion region according to the quantitative feature corresponding to the lesion region in each target segmentation image, the feature map and the initial category corresponding to the lesion region in each target segmentation image, and the second classification network.
4. The method according to claim 3, wherein determining the target category of the lesion region according to the quantitative feature corresponding to the lesion region in each target segmentation image, the feature map and the initial category corresponding to the lesion region in each target segmentation image, and the second classification network comprises:
acquiring clinical feature information of a subject to be detected;
and determining the target category of the lesion region according to the clinical feature information, the quantitative feature corresponding to the lesion region in each target segmentation image, the feature map and the initial category corresponding to the lesion region in each target segmentation image, and the second classification network.
5. The method of claim 4, wherein determining the target category of the lesion region according to the clinical feature information, the quantitative feature corresponding to the lesion region in each target segmentation image, the feature map and the initial category corresponding to the lesion region in each target segmentation image, and the second classification network comprises:
performing feature fusion on the clinical feature information, the quantitative feature corresponding to the lesion region in each target segmentation image, and the feature map and the initial category corresponding to the lesion region in each target segmentation image, inputting the fused result into the second classification network, and determining the target category of the lesion region;
wherein the second classification network is trained on a sample feature information set corresponding to a plurality of sample objects, and the sample feature information of each sample object comprises sample clinical feature information, sample quantitative features, a sample feature map, a sample initial category, and a labeled category of the lesion region.
6. The method according to any one of claims 1 to 5, wherein determining, according to the acquired medical images of the part to be detected in different body positions, the first target segmentation image and the second target segmentation image corresponding to the lesion region in the medical image of each body position comprises:
segmenting, according to a preset first segmentation model and a preset second segmentation model, the lesion region of the part to be detected in the medical images in different body positions, respectively, and determining the first target segmentation image and the second target segmentation image corresponding to the medical image in each body position;
wherein the first segmentation model is trained on a plurality of first sample medical images, each annotated with a lesion region of the first shape, and the second segmentation model is trained on a plurality of second sample medical images, each annotated with a lesion region of the second shape.
7. The method according to any one of claims 2 to 5, wherein the first classification network is a classification network employing an attention mechanism.
8. The method of any one of claims 1 to 5, wherein the part to be detected is a breast and the different body positions include a CC (craniocaudal) position and an MLO (mediolateral oblique) position.
9. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
10. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 8.
CN202210840875.7A 2022-07-18 2022-07-18 Image classification method, storage medium, and program product Pending CN115063637A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210840875.7A CN115063637A (en) 2022-07-18 2022-07-18 Image classification method, storage medium, and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210840875.7A CN115063637A (en) 2022-07-18 2022-07-18 Image classification method, storage medium, and program product

Publications (1)

Publication Number Publication Date
CN115063637A (en) 2022-09-16

Family

ID=83207113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210840875.7A Pending CN115063637A (en) 2022-07-18 2022-07-18 Image classification method, storage medium, and program product

Country Status (1)

Country Link
CN (1) CN115063637A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115359325A (en) * 2022-10-19 2022-11-18 腾讯科技(深圳)有限公司 Training method, device, equipment and medium of image recognition model


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination