CN116091466A - Image analysis method, computer device, and storage medium - Google Patents

Image analysis method, computer device, and storage medium

Info

Publication number
CN116091466A
CN116091466A (application CN202310103687.0A)
Authority
CN
China
Prior art keywords
image
milk
neural network
network model
breast
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310103687.0A
Other languages
Chinese (zh)
Inventor
李哲人
郑介志
车继飞
Current Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN202310103687.0A
Publication of CN116091466A
Legal status: Pending

Classifications

    • G06T 7/0012 — Image analysis; biomedical image inspection
    • G06N 3/08 — Neural networks; learning methods
    • G06T 7/11 — Segmentation; region-based segmentation
    • G06T 7/33 — Image registration using feature-based methods
    • G06V 10/761 — Pattern recognition; proximity, similarity or dissimilarity measures
    • G06V 10/82 — Pattern recognition or machine learning using neural networks
    • G06T 2207/10116 — Image acquisition modality: X-ray image
    • G06T 2207/30068 — Subject of image: mammography; breast

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an image analysis method, a computer device, and a storage medium. The method comprises: acquiring a left-breast image to be analyzed and a right-breast image to be analyzed, where the left-breast image contains a left-breast region, the right-breast image contains a right-breast region, and the two regions have the same orientation; inputting the left-breast image into a first neural network model and the right-breast image into a second neural network model, and performing a feature extraction operation and a feature-similarity analysis operation in the two models to obtain an analysis result. The analysis result indicates whether the left-breast region and the right-breast region are symmetrical. The first and second neural network models are trained based on multiple groups of training image pairs and an annotated symmetry result for each pair. Adopting this method can improve the accuracy of the analysis of whether the two breasts are symmetrical.

Description

Image analysis method, computer device, and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image analysis method, a computer device, and a storage medium.
Background
Breast cancer is a malignant disease that threatens women's health, so regular breast examination is particularly important. At present, mammography (breast X-ray examination) is the mainstream means of breast cancer screening: a doctor performs a contrast analysis of the captured bilateral breast images and then judges and processes the analysis result to obtain the corresponding examination result.
In the related art, when performing a contrast analysis of a patient's two breast images, the doctor usually inspects the left and right breast images repeatedly with the naked eye and then judges from experience, finally obtaining an analysis result of whether the patient's two breasts are symmetrical.
However, the accuracy of the analysis results obtained in this way is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image analysis method, apparatus, computer device, and storage medium capable of improving the accuracy of analysis results.
An image analysis method, the method comprising:
acquiring a left-breast image to be analyzed and a right-breast image to be analyzed, where the left-breast image comprises a left-breast region, the right-breast image comprises a right-breast region, and the two regions have the same orientation;
inputting the left-breast image into a first neural network model and the right-breast image into a second neural network model, and performing a feature extraction operation and a feature-similarity analysis operation in the two models to obtain an analysis result; the analysis result indicates whether the left-breast region and the right-breast region are symmetrical;
wherein the first and second neural network models are trained based on multiple groups of training image pairs and an annotated symmetry result for each pair, each pair comprising a left-breast training image and a right-breast training image.
In one embodiment, before the left-breast image is input into the first neural network model and the right-breast image into the second neural network model, the method further includes:
registering the left-breast image and the right-breast image to obtain a registered left-breast image and a registered right-breast image;
correspondingly, the registered left-breast image is input into the first neural network model and the registered right-breast image into the second neural network model, and the feature extraction operation and the feature-similarity analysis operation are performed in the two models to obtain the analysis result.
In one embodiment, acquiring the left-breast image and the right-breast image to be analyzed includes:
acquiring an original left-breast image and an original right-breast image, where the original left-breast image comprises a left-breast region and the original right-breast image comprises a right-breast region;
segmenting the original left-breast image and the original right-breast image to obtain a left-breast segmented image and a right-breast segmented image;
determining the left-breast segmented image as the left-breast image to be analyzed and the right-breast segmented image as the right-breast image to be analyzed.
In one embodiment, the left-breast image and the right-breast image include breast-contour position information and nipple position information, and registering the two images includes:
registering the left-breast segmented image and the right-breast segmented image based on the breast-contour position information and the nipple position information to obtain a registered left-breast segmented image and a registered right-breast segmented image.
In one embodiment, inputting the registered images into the two models and performing the feature extraction and feature-similarity analysis operations to obtain the analysis result includes:
inputting the registered left-breast segmented image into the first neural network model and the registered right-breast segmented image into the second neural network model, and performing the feature extraction operation in the two models to obtain a left-breast feature vector and a right-breast feature vector;
computing the similarity between the left-breast feature vector and the right-breast feature vector with a similarity-analysis algorithm, and obtaining the analysis result from the computed similarity.
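The patent does not specify which similarity measure is used; a minimal plain-Python sketch of one common choice, cosine similarity, mapped to a symmetric/asymmetric decision. The `threshold=0.9` value is an illustrative assumption, not from the source:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def analyze_symmetry(left_vec, right_vec, threshold=0.9):
    """Turn the similarity of the two breast feature vectors into a decision."""
    sim = cosine_similarity(left_vec, right_vec)
    return {"similarity": sim, "symmetric": sim >= threshold}
```

For identical feature vectors the similarity is 1.0 and the pair is reported symmetric; vectors pointing in unrelated directions fall below the threshold.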
In one embodiment, the training method of the first and second neural network models includes:
inputting each group of training image pairs into an initial first neural network model and an initial second neural network model to obtain a training feature-vector pair for each group, each pair comprising a left-breast training feature vector and a right-breast training feature vector;
computing the similarity between the left and right training feature vectors in each pair, and obtaining a predicted symmetry result for each group of training image pairs from the computed similarity;
training the initial first and second neural network models based on each group's predicted symmetry result and the corresponding annotated symmetry result, to obtain the first and second neural network models.
In one embodiment, training the initial models based on the predicted and annotated symmetry results of each group of training image pairs includes:
computing the loss between each group's predicted symmetry result and the corresponding annotated symmetry result;
summing the losses over all groups of training image pairs, and training the initial first and second neural network models with the resulting sum to obtain the first and second neural network models.
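The claims do not name the loss function; the detailed description later mentions a contrastive loss, so here is a minimal sketch under that assumption of a per-pair loss and its sum over the training pairs. The `margin=0.2` value is illustrative:

```python
def contrastive_loss(similarity, label, margin=0.2):
    """Per-pair loss on a similarity score in [0, 1].

    label: 1 = annotated symmetric pair, 0 = annotated asymmetric pair.
    Symmetric pairs are pushed toward similarity 1; asymmetric pairs are
    penalized only while their similarity exceeds the margin.
    """
    if label == 1:
        return (1.0 - similarity) ** 2
    return max(similarity - margin, 0.0) ** 2

def total_loss(batch):
    """Sum the per-pair losses over the groups of training image pairs."""
    return sum(contrastive_loss(sim, label) for sim, label in batch)
```

A perfectly predicted batch — symmetric pairs at similarity 1.0, asymmetric pairs below the margin — yields a total loss of zero.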
In one embodiment, the first and second neural network models, together with the similarity-analysis algorithm, form a twin (Siamese) network model.
An image analysis apparatus, the apparatus comprising:
an acquisition module for acquiring a left-breast image to be analyzed and a right-breast image to be analyzed, where the left-breast image comprises a left-breast region, the right-breast image comprises a right-breast region, and the two regions have the same orientation;
an analysis module for inputting the left-breast image into a first neural network model and the right-breast image into a second neural network model, and performing a feature extraction operation and a feature-similarity analysis operation in the two models to obtain an analysis result; the analysis result indicates whether the left-breast region and the right-breast region are symmetrical; the first and second neural network models are trained based on multiple groups of training image pairs and an annotated symmetry result for each pair, each pair comprising a left-breast training image and a right-breast training image.
A computer device comprising a memory storing a computer program and a processor that, when executing the computer program, performs the steps of:
acquiring a left-breast image to be analyzed and a right-breast image to be analyzed, where the left-breast image comprises a left-breast region, the right-breast image comprises a right-breast region, and the two regions have the same orientation;
inputting the left-breast image into a first neural network model and the right-breast image into a second neural network model, and performing a feature extraction operation and a feature-similarity analysis operation in the two models to obtain an analysis result; the analysis result indicates whether the left-breast region and the right-breast region are symmetrical;
wherein the first and second neural network models are trained based on multiple groups of training image pairs and an annotated symmetry result for each pair, each pair comprising a left-breast training image and a right-breast training image.
A computer-readable storage medium having stored thereon a computer program that, when executed by a processor, performs the steps of:
acquiring a left-breast image to be analyzed and a right-breast image to be analyzed, where the left-breast image comprises a left-breast region, the right-breast image comprises a right-breast region, and the two regions have the same orientation;
inputting the left-breast image into a first neural network model and the right-breast image into a second neural network model, and performing a feature extraction operation and a feature-similarity analysis operation in the two models to obtain an analysis result; the analysis result indicates whether the left-breast region and the right-breast region are symmetrical;
wherein the first and second neural network models are trained based on multiple groups of training image pairs and an annotated symmetry result for each pair, each pair comprising a left-breast training image and a right-breast training image.
According to the image analysis method, apparatus, computer device, and storage medium, the left-breast image to be analyzed is input into the first neural network model and the right-breast image to be analyzed into the second neural network model; feature extraction and similarity analysis are performed on the two images in the two models to obtain an analysis result indicating whether the left-breast and right-breast regions in the two images are symmetrical. Because the feature extraction and similarity analysis are performed by neural network models, the symmetry of the two breasts can be quantified, rather than judged manually by visual observation and experience; this avoids the loss of accuracy caused by human factors, so the obtained symmetry result is more accurate.
Drawings
FIG. 1 is an internal block diagram of a computer device in one embodiment;
FIG. 2 is a flow chart of an image analysis method according to an embodiment;
FIG. 2a is a diagram illustrating whether the left and right breasts are symmetrical in one embodiment;
FIG. 3 is a flow chart of an image analysis method according to another embodiment;
FIG. 4 is a flow chart of an image analysis method according to another embodiment;
FIG. 4a is a schematic diagram of a twin network model structure in another embodiment;
FIG. 5 is a flow chart of an image analysis method according to another embodiment;
FIG. 6 is a block diagram showing the structure of an image analysis apparatus in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
Breast cancer is the leading malignant disease threatening women's health. Statistics indicate that in developed western countries, on average one in eight women suffers from breast cancer; the estimated incidence in China is lower, but with changes in people's pace of life and lifestyle, the number of breast cancer cases among Chinese women is increasing. Because the etiology of breast cancer remains unclear and advanced disease can put patients' lives at risk, regular examination with early diagnosis and prevention can greatly reduce the incidence and mortality of breast cancer and spare most women from the disease. Thus, early discovery, early diagnosis, and early treatment are an important principle of breast cancer prevention and treatment. At present, mammography is the mainstream breast cancer screening method, and common abnormalities in a mammographic (molybdenum-target) image include calcification, masses, and architectural distortion. From a physiological perspective, female breasts normally develop in a bilaterally symmetrical fashion, and the distribution of glandular tissue in the two breasts is approximately the same. Therefore, in clinical analysis, if a radiologist suspects a lesion in one breast, the image of the contralateral breast is consulted at the same time and the two are compared; this practice of clinical analysis by comparing the two breast images is the practical basis of bilateral breast asymmetry analysis.
In general, breast lesions grow asymmetrically: if a lesion is present in one breast image, the probability that a lesion also exists at the same position in the contralateral image is very low. Typically, when analyzing whether a patient's two breasts are symmetrical, a doctor repeatedly inspects the left and right breast images with the naked eye and then judges from experience; however, the accuracy of the resulting analysis is low. Based on this, the present application provides an image analysis method, an image analysis apparatus, a computer device, and a storage medium that can solve the above technical problem.
The image analysis method provided by the application can be applied to a computer device, which may be a terminal or a server. Taking a terminal as an example, the internal structure of the computer device may be as shown in FIG. 1. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor provides computing and control capabilities. The memory includes a non-volatile storage medium, which stores an operating system and a computer program, and an internal memory, which provides an environment for running the operating system and the computer program. The communication interface performs wired or wireless communication with external terminals; the wireless mode can be realized through Wi-Fi, a carrier network, NFC (near-field communication), or other technologies. The computer program, when executed by the processor, implements an image analysis method. The display screen may be a liquid-crystal or electronic-ink display, and the input device may be a touch layer covering the display screen, keys, a trackball, or a touchpad on the housing of the computer device, or an external keyboard, touchpad, mouse, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
The execution subject of the embodiments of the present application may be a computer device or an image analysis apparatus, and the method of the embodiments of the present application will be described below with reference to the computer device as the execution subject.
In one embodiment, an image analysis method is provided; this embodiment relates to the specific process of determining whether the left-breast and right-breast regions in the left and right breast segmented images are symmetrical.
As shown in fig. 2, the method may include the steps of:
s202, acquiring a left milk image to be analyzed and a right milk image to be analyzed; the left milk image comprises a left milk area, the right milk image comprises a right milk area, and the directions of the left milk area and the right milk area are consistent.
The left milk image to be analyzed and the right milk image to be analyzed can be an original left milk image to be analyzed and an original right milk image to be analyzed, and can also be a left milk segmentation image to be analyzed and a right milk segmentation image to be analyzed. The left breast image to be analyzed and the right breast image to be analyzed are left and right breast images of the same detection object (generally, human body), the left and right breast images of the detection object can be obtained by scanning the left and right breast images of the detection object at the same time, the two breast images are the original breast images under the same view, and then the trained segmentation model can be adopted to segment the original left and right breast images to obtain a left breast segmentation image to be analyzed and a right breast segmentation image to be analyzed. That is, here the left milk image to be analyzed and the right milk image to be analyzed are breast images of the same view, and here the breast images include head-tail images and/or side-bias images.
In addition, here, the left milk image to be analyzed and the right milk image to be analyzed may be images of any modality, such as CT (omputed Tomography, i.e., electronic computed tomography) images, MR (magnetic resonance) images, PET (positron emission tomography, positron emission computed tomography) images, X-ray images, but X-ray images are mainly used in the present embodiment.
Of course, before the original left and right breast images are segmented by the segmentation model, the breast image on either side may be flipped so that the breasts (that is, the left and right breast regions) in the two images have the same orientation; the flipping may instead be performed after segmentation. The flip may be along the horizontal direction, the vertical direction, or another direction. For example, the original right-breast image may be flipped horizontally so that the flipped right-breast image has the same orientation as the breast in the original left-breast image; alternatively, the left-breast image may be flipped instead of the right-breast image. Orientation here means, for example, that the nipples in the flipped left and right breast images both face to the right, or both face to the left. Flipping the breast image on one side helps ensure the accuracy of the subsequent segmentation result.
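The flip described above can be sketched in a few lines of plain Python; here a horizontal (left-right) mirror over an image stored as a list of rows, a stand-in for whatever array type the actual pipeline uses:

```python
def flip_horizontal(image):
    """Mirror a 2-D image (list of rows) left-to-right, so that the
    flipped right-breast image has the same orientation as the left."""
    return [row[::-1] for row in image]
```

A vertical flip would instead reverse the row order (`image[::-1]`).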
Further, after the breast image on one side of the original pair is flipped, the two breast images can be input into the trained segmentation model to obtain the left and right breast segmented images corresponding to the original images. The segmented images include breast-contour information and nipple-position information; the left breast contour and left nipple form the left-breast region, and the right breast contour and right nipple form the right-breast region.
The segmentation model may be trained on multiple breast training images, each annotated with breast-contour information, nipple-position information, and so on, so that in actual use the model outputs a corresponding breast segmented image for each input breast image.
S204: input the left-breast image into a first neural network model and the right-breast image into a second neural network model, and perform a feature extraction operation and a feature-similarity analysis operation in the two models to obtain an analysis result; the analysis result indicates whether the left-breast region and the right-breast region are symmetrical; the first and second neural network models are trained based on multiple groups of training image pairs and an annotated symmetry result for each pair, each pair comprising a left-breast training image and a right-breast training image.
In this step, each training image pair consists of a left-breast and a right-breast training image; typically the two images in a pair are same-view images of the same person captured at the same time. The annotated symmetry result for each pair may be a probability or a category indicating whether the annotated left and right structures are symmetrical. The training image pairs in this embodiment may be original left and right images or left and right breast segmented images.
The first and second neural network models may be two networks with the same structure or with different structures. Optionally, the two models together with the similarity-analysis algorithm form a twin (Siamese) network model; in that case the two models have the same structure and are mainly used to extract features of the left and right breast images, while the similarity-analysis algorithm, which may also be called a logistic regression unit, performs the similarity analysis on the features extracted by the two models to obtain the similarity between them.
Of course, the left-breast segmented image may be input into the first neural network model and the right-breast segmented image into the second neural network model, with the feature extraction and feature-similarity analysis operations performed on the two segmented images in the two models.
In addition, before the left and right breast segmentation images (or the left and right original images) are input to the first and second neural network models, they may first be registered, and the registered images input instead. The registration operation avoids inaccuracy in the subsequent analysis result caused by corresponding points on the left and right images not being aligned.
Of course, the left and right breast segmentation images (or original images) may also be input directly into the first and second neural network models. In the first neural network model, feature extraction is performed on the left breast region to obtain feature information of the left breast region (which may include a feature map, a feature vector, or the like); in the second neural network model, feature extraction is performed on the right breast region to obtain feature information of the right breast region. The logistic regression unit then performs similarity analysis on the two sets of feature information. The logistic regression unit (also called the similarity analysis algorithm) may use a contrastive loss function to analyze the similarity between the two sets of feature information, or any other function capable of measuring similarity. The first and second neural network models are not ordered: the left breast image may equally be input to the second neural network model and the right breast image to the first neural network model.
Further, after feature extraction and feature similarity analysis have been performed on the left and right breast images in the two models, a similarity analysis result between the left and right breast regions is finally obtained. The result may be a single value, for example the probability that the left and right breast regions are symmetrical (or asymmetrical); it may be two values, one representing the probability of symmetry and the other the probability of asymmetry; or it may be directly output category information indicating whether the left and right breast regions belong to the symmetrical category or to the asymmetrical category.
By way of example, referring to fig. 2a, image (1) of fig. 2a shows symmetric left and right breasts: in both the upper and lower sets of images the left and right breasts are similar in shape and almost identical in size, so the left and right breast regions can be considered symmetric. Image (2) of fig. 2a shows asymmetric left and right breasts: the left and right breast shapes are dissimilar, and the right breast region is larger than the left breast region, so the left and right breast regions are considered asymmetric.
In the image analysis method, the left breast image to be analyzed is input into the first neural network model and the right breast image to be analyzed into the second neural network model; feature extraction and similarity analysis are performed on the two images in the two neural network models, yielding an analysis result that represents whether the left and right breast regions in the images are symmetrical. Because feature extraction and similarity analysis are carried out by neural network models, the symmetry result is quantified rather than obtained manually through visual observation and experience; the lower accuracy caused by human factors is thus avoided, and the obtained symmetry result of the left and right breasts is more accurate.
In another embodiment, another image analysis method is provided; this embodiment relates to the specific process of registering the left and right breast images before feature extraction and similarity analysis are performed on them. On the basis of the above embodiment, as shown in fig. 3, the method may further include the following steps:
S302, registering the left breast image and the right breast image to obtain a registered left breast image and a registered right breast image.
In this step, a rigid registration method may be used to register the left breast image and the right breast image. During registration, key points are selected on the left breast region of the left breast image and on the right breast region of the right breast image; a similarity measure is then computed for each key point on the two images to obtain matching feature point pairs, from which a spatial coordinate transformation (spatial transformation relation) between the two images is derived; finally the two images are registered using this transformation. After registration, one of the images may remain unchanged while the other is transformed; the results are collectively referred to here as the registered left breast image and the registered right breast image. Key points may be selected as follows: for example, contour points and the nipple of the left breast region may be selected as the key points of the left breast region, and contour points and the nipple of the right breast region as the key points of the right breast region.
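Given matched key-point pairs, the rigid transformation can be estimated in closed form; the Kabsch (SVD-based) procedure below is one standard way to do it, shown here as a sketch rather than the patent's specific method, with made-up 2-D key points standing in for nipple and contour coordinates.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t mapping src points onto dst
    (Kabsch algorithm on centered coordinates)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical matched key points: e.g. nipple center plus contour points.
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
pts = np.array([[10.0, 40.0], [30.0, 55.0], [25.0, 20.0], [50.0, 35.0]])
moved = pts @ R_true.T + np.array([5.0, -3.0])   # rotated + shifted copy

R, t = rigid_transform(pts, moved)
aligned = pts @ R.T + t                          # should land on `moved`
```

Applying the recovered transformation to one side's key points brings them into correspondence with the other side, which is the alignment the subsequent feature comparison relies on.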
By registering the left and right breast images, errors caused by positional offset of the breast regions on the two images, which would otherwise affect the accuracy of subsequent feature extraction, can be eliminated.
After the left and right breast images are registered, the above step S204 may include the following step S304:

S304, inputting the registered left breast image into the first neural network model and the registered right breast image into the second neural network model, and performing the feature extraction operation and the feature similarity analysis operation in the two models to obtain the analysis result.
It should be noted that the feature extraction and feature similarity analysis performed in the first and second neural network models in this step are mainly described for the left and right breast segmentation images. When feature extraction and similarity analysis are performed on the registered segmentation images, this step may optionally be carried out through the following steps A1 and A2:
Step A1, inputting the registered left breast segmentation image into the first neural network model and the registered right breast segmentation image into the second neural network model, and performing the feature extraction operation in the two models to obtain a left breast feature vector and a right breast feature vector.
Step A2, calculating the similarity between the left breast feature vector and the right breast feature vector using a similarity analysis algorithm, and obtaining the analysis result based on the calculated similarity.
In steps A1 and A2, the registered left and right breast segmentation images are input into the first and second neural network models respectively for feature extraction, yielding a feature map corresponding to each segmentation image; vector conversion is then performed on each feature map, yielding the feature vectors corresponding to the two segmentation images, which are recorded as the left breast feature vector and the right breast feature vector.
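The "vector conversion" from feature map to feature vector can be done in several ways; the snippet below shows global average pooling as one plausible choice (the patent does not specify the operator, and the tiny (2, 2, 2) feature map is purely illustrative).

```python
import numpy as np

def to_feature_vector(feature_map):
    """Collapse a (C, H, W) feature map into a C-dim vector by global
    average pooling, one common way to obtain the breast feature vector."""
    return feature_map.mean(axis=(1, 2))

# Toy feature map: 2 channels of 2x2 activations.
fm = np.arange(2 * 2 * 2, dtype=float).reshape(2, 2, 2)
vec = to_feature_vector(fm)   # one scalar per channel
```

Flattening the map directly (`feature_map.reshape(-1)`) would be an equally valid conversion; pooling simply keeps the vector length independent of the image size.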
In addition, the similarity analysis algorithm may be the contrastive loss function mentioned in S204. A similarity measure between the left and right breast feature vectors may first be calculated using the following formula (1), and the result then passed through the logistic function of formula (2) to obtain the analysis result of whether the left and right breast regions are symmetric or asymmetric:

E_W(X_1, X_2) = ||G_W(X_1) - G_W(X_2)||    (1)

y = σ(E_W(X_1, X_2) + b)    (2)

wherein X_1 and X_2 are the input left and right breast segmentation images; G_W(X_1) and G_W(X_2) are the left and right breast feature vectors; W represents the weight parameters shared by the first and second neural network models, which can be set according to the actual situation; E_W(X_1, X_2) is the similarity matrix (measure) between the left and right breast feature vectors; y represents the probability that the left and right breast regions are asymmetric; σ represents the sigmoid function; and b represents the bias parameter of the first and second neural network models, which can also be set according to the actual situation.
The probability that the left and right breast regions are asymmetric can be calculated directly through formulas (1) and (2); this is the calculated similarity. The calculated similarity is then compared with a preset similarity threshold: if it is greater than the threshold, the analysis result is that the left and right breast regions are asymmetric; otherwise, the analysis result is that they are symmetric. That is, the probability of asymmetry is compared with a probability threshold (i.e. the similarity threshold). The probability threshold is generally 0.5, although other values may be used, determined according to the actual situation.
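The threshold comparison just described reduces to a one-line decision rule; this sketch assumes the 0.5 default mentioned above and uses hypothetical category names.

```python
def classify_symmetry(asymmetry_prob, threshold=0.5):
    """Map the computed asymmetry probability y to a categorical result:
    strictly above the threshold -> asymmetric, otherwise symmetric."""
    return "asymmetric" if asymmetry_prob > threshold else "symmetric"
```

Note the boundary case: a probability exactly equal to the threshold is classified as symmetric, matching the "not greater than" wording above.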
According to the image analysis method provided by this embodiment, the left and right breast segmentation images are registered, and the registered images are input into the first and second neural network models for feature extraction and similarity analysis, obtaining the analysis result of whether the left and right breast regions are symmetric or asymmetric. Because the segmentation images are registered before feature extraction, errors caused by their positional offset are eliminated; the features subsequently extracted from the registered images are therefore more accurate, and the analysis result obtained from those accurate features is more accurate in turn.
In another embodiment, another image analysis method is provided; this embodiment relates to the specific process of training the first neural network model and the second neural network model. On the basis of the above embodiment, as shown in fig. 4, the training process of the two models may include the following steps:
S402, inputting each group of training image pairs into an initial first neural network model and an initial second neural network model to obtain the training feature vector pair corresponding to each group of training image pairs, each training feature vector pair comprising a left breast training feature vector and a right breast training feature vector.
Preferably, the two network models in this embodiment are trained using segmentation images of the left and right breasts. If the left and right breast training images are original images, they may be segmented before being input into the initial models, obtaining the segmentation images of the left and right breasts.
S404, calculating the similarity between the left and right training feature vectors in each training feature vector pair, and obtaining the predicted symmetry result corresponding to each group of training image pairs according to the obtained similarity of each pair.
In this step, the similarity matrix of each training feature vector pair and the probability of left-right structural asymmetry may be calculated using formulas (1) and (2) above; the probability of asymmetry of each pair is then compared with the probability threshold. If the probability of asymmetry of a pair is greater than the threshold, the predicted symmetry result for the corresponding training image pair is asymmetric; otherwise it is symmetric.
S406, training the initial first neural network model and the initial second neural network model based on the predicted symmetry result and the corresponding labeled symmetry result of each group of training image pairs, obtaining the first neural network model and the second neural network model.
In this step, optionally, when the network models are trained, the loss between the predicted symmetry result and the corresponding labeled symmetry result of each group of training image pairs is calculated; the losses of all groups are summed, and the resulting sum is used to train the initial first and second neural network models, obtaining the first neural network model and the second neural network model.
Taking formulas (1) and (2) above as examples, X_1 and X_2 here denote the training image pair, and the other symbols correspondingly denote training data and parameters. Referring to fig. 4a, the logistic regression unit (i.e. the similarity analysis algorithm described above) may be used to calculate the symmetry result of each group of training image pairs, obtaining the predicted symmetry results; the value y obtained above may be used directly as the predicted symmetry result. The following formulas (3) and (4) may then be used to calculate the loss between the predicted symmetry result and the labeled symmetry result:
L(y_i, Y_i) = -Y_i·log(y_i) - (1 - Y_i)·log(1 - y_i)    (3)

Γ(W, b) = Σ_{i=1}^{P} L(y_i, Y_i)    (4)

wherein Y_i represents the labeled symmetry result of the i-th training image pair, y_i the predicted symmetry result of that pair, i the index of the training image pair, P the total number of training image pairs, L(y_i, Y_i) the loss of each training image pair, and Γ(W, b) the sum of the losses over all training image pairs.
Through formula (3) the loss between the predicted and labeled symmetry results of each training image pair can be calculated, and through formula (4) these losses are summed into a loss sum, with which the initial first and second neural network models are trained. The structures of the two models (the twin network structure, which may be a convolutional network) are shown in fig. 4a; they share the same set of weight parameters, such as W in the figure, and the parameters adjusted during training are generally those in W. When the loss sum of the two models falls below a preset threshold, or when the loss sum is essentially stable, training can be considered complete; otherwise training continues. When training ends, the parameters of the two models are fixed for use in the subsequent feature extraction and similarity analysis.
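Formulas (3) and (4) together are the standard binary cross-entropy summed over the training pairs; the sketch below computes them directly, with hypothetical predictions and labels (the small epsilon guard is an implementation detail not present in the formulas).

```python
import math

def pair_loss(y, Y):
    """Per-pair loss of formula (3): -Y*log(y) - (1-Y)*log(1-y)."""
    eps = 1e-12  # numerical guard against log(0); not in the original formula
    return -Y * math.log(y + eps) - (1 - Y) * math.log(1 - y + eps)

def total_loss(preds, labels):
    """Formula (4): sum of per-pair losses over all P training pairs."""
    return sum(pair_loss(y, Y) for y, Y in zip(preds, labels))

preds = [0.9, 0.2, 0.5]   # hypothetical predicted asymmetry probabilities y_i
labels = [1, 0, 1]        # hypothetical labeled symmetry results Y_i
loss = total_loss(preds, labels)
```

Minimizing this sum with gradient descent adjusts the shared weights W (and bias b) of both branches simultaneously, which is what keeps the twin branches identical throughout training.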
According to the image analysis method provided by this embodiment, feature extraction and similarity calculation are performed on each group of training image pairs to obtain their predicted symmetry results, and the initial first and second neural network models are trained with the predicted and labeled symmetry results of each group, obtaining the two trained network models. Because the two initial neural network models are trained using multiple groups of left-right training image pairs together with the predicted and labeled symmetry results of each group, the resulting models are accurate, and the analysis results obtained when these accurate models perform feature extraction and similarity analysis are accurate in turn.
In another embodiment, another image analysis method is provided; this embodiment relates to the specific process of obtaining the images to be analyzed when the left breast image to be analyzed is a left breast segmentation image and the right breast image to be analyzed is a right breast segmentation image. On the basis of the above embodiment, as shown in fig. 5, the above step S202 may include the following steps:
S502, acquiring an original left breast image and an original right breast image; the original left breast image includes a left breast region and the original right breast image includes a right breast region.
In this step, the original left breast image and the original right breast image are breast images of the same view, including craniocaudal images and/or mediolateral oblique images. That is, in actual operation, the breast image data obtained by scanning the detection object can be acquired from a breast molybdenum-target X-ray machine: the tag information of the digital imaging dcm files is read, and the images whose view position (ViewPosition) is CC (craniocaudal, axial) or MLO (mediolateral oblique) and whose laterality is L or R are screened out, obtaining the original left and right breast images. Meanwhile, the grey-value information in the dcm data files can be read, and the window width and window level of the original left and right breast images normalized according to it, mapping the original left and right breast images into 256-level grayscale images.
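The window-width/window-level normalization to 256 grey levels described above can be sketched as follows; the center and width values are made-up examples, not values from the patent.

```python
import numpy as np

def window_to_8bit(img, center, width):
    """Map raw detector intensities into a 256-level grayscale image using
    a window center/width, as in the dcm grey-value normalization above."""
    lo, hi = center - width / 2.0, center + width / 2.0
    clipped = np.clip(img, lo, hi)                      # clamp to the window
    return np.round((clipped - lo) / (hi - lo) * 255).astype(np.uint8)

# Toy raw intensities; a real mammogram would be a large 12/16-bit array.
raw = np.array([[0, 1000], [2000, 4000]], dtype=float)
out = window_to_8bit(raw, center=2000, width=4000)
```

Values below the window floor map to 0 and values above the ceiling to 255, so the window choice controls which intensity range survives into the 8-bit image.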
S504, performing segmentation processing on the original left breast image and the original right breast image to obtain a left breast segmentation image and a right breast segmentation image.
S506, determining the left breast segmentation image as the left breast image to be analyzed, and the right breast segmentation image as the right breast image to be analyzed.
In this embodiment, before S504, either of the original left and right breast images may be flipped so that the orientations of the left and right breast regions are consistent, and the flipped images of the two breasts then segmented to obtain the left and right breast segmentation images; alternatively, after the segmentation processing of S504, either of the segmentation images may be flipped so that the orientations of the breast regions coincide. The flipping may be along the horizontal or vertical direction, or along another direction; this embodiment does not specifically limit it, as long as the breast regions in the two images have the same or consistent orientation after flipping. The same or consistent orientation may mean, for example, that the nipples in both images face the same direction. Flipping the original images before segmentation avoids segmentation errors caused by differing orientations and improves the accuracy of the segmentation result.
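The orientation-matching flip is a simple mirror; the toy binary mask below stands in for a full-resolution mammogram (the mask values and shape are illustrative only).

```python
import numpy as np

# Toy right-breast mask with the chest wall at the right edge.
right = np.array([[0, 0, 1],
                  [0, 1, 1]])

# Mirror it horizontally so it matches the left-breast orientation.
left_like = np.fliplr(right)
```

A vertical flip (`np.flipud`) or another axis would work equally, as the text notes; the only requirement is that both regions end up facing the same way.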
In addition, when the segmentation processing is performed, a trained segmentation model may be used to segment the original left and right breast images directly, or to segment the flipped images of the two breasts. The segmentation model is obtained by training on a plurality of breast training images, each labeled with breast contour information, nipple position information, and the like; the labeled contour information may be the position of each point on the breast contour. The segmentation model yields the segmentation images of the left and right breasts together with the contour position information and nipple position information of both breasts; the regions enclosed by the contours constitute the left and right breast regions. The segmentation model may be a neural network model or another model; preferably it is a LinkNet network model. The contour position information obtained here may be the coordinates of each point on the contours of the left and right breasts, and the nipple position information the coordinates of the center points of the left and right nipples.
Accordingly, after the left and right breast segmentation images, the contour position information, and the nipple position information are obtained, the segmentation images can be registered based on the breast contour position information and the nipple position information to obtain the registered segmentation images. That is, the points on the contours of the left and right breasts and the left and right nipples may serve as the key points in S302, and the segmentation images are registered using the position information of these points; the specific registration steps have already been described in S302 and are not repeated here.
The image analysis method provided by this embodiment performs segmentation processing on the original left and right breast images and takes the obtained segmentation images as the left and right breast images to be analyzed. Because the images to be analyzed are segmentation images, feature extraction in the subsequent steps can be targeted at the left and right breast regions on the segmentation maps, making the final symmetry analysis result more accurate.
In another embodiment, to describe the technical solution of the present application in more detail, the following description is provided in connection with a more detailed embodiment, in which the method may include the following steps S1-S11:
S1, acquiring an original left breast image and an original right breast image.
S2, flipping either of the original left breast image and the original right breast image so that the two images have the same orientation.
S3, performing segmentation processing on the flipped images of the two breasts to obtain a left breast segmentation image to be analyzed and a right breast segmentation image to be analyzed, both of which include breast contour position information and nipple position information.
S4, registering the left breast segmentation image and the right breast segmentation image based on the breast contour position information and the nipple position information to obtain a registered left breast segmentation image and a registered right breast segmentation image.
S5, inputting the registered left breast segmentation image into the first neural network model and the registered right breast segmentation image into the second neural network model, and performing the feature extraction operation in the two models to obtain a left-side feature vector and a right-side feature vector.
S6, calculating the similarity between the left-side feature vector and the right-side feature vector using a similarity analysis algorithm, and obtaining the analysis result based on the calculated similarity.
It should be understood that, although the steps in the flowcharts of figs. 2-5 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in figs. 2-5 may include multiple sub-steps or stages, which need not be performed at the same moment but may be performed at different moments, and need not be performed in sequence but may be performed in turn or in alternation with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided an image analysis apparatus including: an acquisition module 10 and an analysis module 11, wherein:
an acquisition module 10, configured to acquire a left breast image to be analyzed and a right breast image to be analyzed; the left breast image includes a left breast region, the right breast image includes a right breast region, and the orientations of the left breast region and the right breast region are consistent;
an analysis module 11, configured to input the left breast image into a first neural network model and the right breast image into a second neural network model, and to perform a feature extraction operation and a feature similarity analysis operation in the two models to obtain an analysis result; the analysis result is used for representing whether the left breast region and the right breast region are symmetrical; the first neural network model and the second neural network model are obtained by training based on a plurality of groups of training image pairs and the labeled symmetry result corresponding to each group, each group comprising a left breast training image and a right breast training image.
Optionally, the first neural network model, the second neural network model and the similarity analysis algorithm form a twin network model.
For specific limitations of the image analysis apparatus, reference may be made to the above limitations of the image analysis method, and no further description is given here.
In another embodiment, another image analysis apparatus is provided; before the analysis module 11 operates, the apparatus may further include a registration module configured to register the left breast image and the right breast image to obtain a registered left breast image and a registered right breast image;
the analysis module 11 is further configured to input the registered left breast image into the first neural network model and the registered right breast image into the second neural network model, and to perform the feature extraction operation and the feature similarity analysis operation in the two models to obtain the analysis result.
Optionally, the analysis module 11 is further configured to input the registered left breast segmentation image into the first neural network model and the registered right breast segmentation image into the second neural network model, perform the feature extraction operation in the two models to obtain a left breast feature vector and a right breast feature vector, calculate the similarity between the two feature vectors using the similarity analysis algorithm, and obtain the analysis result based on the calculated similarity.
In another embodiment, another image analysis apparatus is provided, where the apparatus may further include a model training module, the model training module including an extraction unit, a calculation unit, and a training unit, where:
the extraction unit is configured to input each group of training image pairs into the initial first neural network model and the initial second neural network model to obtain the training feature vector pair corresponding to each group, each training feature vector pair comprising a left breast training feature vector and a right breast training feature vector;
the calculation unit is configured to calculate the similarity between the left and right training feature vectors in each training feature vector pair, and to obtain the predicted symmetry result corresponding to each group of training image pairs according to the obtained similarity of each pair;
the training unit is used for training the initial first neural network model and the initial second neural network model based on the predicted symmetrical result and the corresponding marked symmetrical result of each group of training image pairs to obtain the first neural network model and the second neural network model.
Optionally, the training unit is configured to calculate a loss between a predicted symmetric result and a corresponding labeled symmetric result of each training image pair; and summing the losses of each group of training image pairs, and training the initial first neural network model and the initial second neural network model by using the obtained sum value to obtain the first neural network model and the second neural network model.
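The per-pair loss and its summation can be sketched as follows. The claims later mention a contrastive loss, but its exact formula and margin are not disclosed, so the form used here (a standard margin-based contrastive loss on a similarity score) is an assumption:

```python
def contrastive_loss(sim, label, margin=0.5):
    """label = 1 for a pair annotated symmetric, 0 otherwise.
    Symmetric pairs are pushed toward similarity 1; asymmetric
    pairs are pushed at least `margin` away from it."""
    d = 1.0 - sim  # distance from perfect similarity
    return label * d ** 2 + (1 - label) * max(0.0, margin - d) ** 2

def total_loss(similarities, labels):
    """Sum the per-pair losses over all training image pairs,
    as the training unit does before the update step."""
    return sum(contrastive_loss(s, y) for s, y in zip(similarities, labels))
```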
In another embodiment, another image analysis apparatus is provided, and the acquiring module 10 may include an original image acquisition unit, a segmentation unit, and a determination unit, where:
the original image acquisition unit is configured to acquire an original left breast image and an original right breast image, the original left breast image including a left breast region and the original right breast image including a right breast region;
the segmentation unit is configured to perform segmentation processing on the original left breast image and the original right breast image to obtain a left breast segmentation image and a right breast segmentation image;
and the determination unit is configured to determine the left breast segmentation image as the left breast image to be analyzed and the right breast segmentation image as the right breast image to be analyzed.
Optionally, if the left breast segmentation image and the right breast segmentation image both include breast contour position information and nipple position information, the registration module is further configured to register the left breast segmentation image and the right breast segmentation image based on the breast contour position information and the nipple position information to obtain a registered left breast segmentation image and a registered right breast segmentation image.
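As a rough illustration of landmark-based registration, the sketch below translates an image so that its nipple landmark lands on a common target position. This is a deliberate simplification: the embodiments also use the breast contour, and a real registration would typically estimate an affine or deformable transform, so everything here is an assumption:

```python
import numpy as np

def register_by_landmarks(image, nipple_xy, target_xy):
    """Translate a 2-D image so its nipple landmark (x, y) moves to
    a common target position; integer shift with zero padding."""
    dx = int(round(target_xy[0] - nipple_xy[0]))
    dy = int(round(target_xy[1] - nipple_xy[1]))
    h, w = image.shape
    out = np.zeros_like(image)
    # destination and source windows for a shift of (dy, dx)
    dst_ys, dst_xs = slice(max(0, dy), min(h, h + dy)), slice(max(0, dx), min(w, w + dx))
    src_ys, src_xs = slice(max(0, -dy), min(h, h - dy)), slice(max(0, -dx), min(w, w - dx))
    out[dst_ys, dst_xs] = image[src_ys, src_xs]
    return out
```

Applying the same function to both breast segmentation images with one shared target position yields a pair of coarsely aligned images.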
For specific limitations of the image analysis apparatus, reference may be made to the above limitations of the image analysis method, and no further description is given here.
The respective modules in the above image analysis apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware in, or independent of, a processor in the computer device, or may be stored as software in a memory in the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring a left breast image to be analyzed and a right breast image to be analyzed; the left breast image includes a left breast region, the right breast image includes a right breast region, and the orientations of the left breast region and the right breast region are consistent;
inputting the left breast image into a first neural network model, inputting the right breast image into a second neural network model, and performing a feature extraction operation and a feature similarity analysis operation in the first neural network model and the second neural network model to obtain an analysis result; the analysis result is used for indicating whether the left breast region and the right breast region are symmetric; the first neural network model and the second neural network model are obtained by training based on a plurality of groups of training image pairs and a labeled symmetry result corresponding to each group of training image pairs, and each group of training image pairs includes a left breast training image and a right breast training image.
In one embodiment, the processor when executing the computer program further performs the steps of:
registering the left breast image and the right breast image to obtain a registered left breast image and a registered right breast image; and inputting the registered left breast image into the first neural network model, inputting the registered right breast image into the second neural network model, and performing a feature extraction operation and a feature similarity analysis operation in the first neural network model and the second neural network model to obtain an analysis result.
In one embodiment, the processor when executing the computer program further performs the steps of:
acquiring an original left breast image and an original right breast image, the original left breast image including a left breast region and the original right breast image including a right breast region; performing segmentation processing on the original left breast image and the original right breast image to obtain a left breast segmentation image and a right breast segmentation image; and determining the left breast segmentation image as the left breast image to be analyzed and the right breast segmentation image as the right breast image to be analyzed.
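The segmentation processing step is not detailed in this embodiment; as a crude stand-in, the breast region can be isolated by intensity thresholding and bounding-box cropping. The threshold value and the crop-to-bounding-box behavior are assumptions, not the disclosed method:

```python
import numpy as np

def segment_breast(image, threshold=0.1):
    """Crude breast segmentation: keep above-threshold pixels and
    crop the image to their bounding box. Returns (crop, mask)."""
    mask = image > threshold
    if not mask.any():
        return image.copy(), mask
    ys, xs = np.where(mask)
    crop = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return crop, mask
```

A production pipeline would more plausibly use a trained segmentation network, with this thresholding only as a baseline.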
In one embodiment, the processor when executing the computer program further performs the steps of:
registering the left breast segmentation image and the right breast segmentation image based on the breast contour position information and the nipple position information to obtain a registered left breast segmentation image and a registered right breast segmentation image.
In one embodiment, the processor when executing the computer program further performs the steps of:
inputting the registered left breast segmentation image into the first neural network model, inputting the registered right breast segmentation image into the second neural network model, and performing a feature extraction operation in the first neural network model and the second neural network model to obtain a left breast feature vector and a right breast feature vector; and calculating the similarity between the left breast feature vector and the right breast feature vector by using a similarity analysis algorithm, and obtaining an analysis result based on the calculated similarity.
In one embodiment, the processor when executing the computer program further performs the steps of:
inputting each group of training image pairs into an initial first neural network model and an initial second neural network model to obtain a training feature vector pair corresponding to each group of training image pairs, where each training feature vector pair includes a left breast training feature vector and a right breast training feature vector; calculating the similarity between the left breast training feature vector and the right breast training feature vector in each training feature vector pair, and obtaining a predicted symmetry result corresponding to each group of training image pairs according to the obtained similarity of each training feature vector pair; and training the initial first neural network model and the initial second neural network model based on the predicted symmetry result and the corresponding labeled symmetry result of each group of training image pairs to obtain the first neural network model and the second neural network model.
In one embodiment, the processor when executing the computer program further performs the steps of:
calculating a loss between the predicted symmetry result and the corresponding labeled symmetry result of each group of training image pairs; and summing the losses of all groups of training image pairs, and training the initial first neural network model and the initial second neural network model by using the obtained sum to obtain the first neural network model and the second neural network model.
In one embodiment, the first neural network model, the second neural network model, and the similarity analysis algorithm together form a twin (Siamese) network model.
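The defining property of a twin (Siamese) network is that both branches apply the same weights, so identical inputs always produce identical feature vectors. A toy single-layer illustration follows; the layer shape, activation, and random initialization are arbitrary assumptions, not the architecture of the embodiments:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 16))  # one weight matrix shared by both branches

def extract_features(image_patch):
    """Single shared-weight embedding applied by either branch to
    a flattened 8x8 patch, producing a 16-dim feature vector."""
    x = image_patch.reshape(-1)  # (64,)
    return np.tanh(x @ W)        # (16,)
```

Because `W` is shared, the left-breast and right-breast branches are interchangeable; only their inputs differ.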
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which, when executed by a processor, implements the following steps:
acquiring a left breast image to be analyzed and a right breast image to be analyzed; the left breast image includes a left breast region, the right breast image includes a right breast region, and the orientations of the left breast region and the right breast region are consistent;
inputting the left breast image into a first neural network model, inputting the right breast image into a second neural network model, and performing a feature extraction operation and a feature similarity analysis operation in the first neural network model and the second neural network model to obtain an analysis result; the analysis result is used for indicating whether the left breast region and the right breast region are symmetric; the first neural network model and the second neural network model are obtained by training based on a plurality of groups of training image pairs and a labeled symmetry result corresponding to each group of training image pairs, and each group of training image pairs includes a left breast training image and a right breast training image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
registering the left breast image and the right breast image to obtain a registered left breast image and a registered right breast image; and inputting the registered left breast image into the first neural network model, inputting the registered right breast image into the second neural network model, and performing a feature extraction operation and a feature similarity analysis operation in the first neural network model and the second neural network model to obtain an analysis result.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring an original left breast image and an original right breast image, the original left breast image including a left breast region and the original right breast image including a right breast region; performing segmentation processing on the original left breast image and the original right breast image to obtain a left breast segmentation image and a right breast segmentation image; and determining the left breast segmentation image as the left breast image to be analyzed and the right breast segmentation image as the right breast image to be analyzed.
In one embodiment, the computer program when executed by the processor further performs the steps of:
registering the left breast segmentation image and the right breast segmentation image based on the breast contour position information and the nipple position information to obtain a registered left breast segmentation image and a registered right breast segmentation image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
inputting the registered left breast segmentation image into the first neural network model, inputting the registered right breast segmentation image into the second neural network model, and performing a feature extraction operation in the first neural network model and the second neural network model to obtain a left breast feature vector and a right breast feature vector; and calculating the similarity between the left breast feature vector and the right breast feature vector by using a similarity analysis algorithm, and obtaining an analysis result based on the calculated similarity.
In one embodiment, the computer program when executed by the processor further performs the steps of:
inputting each group of training image pairs into an initial first neural network model and an initial second neural network model to obtain a training feature vector pair corresponding to each group of training image pairs, where each training feature vector pair includes a left breast training feature vector and a right breast training feature vector; calculating the similarity between the left breast training feature vector and the right breast training feature vector in each training feature vector pair, and obtaining a predicted symmetry result corresponding to each group of training image pairs according to the obtained similarity of each training feature vector pair; and training the initial first neural network model and the initial second neural network model based on the predicted symmetry result and the corresponding labeled symmetry result of each group of training image pairs to obtain the first neural network model and the second neural network model.
In one embodiment, the computer program when executed by the processor further performs the steps of:
calculating a loss between the predicted symmetry result and the corresponding labeled symmetry result of each group of training image pairs; and summing the losses of all groups of training image pairs, and training the initial first neural network model and the initial second neural network model by using the obtained sum to obtain the first neural network model and the second neural network model.
In one embodiment, the first neural network model, the second neural network model, and the similarity analysis algorithm together form a twin (Siamese) network model.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may include the flows of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, and the like. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments merely represent several implementations of the present application; their descriptions are relatively specific and detailed, but they are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art may make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be determined by the appended claims.

Claims (10)

1. A method of image analysis, the method comprising:
acquiring a left breast image to be analyzed and a right breast image to be analyzed; the left breast image to be analyzed comprises a left breast region, the right breast image to be analyzed comprises a right breast region, and the orientations of the left breast region and the right breast region are consistent;
inputting the left breast image to be analyzed into a first neural network model, inputting the right breast image to be analyzed into a second neural network model, and performing a feature extraction operation and a feature similarity analysis operation in the first neural network model and the second neural network model to obtain an analysis result; the analysis result is used for indicating whether the left breast region and the right breast region are symmetric;
wherein the first neural network model and the second neural network model are obtained by training based on a plurality of groups of training image pairs and a labeled symmetry result corresponding to each group of training image pairs, and each group of training image pairs comprises a left breast training image and a right breast training image.
2. The method of claim 1, wherein the acquiring the left breast image to be analyzed and the right breast image to be analyzed comprises:
acquiring an original left breast image and an original right breast image; the original left breast image comprises a left breast region, and the original right breast image comprises a right breast region;
performing segmentation processing on the original left breast image and the original right breast image to obtain a left breast segmentation image and a right breast segmentation image; and
determining the left breast image to be analyzed and the right breast image to be analyzed according to the left breast segmentation image and the right breast segmentation image.
3. The method of claim 2, wherein the determining the left breast image to be analyzed and the right breast image to be analyzed according to the left breast segmentation image and the right breast segmentation image comprises:
flipping either one of the left breast segmentation image and the right breast segmentation image, and obtaining the left breast image to be analyzed and the right breast image to be analyzed according to the flipped image.
4. The method of claim 2, wherein the inputting the left breast image to be analyzed into a first neural network model, inputting the right breast image to be analyzed into a second neural network model, and performing a feature extraction operation and a feature similarity analysis operation in the first neural network model and the second neural network model to obtain an analysis result comprises:
determining breast contour position information and nipple position information in each of the left breast segmentation image and the right breast segmentation image;
registering the left breast image to be analyzed and the right breast image to be analyzed based on the breast contour position information and the nipple position information to obtain a registered left breast image and a registered right breast image; and
inputting the registered left breast image into the first neural network model, inputting the registered right breast image into the second neural network model, and performing a feature extraction operation and a feature similarity analysis operation in the first neural network model and the second neural network model to obtain the analysis result.
5. The method of claim 4, wherein the inputting the registered left breast image into the first neural network model, inputting the registered right breast image into the second neural network model, and performing a feature extraction operation and a feature similarity analysis operation in the first neural network model and the second neural network model to obtain the analysis result comprises:
inputting the registered left breast image into the first neural network model, inputting the registered right breast image into the second neural network model, and performing a feature extraction operation in the first neural network model and the second neural network model to obtain a left breast feature vector and a right breast feature vector; and
calculating a similarity matrix between the left breast feature vector and the right breast feature vector, and processing the similarity matrix with a preset contrastive loss function to obtain the analysis result.
6. The method of claim 5, wherein the processing the similarity matrix with the contrastive loss function to obtain the analysis result comprises:
determining a corresponding first weight parameter according to the first neural network model, and determining a corresponding second weight parameter according to the second neural network model; and
computing the analysis result from the first weight parameter, the second weight parameter, and the similarity matrix by using the contrastive loss function.
7. The method of claim 6, wherein the first weight parameter and the second weight parameter are the same set of weight parameters.
8. The method of any one of claims 2 to 7, wherein the acquiring the original left breast image and the original right breast image comprises:
reading tag information from a digital imaging file of a breast molybdenum-target X-ray machine, and screening out images whose view position is the craniocaudal position with left-view and right-view laterality, or whose view position is the mediolateral oblique position with left-view and right-view laterality, to obtain a screened left breast image and a screened right breast image; and
reading gray-scale information from the digital imaging file of the breast molybdenum-target X-ray machine, and normalizing the screened left breast image and the screened right breast image according to the gray-scale information to obtain the original left breast image and the original right breast image.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 8 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 8.
CN202310103687.0A 2020-05-08 2020-05-08 Image analysis method, computer device, and storage medium Pending CN116091466A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310103687.0A CN116091466A (en) 2020-05-08 2020-05-08 Image analysis method, computer device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010380792.5A CN111681205B (en) 2020-05-08 2020-05-08 Image analysis method, computer device, and storage medium
CN202310103687.0A CN116091466A (en) 2020-05-08 2020-05-08 Image analysis method, computer device, and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010380792.5A Division CN111681205B (en) 2020-05-08 2020-05-08 Image analysis method, computer device, and storage medium

Publications (1)

Publication Number Publication Date
CN116091466A true CN116091466A (en) 2023-05-09

Family

ID=72452232

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010380792.5A Active CN111681205B (en) 2020-05-08 2020-05-08 Image analysis method, computer device, and storage medium
CN202310103687.0A Pending CN116091466A (en) 2020-05-08 2020-05-08 Image analysis method, computer device, and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202010380792.5A Active CN111681205B (en) 2020-05-08 2020-05-08 Image analysis method, computer device, and storage medium

Country Status (1)

Country Link
CN (2) CN111681205B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116823905A (en) * 2023-06-26 2023-09-29 阿里巴巴达摩院(杭州)科技有限公司 Image registration method, electronic device, and computer-readable storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113192031B (en) * 2021-04-29 2023-05-30 上海联影医疗科技股份有限公司 Vascular analysis method, vascular analysis device, vascular analysis computer device, and vascular analysis storage medium
CN113421633A (en) * 2021-06-25 2021-09-21 上海联影智能医疗科技有限公司 Feature classification method, computer device, and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295646B (en) * 2016-08-10 2019-08-23 东方网力科技股份有限公司 A kind of registration number character dividing method and device based on deep learning
CN109242849A (en) * 2018-09-26 2019-01-18 上海联影智能医疗科技有限公司 Medical image processing method, device, system and storage medium
TWI675646B (en) * 2018-10-08 2019-11-01 財團法人資訊工業策進會 Breast image analysis method, system, and non-transitory computer-readable medium

Also Published As

Publication number Publication date
CN111681205B (en) 2023-04-07
CN111681205A (en) 2020-09-18

Similar Documents

Publication Publication Date Title
Vishnuvarthanan et al. An automated hybrid approach using clustering and nature inspired optimization technique for improved tumor and tissue segmentation in magnetic resonance brain images
CN108428233B (en) Knowledge-based automatic image segmentation
CN111681205B (en) Image analysis method, computer device, and storage medium
Kuang et al. EIS-Net: segmenting early infarct and scoring ASPECTS simultaneously on non-contrast CT of patients with acute ischemic stroke
Dogan et al. A two-phase approach using mask R-CNN and 3D U-Net for high-accuracy automatic segmentation of pancreas in CT imaging
CN109712163B (en) Coronary artery extraction method, device, image processing workstation and readable storage medium
JP2023540910A (en) Connected Machine Learning Model with Collaborative Training for Lesion Detection
CN110619635B (en) Hepatocellular carcinoma magnetic resonance image segmentation system and method based on deep learning
CN110751187B (en) Training method of abnormal area image generation network and related product
KR102321487B1 (en) artificial intelligence-based diagnostic device for shapes and characteristics of detected tumors
CN111340825A (en) Method and system for generating mediastinal lymph node segmentation model
CN110880366A (en) Medical image processing system
CN111861989A (en) Method, system, terminal and storage medium for detecting midline of brain
CN114332132A (en) Image segmentation method and device and computer equipment
CN114445334A (en) Image analysis method, device, equipment and storage medium
CN111798410A (en) Cancer cell pathological grading method, device, equipment and medium based on deep learning model
WO2023198166A1 (en) Image detection method, system and device, and storage medium
CN110992312B (en) Medical image processing method, medical image processing device, storage medium and computer equipment
Verburg et al. Knowledge‐based and deep learning‐based automated chest wall segmentation in magnetic resonance images of extremely dense breasts
CN113705807B (en) Neural network training device and method, ablation needle arrangement planning device and method
CN115661100A (en) Positron emission computed tomography focus detection method and system
CN115375787A (en) Artifact correction method, computer device and readable storage medium
CN114926487A (en) Multi-modal image brain glioma target area segmentation method, system and equipment
Lindeijer et al. Leveraging multi-view data without annotations for prostate MRI segmentation: A contrastive approach
CN114463288B (en) Brain medical image scoring method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination