CN112561908A - Breast image lesion matching method, device and storage medium - Google Patents


Info

Publication number: CN112561908A
Authority: CN (China)
Legal status: Granted
Application number: CN202011554809.0A
Other languages: Chinese (zh)
Other versions: CN112561908B (en)
Inventors: 王逸川, 赵子威, 王子腾, 王立威, 孙应实, 胡阳, 丁佳, 吕晨翀
Current Assignee: Guangxi Yizhun Intelligent Technology Co ltd; Zhejiang Yizhun Intelligent Technology Co ltd
Original Assignee: Guangxi Yizhun Intelligent Technology Co ltd; Beijing Yizhun Medical AI Co Ltd
Priority date: 2020-12-24
Filing date: 2020-12-24
Application filed by: Guangxi Yizhun Intelligent Technology Co ltd; Beijing Yizhun Medical AI Co Ltd
Priority to: CN202011554809.0A (granted as CN112561908B)
Legal status: Active

Classifications

    • G06T 7/0012: Image analysis; biomedical image inspection
    • G06F 18/24323: Pattern recognition; classification techniques; tree-organised classifiers
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/08: Neural networks; learning methods
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/10116: Image acquisition modality; X-ray image
    • G06T 2207/20081: Training; learning
    • G06T 2207/20221: Image fusion; image merging
    • G06T 2207/30068: Mammography; breast

Abstract

Existing breast lesion detection algorithms detect lesions only on a single image and cannot relate lesions across the two views of the same breast. To address this, the invention provides a multi-stage breast lesion matching method that establishes a matching relationship between lesions detected on different views of the same breast, which helps to judge lesion category and attributes more accurately and to eliminate some false-positive lesions.

Description

Breast image lesion matching method, device and storage medium
Technical Field
The invention relates to the field of image processing, and in particular to a method for matching lesions across breast images taken at different views.
Background
Breast cancer is the malignant tumor with the highest incidence among women. More than 270,000 new breast cancer cases are diagnosed in China every year, the incidence is rising year by year, and women's health is seriously threatened. Early diagnosis of breast cancer is critical: accurate early diagnosis can raise the 5-year survival rate of breast cancer patients from 25% to 99%. Mammography (breast X-ray screening) is considered the preferred method for breast cancer screening. At present, reading mammograms relies on subjective diagnosis, whose overall accuracy is not high enough and is limited by the reader's skill. Compared with the personal experience of a medical expert, an artificial intelligence recognition algorithm can identify lesions in X-ray images more quickly, efficiently, and accurately, assist doctors in clinical diagnosis, and save clinicians and radiologists a great deal of effort.
Mammography maps a three-dimensional breast onto a two-dimensional plane, so normal tissue can pile up and overlap into patterns that resemble masses, especially when a single view is read alone. A common clinical practice is therefore to analyze multiple mammographic images of the same patient together to improve the diagnosis rate of breast lesions. Each breast is imaged at two different views (CC and MLO), so a lesion is generally reflected in both: a lesion seen on the CC view and one seen on the MLO view may be different projections of the same lesion. When writing a report, the doctor gives the location of a lesion on both views; that is, if a lesion is found on both views, the lesions must be matched to determine which detections on the two views correspond to the same lesion.
Existing breast lesion detection and analysis algorithms usually analyze a patient's four mammographic images (the CC and MLO views of both breasts) and output a lesion detection result for each image separately, but they cannot provide the matching relationship between the lesions. This matching relationship is very important: it locates the lesion more accurately in three-dimensional space, and it also helps the doctor judge the lesion's attributes and give a correct BI-RADS classification.
Disclosure of Invention
In view of at least one of the above technical problems, the invention provides a method for matching lesions across mammographic images taken at different views. The multi-stage method establishes a matching relationship between lesions detected on different views of the same breast, which helps to judge lesion category and attributes more accurately and to eliminate some false-positive lesions.
In view of the above, an embodiment of the first aspect of the invention provides a lesion matching method for breast images, including:
acquiring images of a breast X-ray examination taken at different views during the same examination, the views including a craniocaudal (CC) view image and a mediolateral oblique (MLO) view image;
acquiring a first lesion and its corresponding lesion information from the CC image, and a second lesion and its corresponding lesion information from the MLO image, the lesion information including: lesion position, lesion size, lesion category, and lesion probability;
determining a CC lesion mask image corresponding to the first lesion and an MLO lesion mask image corresponding to the second lesion;
generating a CC lesion composite image by treating the CC image and the CC lesion mask image as separate channels, and an MLO lesion composite image by treating the MLO image and the MLO lesion mask image as separate channels;
processing the CC lesion composite image and the MLO lesion composite image with a pre-trained quadrant classification network to determine the lesion quadrants corresponding to the first lesion and the second lesion;
processing the CC lesion composite image and the MLO lesion composite image with a pre-trained depth regression network to determine the lesion depths corresponding to the first lesion and the second lesion;
taking the lesion position, lesion size, lesion category, lesion probability, lesion quadrant, and lesion depth corresponding to the first lesion and the second lesion as lesion features, and determining, with a gradient boosting decision tree (GBDT) model, a matching score that the first lesion and the second lesion are the same lesion.
Further, the step of acquiring the first lesion and its corresponding lesion information from the CC image includes processing the CC image with a first breast lesion detection model, and the step of acquiring the second lesion and its corresponding lesion information from the MLO image includes processing the MLO image with a second breast lesion detection model.
In another alternative embodiment, the first lesion and its corresponding lesion information are obtained from manual annotations on the CC image, and the second lesion and its corresponding lesion information from manual annotations on the MLO image.
Preferably, the lesion depth is a normalized distance from the lesion to the nipple in the image.
Preferably, in the lesion matching step, the lesion features of the first lesion and of the second lesion are concatenated, and the concatenated lesion features are used as the input of the gradient boosting decision tree model.
Optionally, the first lesion and the second lesion may each comprise multiple lesions. If multiple lesions exist in both the CC image and the MLO image, a matching score is determined for each pair of a first lesion and a second lesion, yielding a matching score matrix, and the matching relationship between the first lesions and the second lesions is determined from that matrix.
Still further, the step of determining the matching relationship between lesions from the matching score matrix comprises:
S1: determining the maximum matching score in the matching score matrix;
S2: comparing the maximum matching score with a preset matching threshold;
S3: if the maximum matching score is greater than the matching threshold, declaring the two lesions corresponding to it a matched pair, removing its row and column from the matching score matrix to obtain a new matrix, and returning to step S1;
S4: if the maximum matching score is below the threshold, determining that no matched lesions remain in the current matrix, and outputting the matched lesion results.
An embodiment of another aspect of the invention provides an image processing apparatus, including:
an image acquisition unit for acquiring images of a breast X-ray examination taken at different views during the same examination, the views including a craniocaudal (CC) view image and a mediolateral oblique (MLO) view image;
a lesion acquisition unit for acquiring a first lesion and its corresponding lesion information from the CC image and a second lesion and its corresponding lesion information from the MLO image, the lesion information including: lesion position, lesion size, lesion category, and lesion probability;
a mask image generating unit for determining a CC lesion mask image corresponding to the first lesion and an MLO lesion mask image corresponding to the second lesion;
an image synthesis unit for generating a CC lesion composite image from the CC image and the CC lesion mask image as separate channels, and an MLO lesion composite image from the MLO image and the MLO lesion mask image as separate channels;
a quadrant detection unit for determining the lesion quadrants corresponding to the first lesion and the second lesion;
a depth detection unit for determining the lesion depths corresponding to the first lesion and the second lesion;
and a lesion matching unit for determining, from the lesion position, size, category, probability, quadrant, and depth corresponding to the first lesion and the second lesion, a matching score that the first lesion and the second lesion are the same lesion.
In yet another aspect, another embodiment of the invention provides an electronic device including a processor and a memory, the processor connected to the memory, the memory storing a computer program, and the processor calling the computer program to execute the lesion matching method of the preceding embodiments.
In yet another aspect, a further embodiment of the invention provides a computer storage medium storing one or more first instructions adapted to be loaded by a processor to perform the lesion matching method of the preceding embodiments.
With this technical scheme, lesions can be detected better by using several mammographic views together. The lesion quadrant classification and depth regression of the scheme help the doctor determine a lesion's quadrant and depth, and the quadrant and depth in turn support lesion matching.
Because the information obtained about a lesion from one view is limited, diagnosing a lesion with both the MLO and CC views is usually more reliable. Lesion matching associates the MLO view with the CC view and determines which detections are projections of the same lesion on different views, helping to judge lesion category, attributes, and so on more accurately. In addition, when a lesion has a low detection probability and no matching lesion can be found in the other view, it is probably caused by overlapping glandular tissue rather than being a true lesion, so lesion matching effectively eliminates some false positives. Finally, although GBDT training is iterative, prediction over the trees can run in parallel, so the scheme saves computation time at deployment.
Drawings
FIG. 1 shows an example of breast images taken at different views;
fig. 2 is a schematic diagram illustrating a breast image lesion matching method according to a first embodiment of the present invention;
FIG. 3 is a diagram illustrating the acquisition of lesion quadrants in the method according to the first embodiment of the invention;
FIG. 4 is a diagram illustrating the acquisition of lesion depth in the method according to the first embodiment of the invention;
FIG. 5 is a diagram illustrating a matching score matrix obtained according to lesion features in a method according to a first embodiment of the present invention;
FIG. 6 is a diagram illustrating a decision tree in a method according to a first embodiment of the invention;
FIG. 7 is a diagram illustrating another decision tree in the method according to the first embodiment of the invention;
FIG. 8 is a diagram illustrating ROC curves on a test set according to a first embodiment of the present invention;
fig. 9 shows a schematic block diagram of an image processing apparatus according to a second embodiment of the present invention;
fig. 10 shows a schematic block diagram of an electronic device according to a third embodiment of the invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth to provide a thorough understanding of the invention; however, the invention may be practiced otherwise than as specifically described herein, so the scope of the invention is not limited by the specific embodiments disclosed below.
Example one
Fig. 2 shows a schematic diagram of a breast image lesion matching method according to the first embodiment of the invention.
As shown in fig. 2, this embodiment uses breast X-ray images to illustrate the invention. The method is not limited to breast X-ray images (including ordinary X-ray images and molybdenum-target X-ray images) and can also be applied to images from other medical imaging modalities used in breast examination, such as color ultrasound, CT, and magnetic resonance.
The breast image lesion matching method comprises the following steps:
S201: acquiring images of a breast examination taken at different views during the same examination, the views including a craniocaudal (CC) view image and a mediolateral oblique (MLO) view image;
in the mammography examination, the craniocaudal position and the internal and external oblique positions (also called as the internal and external oblique positions) are the first-choice detection body positions, also called as standard body positions, and can generally meet the requirements of the mammography examination. In the inner and outer oblique position detection, the angle of the photographic platform is parallel to the outer edge of the pectoralis major of a detected person, generally 30-70 degrees, the movable tissue is fully moved to the fixed tissue and then is kept under real pressure, and X rays are projected from the inner part to the outer part downwards, so that the upper and lower spatial positions of limited lesions can be roughly determined. Standard medial-lateral oblique images have the greatest chance of imaging all breast tissue in a single position, and deep tissue on the outside of the breast can also be displayed. The head and tail positions are supplement positions of the inner and outer oblique positions. The angle of the photographic platform is 0, the mammary gland is fully supported to eliminate the dead zone of the upper fixed tissue and then the pressure is kept, X-rays are projected from top to bottom, the internal and external space positions of the localized lesion can be determined, and the tissue inside the mammary gland can be completely displayed.
This step does not restrict how the CC and MLO images are obtained: the images may be acquired directly after one mammographic examination, retrieved from a database of past examinations, or taken from radiographs provided by the patient. Whatever the source, the CC image and the MLO image must come from the same examination of the same breast.
The breast image in this embodiment may also be a molybdenum target X-ray image, a color ultrasound image, a CT image, or an MRI image.
S202: acquiring a first lesion and its corresponding lesion information from the CC image, and a second lesion and its corresponding lesion information from the MLO image.
This step acquires the lesions and their information from the CC and MLO images obtained in step S201. Many methods exist for detecting lesions in a single breast image, and the invention does not restrict the choice: any prior-art method that can detect a lesion in a breast image and acquire its lesion information may be used.
Traditional detection methods rely on image features, texture features, lesion edge detection, wavelet analysis, and so on, but their detection efficiency, accuracy, and lesion-type recognition fall short of ideal requirements. Beyond the traditional methods, object detection based on neural networks and machine learning performs well in areas where traditional methods cannot achieve good results. For example, training a neural network on breast images annotated with accurate lesions yields a good classifier. With suitable adjustment, the network can directly output the required lesion information, such as lesion position, lesion size, lesion type, and lesion probability. Common network architectures include convolutional neural networks (CNN), recurrent neural networks (RNN), ResNet, YOLO, and Faster R-CNN. The scheme is not limited to these; any prior-art object detection network and any feasible model training method may be used.
The lesion information needed in the subsequent steps includes: lesion position, lesion size, lesion category, and lesion probability. The lesion position may be expressed as the coordinates of the lesion center in the image, or as the position of the lesion center relative to the nipple. The lesion size may be computed from the physical size the image represents; a neural-network lesion detector typically outputs the lesion region as a rectangular target box that completely covers it. Because a lesion is generally an irregular shape, its size may be represented by the area of the lesion region in the image, or by the length of the lesion at its widest extent. Lesion categories include common classifications such as mass, calcification, and tumor. The lesion probability is the probability the algorithm assigns to the detected target being a lesion of the recognized type: after recognizing a lesion, the algorithm classifies it and gives the probability that it belongs to that category.
Instead of a prior-art breast lesion detection algorithm, the invention can also use lesion analysis results given directly by a doctor: the doctor can outline the lesion region in the image with medical software and assign the lesion type and probability corresponding to that region, while the software computes the position and size of the region directly.
In general, a breast image may contain more than one lesion region, so the first lesion and the second lesion mentioned in step S202 may each denote a set of several lesions, possibly of different kinds; in that case the algorithm gives the position, size, lesion type, and probability of every lesion in the first and second sets.
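Concretely, each detected lesion can be carried through the pipeline as one record holding the four detection-time fields plus the two fields filled in by steps S205 and S206. The following Python sketch is purely illustrative; the patent specifies the information but not its representation, so every name here is an assumption.

```python
from dataclasses import dataclass

@dataclass
class Lesion:
    """One detected lesion (illustrative names; fields per step S202)."""
    x: float             # lesion-center column in the image
    y: float             # lesion-center row in the image
    width: float         # width of the rectangular target box
    height: float        # height of the rectangular target box
    category: int        # e.g. 0 = mass, 1 = calcification, 2 = tumor
    probability: float   # detector's confidence in `category`
    quadrant: int = -1   # filled in by the quadrant classifier (S205)
    depth: float = -1.0  # filled in by the depth regressor (S206), in [0, 1]
```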
S203: determining a CC lesion mask image corresponding to the first lesion and an MLO lesion mask image corresponding to the second lesion;
In the field of image processing, masks are commonly used to extract target regions. The lesion region was already acquired in step S202, so the mask image in this step can be generated from it. One option is to binarize the lesion region against the rest of the image directly, setting the lesion region to 1 and the non-lesion region to 0; alternatively, the non-lesion region can be set to black and the lesion region to a preset gray level.
When an image contains several lesions, a separate mask image is generated for each lesion.
In subsequent steps, the lesion mask image is combined with the original image to indicate where the lesion lies in the image.
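A minimal sketch of the binarized mask option, assuming the lesion region is the rectangular target box returned by the detector; the function name and the (x, y, width, height) box layout are assumptions, not from the patent.

```python
import numpy as np

def lesion_mask(image_shape, box):
    """Binary mask for one lesion: 1 inside its target box, 0 elsewhere."""
    mask = np.zeros(image_shape, dtype=np.uint8)
    x, y, w, h = (int(v) for v in box)  # top-left corner plus extents
    mask[y:y + h, x:x + w] = 1
    return mask
```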
S204: generating a CC lesion composite image by treating the CC image and the CC lesion mask image as separate channels; generating an MLO lesion composite image by treating the MLO image and the MLO lesion mask image as separate channels;
As shown in fig. 3, the original image and the lesion mask image are combined as separate channels to obtain a lesion composite image that carries the lesion's position, size, and shape while retaining all the information and features of the original image.
For multi-channel synthesis, the original image and the lesion mask image are single-channel grayscale images of the same size, so they can be stacked directly as two channels, yielding a two-channel lesion composite image the same size as the original input image.
Alternatively, the original breast grayscale image and the mask image can be blended by weighted addition with preset weights: for example, setting the weights to 0.5 and 0.5, multiplying each image matrix by its weight, and adding the results. Other weights may be chosen, such as 0.6 and 0.4, or 0.7 and 0.3, and the weights need not sum to 1; these are only examples, and the scheme is not limited in this respect.
When an image contains several lesions, step S203 generates a separate mask image for each lesion, this step combines each mask image with the original image, and in the end every lesion has its own corresponding lesion composite image.
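Both composition variants can be sketched as below: channel stacking by default, weighted blending when weights are given. This is illustrative only; the channel-first output order and the names are assumptions.

```python
import numpy as np

def compose(image, mask, weights=None):
    """Combine a grayscale image with a same-sized lesion mask.

    weights=None stacks them as two channels; otherwise the two arrays
    are blended by weighted addition, e.g. weights=(0.5, 0.5).
    """
    if weights is None:
        return np.stack([image, mask], axis=0)  # shape (2, H, W)
    w_img, w_mask = weights
    return w_img * image.astype(np.float32) + w_mask * mask.astype(np.float32)
```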
S205: processing the CC lesion composite image and the MLO lesion composite image with a pre-trained quadrant classification network to determine the lesion quadrants corresponding to the first lesion and the second lesion;
Referring to fig. 3, step S204 combined the original image and the lesion mask (which indicates the lesion position) into a lesion composite image with the mask as a second channel; this step uses a trained quadrant classification network to classify the quadrant of the lesion in the composite image. MLO lesions are divided into upper, middle, lower, axillary tail, and areola regions; CC lesions are divided into outer, middle, inner, and areola regions.
Clinically, for convenience of diagnosis and treatment, the breast is divided into seven regions: a horizontal line and a vertical line drawn through the nipple divide the breast into four quadrants (upper inner, lower inner, upper outer, and lower outer); the projection directly behind the nipple and areola is the central region; the retroareolar region is the anterior third of the central region nearest the nipple; and the protruding portion extending from the upper outer breast toward the armpit is the axillary tail of the breast. These seven clinical regions correspond to regions of the CC and MLO images as follows: the upper inner quadrant corresponds to the upper region on the MLO view and the inner region on the CC view; the lower inner quadrant to the lower region on the MLO view and the inner region on the CC view; the upper outer quadrant to the upper region on the MLO view and the outer region on the CC view; the lower outer quadrant to the lower region on the MLO view and the outer region on the CC view; the central region to the middle region on both views; the retroareolar region to the areola region on both views; and the axillary tail to the axillary-tail region on the MLO view, with no corresponding region on the CC view. This yields the quadrant labels used in step S205.
In this step, a ResNet can serve as the classifier that assigns the lesion quadrant. The training samples can be annotated lesion composite images: for images whose lesion positions and quadrants have been determined, lesion composite images are generated as in steps S203-S204, and the ResNet is trained with the quadrants on the image as labels. Because the quadrant schemes of the CC and MLO images differ, separate ResNets must be trained on separate sample data so that correct quadrant assignment is obtained for each view.
When several different lesions exist in an image, a lesion mask image and a lesion composite image are generated separately for each lesion; the model in step S205 is trained on composite images generated in the same way, then classifies each lesion composite image and outputs the quadrant of the lesion corresponding to that composite image.
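The patent names ResNet as one workable classifier but gives no implementation. A hedged sketch using torchvision's resnet18 follows: the first convolution is widened to accept the two-channel composite, and the head is sized to the view's quadrant scheme (4 classes for CC, 5 for MLO, per the regions listed above). Everything else about the architecture is an assumption.

```python
import torch.nn as nn
from torchvision.models import resnet18

def make_quadrant_classifier(num_quadrants: int) -> nn.Module:
    """ResNet-18 classifier over 2-channel (image + mask) composites."""
    model = resnet18(weights=None)
    # Accept the two-channel lesion composite instead of 3-channel RGB.
    model.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)
    model.fc = nn.Linear(model.fc.in_features, num_quadrants)
    return model

cc_model = make_quadrant_classifier(4)   # outer / middle / inner / areola
mlo_model = make_quadrant_classifier(5)  # upper / middle / lower / axillary tail / areola
```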
S206: processing the CC lesion composite image and the MLO lesion composite image with a pre-trained depth regression network to determine the lesion depths corresponding to the first lesion and the second lesion;
Referring to fig. 4, the lesion composite images generated in step S204 are also processed with a pre-trained depth regression network to obtain the depth of each lesion.
In this step, a ResNet can serve as the regressor that predicts the lesion depth. The training samples can be annotated lesion composite images: for original images whose lesion depths have been determined, lesion composite images are generated as in steps S203-S204, and the ResNet is trained with the lesion depth as the label. Here the CC and MLO images may be trained and predicted with the same ResNet or with different ones.
Neural networks other than ResNet may also be used for training and prediction.
The lesion depth is the distance from the lesion to the nipple. Preferably, this step uses a normalized distance: a depth of 0 means the lesion lies at the nipple, and a depth of 1 means it lies at the pectoralis major.
When there are several lesions in an image then, as in steps S203-S205, a composite image is generated for each lesion, and each composite image is processed to obtain the depth of its lesion.
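The depth regressor can reuse the same backbone with a single-output head. In this sketch the sigmoid output and an MSE training loss are assumptions, chosen only so the prediction stays in the normalized [0, 1] range defined above.

```python
import torch.nn as nn
from torchvision.models import resnet18

def make_depth_regressor() -> nn.Module:
    """ResNet-18 regressor predicting normalized nipple-to-lesion depth."""
    model = resnet18(weights=None)
    model.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)
    model.fc = nn.Sequential(
        nn.Linear(model.fc.in_features, 1),
        nn.Sigmoid(),  # 0 = at the nipple, 1 = at the pectoralis major
    )
    return model  # train with e.g. nn.MSELoss() against annotated depths
```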
S207: taking the lesion position, lesion size, lesion category, lesion probability, lesion quadrant, and lesion depth corresponding to the first lesion and the second lesion as lesion features, and determining, with a gradient boosting decision tree (GBDT) model, a matching score that the first lesion and the second lesion are the same lesion.
Referring to fig. 5, this step takes as each lesion's features the position, size, category, and probability obtained in the detection step, the quadrant from step S205, and the depth from step S206. The CC and MLO lesion features are combined pairwise, each feature pair is concatenated, and the gradient boosting decision tree (GBDT) method predicts a score that the two lesions are the same lesion.
GBDT is an ensemble model that trains multiple regression trees and sums all the trees' predictions as the final prediction. Training takes the concatenated features as input and the match indicator as the label: if the two lesions are the same pair, the label is 1, otherwise 0, and a first regression tree is fit. Then, still with the concatenated features as input but with the residual between the true label and the first tree's prediction as the new label, a second regression tree is fit. Each subsequent tree is fit to the residual between the true label and the sum of all previous trees' predictions, and this iteration yields a set of decision tree models that jointly participate in prediction.
At prediction time, the features are fed to every regression tree in parallel; each tree gives a matching score, and the predictions of all the trees are summed to give the matching score of the lesion pair.
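The residual-fitting procedure described above is standard gradient boosting with a squared loss over 0/1 labels, so an off-the-shelf implementation can stand in for it. This sketch uses scikit-learn's GradientBoostingRegressor; the hyperparameters and the concatenation order (CC features first) are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def train_matcher(cc_feats, mlo_feats, labels, n_trees=100):
    """Fit a GBDT on concatenated (CC, MLO) lesion-feature pairs.

    labels[i] is 1 if row i is a true pair, 0 otherwise; each tree
    fits the residual left by the sum of the previous trees.
    """
    X = np.hstack([cc_feats, mlo_feats])
    model = GradientBoostingRegressor(n_estimators=n_trees, max_depth=3)
    model.fit(X, labels)
    return model

def match_score(model, cc_feat, mlo_feat):
    """Sum of all trees' scores for one candidate pair."""
    pair = np.hstack([cc_feat, mlo_feat]).reshape(1, -1)
    return float(model.predict(pair)[0])
```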
When the images contain several lesions, every lesion in the CC image is paired with every lesion in the MLO image, the GBDT model scores all pairs, and the scores are assembled into a matching score matrix in which each element is the matching score of two lesions. The matching relationship between the MLO lesions and the CC lesions is then obtained with a greedy algorithm: find the largest element in the score matrix; if it exceeds a preset matching threshold, output the two corresponding lesions as a matched pair, then remove that element's row and column from the matrix to obtain a new matrix, find the largest element in the new matrix, and so on, until all pairs are found. Lesions left without a match, or whose matching probability is too small, are treated as lesions that appear in only one view and are given no matching relation.
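The greedy pairing just described can be written directly against the score matrix. A minimal sketch, with the matching threshold left as a parameter; retiring rows and columns in place (rather than deleting them) keeps the original lesion indices valid:

```python
import numpy as np

def greedy_match(scores: np.ndarray, threshold: float):
    """Greedily pair CC lesions (rows) with MLO lesions (columns)."""
    scores = scores.astype(np.float64)  # work on a copy
    pairs = []
    while scores.size:
        i, j = np.unravel_index(np.argmax(scores), scores.shape)
        if scores[i, j] <= threshold:
            break  # no remaining score clears the threshold
        pairs.append((i, j))    # lesion i (CC) matches lesion j (MLO)
        scores[i, :] = -np.inf  # retire this CC lesion ...
        scores[:, j] = -np.inf  # ... and this MLO lesion
    return pairs  # unpaired lesions appear in one view only
```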
An example of training the GBDT model follows.
The GBDT trains several different decision trees; each tree assigns its own score to a lesion pair's features, and the scores from all trees are added to give the final score. The figures show examples of some of the trees and explain their meaning in detail; in the figures, a node's left child means "no" and its right child means "yes".
As shown in fig. 6, which depicts the 1st decision tree in the GBDT model, suppose an MLO lesion has height 324, width 293, depth 0.73, mass probability 0.99, and calcification probability 0.32, and a CC lesion has height 272, width 241, depth 0.97, mass probability 0.99, and calcification probability 0.22.
This tree first asks whether the width of the CC lesion is greater than 219; the answer is yes, so evaluation moves to the right child, which asks whether the height of the MLO lesion is greater than 197. The answer is again yes, so evaluation moves to the right child once more, which asks whether the calcification probability of the MLO lesion is greater than 0.26. Since 0.32 is greater than 0.26, the pair finally reaches a leaf with score 2.18.
As another example, fig. 7 shows the 7th decision tree, evaluated on the same lesion pair. This tree first asks whether the calcification probability of the MLO lesion is greater than 0.33; the answer is no, so evaluation moves to the left child, which asks whether the mass probability of the CC lesion is greater than 0.74. The answer is yes, so evaluation moves to that node's right child and finally reaches a leaf with score 1.03.
The pair therefore scores 2.18 on the 1st tree and 1.03 on the 7th tree, and the final total is obtained by summing the scores of all the trees. The higher the final score, the more likely the two detections are the same lesion.
As shown in fig. 8, when the GBDT algorithm of the invention is used for lesion matching, an ROC curve, a common way of evaluating a classification algorithm, can be obtained by testing the effect on a test set.
Experimental results can also be measured with the FROC (Free-Response Receiver Operating Characteristic) criterion commonly used for machine learning algorithms. Specifically, the FROC criterion here describes the relationship between recall (the number of detected lesions in all tested breast image pairs divided by the number of lesions in all tested breast image pairs) and the average number of false-positive lesions per breast image pair (the number of detections predicted as lesions divided by the number of tested breast image pairs, FP per image pair). The results are shown in the following table:
(FROC results table omitted: it appears only as an embedded image, Figure BDA0002855311640000121, in the original publication.)
in the embodiments of the present invention, a mammogram or a molybdenum target mammogram is generally taken as an example, and lesion matching is performed on different types and positions of lesions detected in the mammogram. In the breast examination, since the information of the lesion obtained on one site is limited, the diagnosis of the lesion using the information of the two sites, i.e., the MLO site and the CC site, is often more effective. The lesion matching can associate the MLO position with the CC position, judge which lesions are the projections of the same lesion on different positions, and is beneficial to more accurate judgment of lesion categories, attributes and the like. In addition, when the detection probability of one focus is low and a matched focus cannot be found in another position, the focus is probably caused by the overlapping of glands and is not a true focus, so that the focus matching can effectively eliminate a part of false positives.
Example two
As shown in fig. 9, a second embodiment of the invention provides an image processing apparatus, which may be a computer program (including program code) running in a terminal. The apparatus can execute the breast image lesion matching method of the first embodiment and specifically includes:
an image acquisition unit for acquiring images of a breast X-ray examination taken at different views during the same examination, the views including a craniocaudal (CC) view image and a mediolateral oblique (MLO) view image;
a lesion acquisition unit for acquiring a first lesion and its corresponding lesion information from the CC image and a second lesion and its corresponding lesion information from the MLO image, the lesion information including: lesion position, lesion size, lesion category, and lesion probability;
a mask image generating unit for determining a CC lesion mask image corresponding to the first lesion and an MLO lesion mask image corresponding to the second lesion;
an image synthesis unit for generating a CC lesion composite image from the CC image and the CC lesion mask image as separate channels, and an MLO lesion composite image from the MLO image and the MLO lesion mask image as separate channels;
a quadrant detection unit for determining the lesion quadrants corresponding to the first lesion and the second lesion;
a depth detection unit for determining the lesion depths corresponding to the first lesion and the second lesion;
and a lesion matching unit for determining, from the lesion position, size, category, probability, quadrant, and depth corresponding to the first lesion and the second lesion, a matching score that the first lesion and the second lesion are the same lesion.
The units of the image processing apparatus may be merged, wholly or in part, into one or several other units, or a unit may be split into several functionally smaller units, without affecting the technical effects of the embodiments of the invention. The division is based on logical function; in practice, the function of one unit may be realized by several units, or the functions of several units by one unit. In other embodiments of the invention, the apparatus may include other units as well, and in practice these functions may be realized with the assistance of, or jointly by, several units.
According to another embodiment of the invention, the image processing apparatus shown in fig. 9 may be constructed, and the method of the first embodiment implemented, by running a computer program (including program code) capable of executing the steps of that method on a general-purpose computing device, such as a computer comprising a central processing unit (CPU), random access memory (RAM), read-only memory (ROM), and other storage elements. The computer program may, for example, be recorded on a computer-readable recording medium, and loaded into and executed by the computing device via that medium.
Example three
As shown in fig. 10, a third embodiment of the invention provides an electronic device comprising a processor and a memory; the processor is connected to the memory, the memory stores a computer program, and the processor calls the computer program to execute the breast image lesion matching method of the first embodiment.
The electronic device in this embodiment may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., car navigation terminals), as well as fixed terminals such as digital TVs, desktop computers, and medical image acquisition apparatus. The terminal device shown in fig. 10 is only an example and should not limit the functions or scope of use of the embodiments of this disclosure.
As shown in fig. 10, the terminal device may include a processing means 601 (e.g., a central processing unit or a graphics processor) that performs various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or loaded from a storage means 608 into a random access memory (RAM) 603. The RAM 603 also stores the various programs and data the terminal device 600 needs to operate. The processing means 601, the ROM 602, and the RAM 603 are connected to one another via a bus 604, to which an input/output (I/O) interface 605 is also connected.
Generally, the following may be connected to the I/O interface 605: input devices 606 such as a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, or gyroscope; output devices 607 such as a liquid crystal display (LCD), speaker, or vibrator; storage devices 608 such as tape or hard disk; and a communication device 609, which allows the terminal device 600 to exchange data with other devices over wireless or wired connections. Although fig. 10 shows a terminal device 600 with many means, not all of them need be implemented or provided; more or fewer may be implemented or provided instead.
Example four
An embodiment of the invention provides a computer-readable storage medium storing one or more first instructions adapted to be loaded by a processor to perform the lesion matching method of the preceding embodiments.
It should be noted that the computer-readable storage medium of the invention may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of these. More specific examples include, without limitation: an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of these. In this disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by, or in connection with, an instruction execution system, apparatus, or device. By contrast, a computer-readable signal medium may be a propagated data signal, in baseband or as part of a carrier wave, carrying computer-readable program code; such a signal may take many forms, including but not limited to electromagnetic or optical signals, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium, other than a storage medium, that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to electrical wire, optical cable, RF (radio frequency), or any suitable combination of these.
The computer-readable medium may be included in the terminal device, or it may exist separately without being assembled into the terminal device.
The computer-readable medium carries one or more programs which, when executed by the electronic device of the invention, cause it to execute the breast lesion image processing method of the invention: acquire breast images at different views, acquire the lesions and lesion information in the images, generate lesion mask images, combine them with the original images, obtain lesion quadrants and lesion depths from the combined images, and determine from the lesion information, lesion quadrant, lesion depth, and other features whether the lesions in the different views match.
The steps in the methods of the embodiments of the invention may be reordered, combined, and deleted according to actual needs.
The units in the apparatus of the embodiments of the invention may be merged, divided, and deleted according to actual needs.
Those skilled in the art will understand that all or part of the steps of the above methods may be implemented by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, including read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disc storage, magnetic disk or magnetic tape storage, or any other computer-readable medium that can carry or store data.
The technical solutions of the invention have been described in detail with reference to the accompanying drawings. The above is only a preferred embodiment of the invention and does not limit it; various modifications and variations will be apparent to those skilled in the art, and any modification, equivalent replacement, or improvement made within the spirit and principles of the invention falls within its scope of protection.

Claims (10)

1. A lesion matching method for breast images, comprising:
acquiring images of a breast X-ray examination taken at different views during the same examination, the views including a craniocaudal (CC) view image and a mediolateral oblique (MLO) view image;
acquiring a first lesion and its corresponding lesion information from the CC image, and a second lesion and its corresponding lesion information from the MLO image, the lesion information including: lesion position, lesion size, lesion category, and lesion probability;
determining a CC lesion mask image corresponding to the first lesion and an MLO lesion mask image corresponding to the second lesion;
generating a CC lesion composite image by treating the CC image and the CC lesion mask image as separate channels, and an MLO lesion composite image by treating the MLO image and the MLO lesion mask image as separate channels;
processing the CC lesion composite image and the MLO lesion composite image with a pre-trained quadrant classification network to determine the lesion quadrants corresponding to the first lesion and the second lesion;
processing the CC lesion composite image and the MLO lesion composite image with a pre-trained depth regression network to determine the lesion depths corresponding to the first lesion and the second lesion;
taking the lesion position, lesion size, lesion category, lesion probability, lesion quadrant, and lesion depth corresponding to the first lesion and the second lesion as lesion features, and determining, with a gradient boosting decision tree (GBDT) model, a matching score that the first lesion and the second lesion are the same lesion.
2. The method of claim 1, wherein:
the step of acquiring the first lesion and its corresponding lesion information from the CC image comprises processing the CC image with a preset breast lesion detection model; and
the step of acquiring the second lesion and its corresponding lesion information from the MLO image comprises processing the MLO image with the preset breast lesion detection model.
3. The method of claim 2, wherein the preset breast lesion detection model is a trained neural network model whose output includes the lesion information.
4. The method of claim 2 or 3, wherein the lesion depth is a normalized distance from the lesion to the nipple.
5. The method of claim 4, further comprising:
and splicing the focus characteristics of the first focus and the focus characteristics of the second focus, and taking the spliced focus characteristics as the input of the gradient lifting decision tree model.
6. The method of claim 5, further comprising:
the first focus and the second focus comprise a plurality of focuses, the matching score of each focus in the first focus and each focus in the second focus is determined respectively, a matching score matrix is obtained, and the matching relation between each focus in the first focus and each focus in the second focus is determined according to the matching score matrix.
7. The method of claim 6, wherein determining the matching relationship between lesions according to the matching score matrix comprises (see the sketch after these steps):
S1: determining the maximum matching score in the matching score matrix;
S2: comparing the maximum matching score with a preset matching threshold;
S3: if the maximum matching score is greater than the matching threshold, determining the two lesions corresponding to the maximum matching score to be matched lesions, removing the row and the column containing the maximum matching score from the matching score matrix to obtain a new matching score matrix, and returning to step S1;
S4: otherwise, determining that no further matched lesions exist in the current matching score matrix, and outputting the matched lesion results.
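The S1-S4 loop of claim 7 is a greedy assignment over the matching score matrix of claim 6. A sketch follows; tie-breaking and the exact-equality case are not fixed by the claims, so here a score must strictly exceed the threshold to match.

```python
import numpy as np

def greedy_match(score_matrix, threshold):
    """Claim 7: repeatedly take the global maximum matching score, accept
    the pair if it exceeds the threshold, then delete its row and column."""
    scores = np.asarray(score_matrix, dtype=float)
    cc_ids = list(range(scores.shape[0]))    # lesions detected in the CC view
    mlo_ids = list(range(scores.shape[1]))   # lesions detected in the MLO view
    matches = []
    while scores.size:
        i, j = np.unravel_index(np.argmax(scores), scores.shape)      # S1
        if scores[i, j] <= threshold:                                 # S2 / S4
            break
        matches.append((cc_ids[i], mlo_ids[j], float(scores[i, j]))) # S3
        scores = np.delete(np.delete(scores, i, axis=0), j, axis=1)
        del cc_ids[i], mlo_ids[j]
    return matches  # list of (CC lesion index, MLO lesion index, score)
```

Any lesion index absent from the returned list is unmatched in the other view, which is the "no matching lesion" outcome of step S4.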
8. An image processing apparatus, comprising:
an image acquisition unit configured to acquire images of different views from the same mammography examination, the different views comprising a craniocaudal (CC) view image and a mediolateral oblique (MLO) view image;
a lesion acquisition unit configured to acquire a first lesion and corresponding lesion information in the CC-view image, and a second lesion and corresponding lesion information in the MLO-view image, wherein the lesion information comprises: lesion location, lesion size, lesion category, and lesion probability;
a mask image generation unit configured to determine a CC-view lesion mask image corresponding to the first lesion and an MLO-view lesion mask image corresponding to the second lesion;
an image synthesis unit configured to generate a CC-view lesion composite image by taking the CC-view image and the CC-view lesion mask image as separate channels, and an MLO-view lesion composite image by taking the MLO-view image and the MLO-view lesion mask image as separate channels;
a quadrant detection unit configured to determine the lesion quadrants of the first lesion and the second lesion;
a depth detection unit configured to determine the lesion depths of the first lesion and the second lesion;
and a lesion matching unit configured to determine, from the lesion location, lesion size, lesion category, lesion probability, lesion quadrant, and lesion depth of each of the first lesion and the second lesion, a matching score for the first lesion and the second lesion being the same lesion.
9. An electronic device, comprising: a processor and a memory, the processor being connected to the memory, wherein the memory is configured to store a computer program and the processor is configured to call the computer program to perform the method of any one of claims 1-7.
10. A computer storage medium, characterized in that the computer storage medium stores a computer program comprising program instructions which, when executed by a processor, perform the method according to any one of claims 1-7.
CN202011554809.0A 2020-12-24 2020-12-24 Mammary gland image focus matching method, device and storage medium Active CN112561908B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011554809.0A CN112561908B (en) 2020-12-24 2020-12-24 Mammary gland image focus matching method, device and storage medium

Publications (2)

Publication Number Publication Date
CN112561908A (publication) 2021-03-26
CN112561908B (grant) 2021-11-23

Family

ID=75033903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011554809.0A Active CN112561908B (en) 2020-12-24 2020-12-24 Mammary gland image focus matching method, device and storage medium

Country Status (1)

Country Link
CN (1) CN112561908B (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8553977B2 (en) * 2010-11-15 2013-10-08 Microsoft Corporation Converting continuous tone images
US20130122528A1 (en) * 2011-11-16 2013-05-16 Aspenbio Pharma, Inc. Compositions and methods for assessing appendicitis
CN104376199A (en) * 2014-11-05 2015-02-25 宁波市科技园区明天医网科技有限公司 Method for intelligently generating breast report lesion schematic diagram
CN109859217A (en) * 2019-02-20 2019-06-07 厦门美图之家科技有限公司 The dividing method in pore region and calculating equipment in facial image
CN109993733A (en) * 2019-03-27 2019-07-09 上海宽带技术及应用工程研究中心 Detection method, system, storage medium, terminal and the display system of pulmonary lesions
CN109993170A (en) * 2019-05-10 2019-07-09 图兮深维医疗科技(苏州)有限公司 A kind of bell figure of breast lesion shows device and equipment
CN110674885A (en) * 2019-09-30 2020-01-10 杭州依图医疗技术有限公司 Focus matching method and device
CN111430014A (en) * 2020-03-31 2020-07-17 杭州依图医疗技术有限公司 Display method, interaction method and storage medium of glandular medical image
CN111797267A (en) * 2020-07-14 2020-10-20 西安邮电大学 Medical image retrieval method and system, electronic device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU WEIDONG: "Research on Computer-Aided Diagnosis Techniques for Mammographic X-ray Images", China Doctoral and Master's Dissertations Full-text Database (Doctoral), Medicine and Health Sciences *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11957497B2 (en) 2017-03-30 2024-04-16 Hologic, Inc System and method for hierarchical multi-level feature image synthesis and representation
WO2023097279A1 (en) * 2021-11-29 2023-06-01 Hologic, Inc. Systems and methods for correlating objects of interest
CN114305503A (en) * 2021-12-09 2022-04-12 上海杏脉信息科技有限公司 Breast disease follow-up system, medium and electronic equipment
CN114820592A (en) * 2022-06-06 2022-07-29 北京医准智能科技有限公司 Image processing apparatus, electronic device, and medium
CN114820592B (en) * 2022-06-06 2023-04-07 北京医准智能科技有限公司 Image processing apparatus, electronic device, and medium
CN115018795A (en) * 2022-06-09 2022-09-06 北京医准智能科技有限公司 Method, device and equipment for matching focus in medical image and storage medium

Also Published As

Publication number Publication date
CN112561908B (en) 2021-11-23

Similar Documents

Publication Publication Date Title
CN112561908B (en) Mammary gland image focus matching method, device and storage medium
US11937962B2 (en) Systems and methods for automated and interactive analysis of bone scan images for detection of metastases
CN112767346B (en) Multi-image-based full-convolution single-stage mammary image lesion detection method and device
CN110473186B (en) Detection method based on medical image, model training method and device
WO2021179491A1 (en) Image processing method and apparatus, computer device and storage medium
US11684333B2 (en) Medical image analyzing system and method thereof
EP3973539A1 (en) System and method for interpretation of multiple medical images using deep learning
US11424021B2 (en) Medical image analyzing system and method thereof
JP2022164527A (en) Medical image-based tumor detection and diagnostic device
US11935234B2 (en) Method for detecting abnormality, non-transitory computer-readable recording medium storing program for detecting abnormality, abnormality detection apparatus, server apparatus, and method for processing information
Hu et al. A multi-instance networks with multiple views for classification of mammograms
CN114332120A (en) Image segmentation method, device, equipment and storage medium
CN109410170B (en) Image data processing method, device and equipment
Zhang et al. A new window loss function for bone fracture detection and localization in X-ray images with point-based annotation
Zhang et al. CCS-net: cascade detection network with the convolution kernel switch block and statistics optimal anchors block in hypopharyngeal cancer MRI
WO2022033598A1 (en) Breast x-ray radiography acquisition method and apparatus, and computer device and storage medium
CN110163195A (en) Liver cancer divides group's prediction model, its forecasting system and liver cancer to divide group's judgment method
KR102360615B1 (en) Medical image diagnosis assistance apparatus and method using a plurality of medical image diagnosis algorithm for endoscope images
US20200170624A1 (en) Diagnostic apparatus and diagnostic method
Zhang et al. Window loss for bone fracture detection and localization in x-ray images with point-based annotation
EP4296941A1 (en) Processing method of medical image and computing apparatus for processing medical image
US20230274424A1 (en) Appartus and method for quantifying lesion in biometric image
Gunasekara et al. A feasibility study for deep learning based automated brain tumor segmentation using magnetic resonance images
US11210848B1 (en) Machine learning model for analysis of 2D images depicting a 3D object
Ghosh et al. EMD based binary classification of mammograms with novel leader selection technique

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 3011, 2nd Floor, Building A, No. 1092 Jiangnan Road, Nanmingshan Street, Liandu District, Lishui City, Zhejiang Province, 323000

Patentee after: Zhejiang Yizhun Intelligent Technology Co.,Ltd.

Patentee after: Guangxi Yizhun Intelligent Technology Co.,Ltd.

Address before: 1106, 11 / F, Weishi building, No.39 Xueyuan Road, Haidian District, Beijing

Patentee before: Beijing Yizhun Intelligent Technology Co.,Ltd.

Patentee before: Guangxi Yizhun Intelligent Technology Co.,Ltd.