CN114092463A - Digital breast tomography focus positioning device - Google Patents

Digital breast tomography focus positioning device Download PDF

Info

Publication number
CN114092463A
CN114092463A (application CN202111434425.XA)
Authority
CN
China
Prior art keywords
module
image
node
network
focus
Prior art date
Legal status
Pending
Application number
CN202111434425.XA
Other languages
Chinese (zh)
Inventor
范明
简嘉豪
厉力华
郑惠中
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202111434425.XA priority Critical patent/CN114092463A/en
Publication of CN114092463A publication Critical patent/CN114092463A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30068 Mammography; Breast

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a lesion positioning device for digital breast tomography. The device comprises a data input module for acquiring labeled digital breast tomography images; an image preprocessing module that preprocesses the images; a data sorting module that divides the labeled images into a training set and a test set; a model adjusting module comprising a model training submodule, which trains an EfficientDet network on the training set, and a model testing submodule, which tests the trained network on the test set; and a region positioning module that uses a lesion fusion algorithm to merge the detection boxes of the same lesion across different tomographic slices, discards detections whose probability falls below a set threshold, and integrates the diagnosis labels of lesions at the same position. The invention can efficiently and accurately locate lesions in digital breast tomography images.

Description

Digital breast tomography focus positioning device
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a digital breast tomography focus positioning device.
Background
Digital mammography, the X-ray technique currently common in clinical practice, can suffer from tissue overlap when imaging dense breasts. In Digital Breast Tomography (DBT), the X-ray tube rotates around the breast within a limited angular range and performs one low-dose exposure at each angular step. Once the rotation is complete, the digital detector has acquired projection views of the breast from different angles, and the computer reconstructs a three-dimensional tomographic image using a simultaneous algebraic reconstruction technique. This reduces tissue overlap, helps avoid missed diagnoses or misdiagnoses, and allows lesions of different positions and morphologies to be observed within the dense breast, enabling a more accurate clinical diagnosis of breast cancer.
However, digital breast tomography produces tens of times more images than conventional radiography. Doctors must spend a great deal of time reading these images, and visual fatigue leads to missed and false detections, so finding lesion positions within massive image data is critically important. Research on lesion localization for digital breast tomography therefore has practical value for reducing the burden on doctors and providing accurate auxiliary results for clinical diagnosis.
Disclosure of Invention
To address these problems, the invention provides an efficient and accurate lesion positioning device for digital breast tomography. It takes deep learning as its technical core, combines a transfer-learning strategy, uses a Convolutional Neural Network (CNN) to extract image features, obtains candidate detection boxes through a region generation network, and predicts benign or malignant status with a classification network. The invention can compensate for the limited experience doctors currently have in reading DBT images, effectively reduce their workload, improve efficiency, and promote the clinical application and technical development of digital breast tomographic images in breast cancer imaging examination.
To achieve this purpose, the invention adopts the following technical scheme:
the invention discloses a digital breast tomography focus positioning device, which comprises a data input module, an image preprocessing module, a data sorting module, a model adjusting module and a region positioning module;
the data input module is used for acquiring a labeled digital breast tomography image;
the image preprocessing module preprocesses the digital breast tomography image transmitted by the data input module;
preferably, preprocessing of the digital breast tomography images comprises invalid-background removal and image denoising; invalid-background removal uses an edge-cropping technique to remove the invalid background regions of the image, and image denoising uses a mask algorithm to remove the pectoral-muscle region of the breast from the background-removed image.
The data sorting module is used for carrying out data division on the labeled digital breast tomography image transmitted by the image preprocessing module to obtain a training set and a testing set;
the model adjusting module comprises a model training submodule and a model testing submodule; the model training submodule trains the EfficientDet network by using training set data; the model testing sub-module tests the trained EfficientDet network by using the test set data;
the region positioning module uses a lesion fusion algorithm to merge the detection boxes of the same lesion across different tomographic slices, discards detection results whose probability falls below a set threshold, and integrates the diagnosis labels of lesions at the same position.
Preferably, the preprocessing method for the digital breast tomogram in the image preprocessing module comprises invalid background removal and image denoising.
The EfficientDet network comprises a backbone network, a characteristic pyramid module, a benign and malignant classification network and a detection frame prediction network;
1) the backbone network is EfficientNet-B0, consisting mainly of an input layer, convolutional layers, MBConv (Mobile Inverted Bottleneck Convolution) modules, a pooling layer, and a fully connected layer;
2) the feature pyramid module uses a BiFPN (Bi-directional Feature Pyramid Network) structure, which removes part of the convolutional layers of a PAFPN (Path Aggregation Feature Pyramid Network) and adds shortcut connections;
3) the benign/malignant classification network outputs the corresponding benign or malignant prediction and its probability from the fused feature map obtained by fusing the features output by the backbone network;
4) the detection-box prediction network outputs the lesion position of the detected target from the feature maps extracted by the feature pyramid network.
Preferably, the backbone network specifically includes:
each input image passes through the backbone network to produce a feature map, and the feature maps are grouped into sets by the breast they belong to, so that each set contains the feature maps of one breast's sequence of tomographic slices;
within one breast's tomographic sequence, the middle slices carry more information. Exploiting this with a normal (Gaussian) distribution, during feature fusion each feature map is weighted by the probability density corresponding to its slice position, and the weighted feature maps in the same set are summed to obtain the fused feature map, as given by Equation 1:
P_fused = Σ_x (1/(σ√(2π))) · exp(−(x − μ)² / (2σ²)) · P_x    (Equation 1)
wherein μ is the mean of the slice indices of the image sequence, σ is their standard deviation, x is the slice index of an image, and P_x is the feature map corresponding to that index.
Preferably, the feature pyramid module comprises five input nodes, three intermediate nodes and five output nodes; wherein the intermediate node X1 is obtained by fusing an input node P7 and an input node P6; the intermediate node X2 is obtained by fusing an input node P5 and an intermediate node X1; the intermediate node X3 is obtained by fusing an input node P4 and an intermediate node X2; the output node Y1 is obtained by fusing an input node P7 and an output node Y2; the output node Y2 is obtained by fusing an input node P6, an intermediate node X1 and an output node Y3; the output node Y3 is obtained by fusing an input node P5, an intermediate node X2 and an output node Y4; the output node Y4 is obtained by fusing an input node P4, an intermediate node X3 and an output node Y5; the output node Y5 is obtained by fusing an input node P3 and an intermediate node X3; thus, the stack of BiFPN forms a planar structure that is top-to-bottom, bottom-to-top, and then repeats.
Preferably, the lesion fusion algorithm of the region localization module specifically includes:
5-1, first judging whether a single tomographic slice has more than one lesion position; if so, calculating the intersection-over-union (IoU) between every two lesion positions, then judging whether the IoU is greater than a threshold; if so, screening out the lesion position with the lower confidence, and if not, keeping both;
5-2, arranging the slices remaining after step 5-1 in the acquisition order of the digital breast tomography scan, projecting them in turn onto the same plane, calculating the IoU between every two lesion positions of the remaining slices, then judging whether the IoU is greater than a threshold; if so, dividing the two lesion positions into the same set, and otherwise into different sets;
5-3, calculating the mean confidence of the lesion positions in each set, and judging whether the mean is below a threshold; if so, removing the set, and if not, keeping it;
5-4, since the lesion positions in one set refer to the same lesion, judging whether the number of lesion positions in the set is greater than a threshold; if so, keeping the set, otherwise deleting it; finally, outputting the lesion positions of the remaining sets.
The invention adopts a Deep Neural Network (DNN) structure from deep learning, combines a transfer-learning strategy, and automatically learns the features of digital breast tomographic images through model training to localize lesions. This greatly reduces doctors' workload, improves the diagnosis rate, promotes the clinical value of digital breast tomographic images, and supports a more efficient and standardized diagnostic workflow.
Drawings
FIG. 1 is a block diagram of the present invention;
FIG. 2 is a schematic view of the lesion position of the digital breast tomography in the present invention; (a) benign, (b) malignant;
FIG. 3 is a flow chart of the training and classification recognition of the model adjustment module of the present invention;
FIG. 4 is a flow chart of the positioning determination of the region positioning module 5 according to the present invention;
FIG. 5 is a structural diagram of an EfficientDet structure used by the model adjustment module of the present invention;
FIG. 6 is a diagram of the feature pyramid module of the present invention; (a) PAFPN, (b) BiFPN;
In the figures: 1 — data input module, 2 — image preprocessing module, 3 — data sorting module, 4 — model adjusting module, 5 — region positioning module.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments.
as shown in fig. 1, the digital breast tomography lesion positioning device includes a data input module 1, an image preprocessing module 2, a data sorting module 3, a model adjusting module 4, and a region positioning module 5; the output end of the data input module 1 is connected with the input end of the image preprocessing module 2, the output end of the image preprocessing module 2 is connected with the input end of the data sorting module 3, the output end of the data sorting module 3 is connected with the input end of the model adjusting module 4, and the output end of the model adjusting module 4 is connected with the input end of the region positioning module 5.
The data input module 1 acquires the pixel information of a patient's digital breast tomography images and converts it into a general image format; the image preprocessing module 2 preprocesses the images to improve their quality and remove invalid information; the data sorting module 3 divides the labeled images passed on by the preprocessing module into a training set and a test set; the model adjusting module 4 comprises a model training submodule and a model testing submodule: the training submodule trains the EfficientDet network with the training set, and the testing submodule tests the trained network with the test set, whose evaluation feeds back into training so that the model can be corrected through optimization, parameter tuning, and data-set adjustment; the region positioning module 5 uses a lesion fusion algorithm to merge the detection boxes of the same lesion across different tomographic slices, discards detections whose probability falls below a set threshold, and integrates the diagnosis labels of lesions at the same position.
The method for detecting and diagnosing lesions with digital breast tomography images comprises the following steps:
step 1: the data input module 1 acquires a digital breast tomography image data DICOM file with a label in fig. 2 and converts the file into a general image format; the label is the position of the breast lesion and the benign or malignant state; the digital breast tomograph is a plurality of tomograms of the whole breast in a certain space.
Step 2: the image preprocessing module preprocesses the digital breast tomography image obtained by the data input module;
the preprocessing specifically comprises removing the black invalid background regions at the edges with an edge-cropping technique, and removing the pectoral-muscle region of the mediolateral-oblique views in the digital breast tomography images with a mask algorithm;
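A minimal sketch of the edge-cropping step follows. The pectoral-muscle mask depends on the view geometry and is not specified in the patent, so only the background crop is shown; the intensity threshold is an assumption.

```python
import numpy as np

def crop_background(img, thresh=10):
    """Crop away near-black border rows/columns (edge-clipping sketch).
    img: 2-D array; thresh: intensities <= thresh count as background."""
    mask = img > thresh
    rows = np.any(mask, axis=1)          # rows containing breast tissue
    cols = np.any(mask, axis=0)          # columns containing breast tissue
    r0 = np.argmax(rows)                 # first non-background row
    r1 = len(rows) - np.argmax(rows[::-1])  # one past the last such row
    c0 = np.argmax(cols)
    c1 = len(cols) - np.argmax(cols[::-1])
    return img[r0:r1, c0:c1]
```

In practice the same bounding box would be applied to every slice of one breast so the stack stays aligned.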
Step 3: the data sorting module takes the preprocessed, labeled digital breast tomography images as the data set, one part of which is the training set and the other part the test set;
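The patent does not say how the data set is partitioned. One common precaution, sketched below under that assumption, is to split at the breast/case level so that slices of the same breast never land in both subsets:

```python
import random

def split_by_breast(case_ids, test_frac=0.2, seed=0):
    """Split case IDs into train/test so all slices of one breast
    stay in a single subset. case_ids: one ID per slice (duplicates OK)."""
    ids = sorted(set(case_ids))          # unique breasts
    rng = random.Random(seed)            # reproducible shuffle
    rng.shuffle(ids)
    n_test = max(1, int(len(ids) * test_frac))
    test = set(ids[:n_test])
    train = [i for i in ids if i not in test]
    return train, sorted(test)
```

Slices are then routed to the subset of their breast ID, preventing leakage of near-identical adjacent slices across the split.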
Step 4: the model training submodule in the model adjusting module (FIG. 3) trains the EfficientDet network with the training set data, and the model testing submodule tests the trained EfficientDet network with the test set data.
The EfficientDet network shown in fig. 5 includes a backbone network, a feature pyramid module, a benign and malignant classification network, and a detection box prediction network;
1) The backbone network is EfficientNet-B0, consisting mainly of an input layer, convolutional layers, MBConv modules, a pooling layer, a fully connected layer, and a Softmax classifier, and it outputs the image features; the specific parameter configuration of the network is shown in Table 1. Compared with other models, the key innovation of EfficientNet is the MBConv module structure obtained through Neural Architecture Search (NAS). Networks are conventionally expanded along a single dimension — more layers (depth), more channels (width), or higher input-image resolution — and under limited resources, scaling a single dimension yields diminishing accuracy gains. EfficientNet instead scales the model along all three dimensions jointly and determines the MBConv module via neural architecture search, which addresses this problem.
2) Feature fusion, which proceeds as follows:
each input image passes through the backbone network to produce a feature map, and the feature maps are grouped into sets by breast, so that each set contains the feature maps of one breast's sequence (1–N) of tomographic slices.
Within one breast's tomographic sequence, the middle slices carry more information. Exploiting this with a normal (Gaussian) distribution, during feature fusion each feature map is weighted by the probability density corresponding to its slice position, and the weighted feature maps in the same set are summed to obtain the fused feature map, as given by Equation 1:
P_fused = Σ_x (1/(σ√(2π))) · exp(−(x − μ)² / (2σ²)) · P_x    (Equation 1)
wherein μ is the mean of the slice indices of the image sequence, σ is their standard deviation, x is the slice index of an image, and P_x is the feature map corresponding to that index.
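Equation 1 can be sketched in NumPy as follows (slice indices 1..N; all feature maps of one breast are assumed to share a single shape):

```python
import numpy as np

def fuse_slices(feature_maps):
    """Gaussian-weighted fusion of one breast's per-slice feature maps.
    feature_maps: list of equally shaped arrays, ordered by slice index."""
    n = len(feature_maps)
    x = np.arange(1, n + 1, dtype=np.float64)   # slice indices 1..N
    mu, sigma = x.mean(), x.std()
    if sigma == 0:                               # single slice: no weighting
        weights = np.ones(n)
    else:
        # normal pdf evaluated at each slice index (Equation 1 weights)
        weights = np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) \
                  / (sigma * np.sqrt(2 * np.pi))
    # weighted sum over the set, as in Equation 1
    return sum(w * fm for w, fm in zip(weights, feature_maps))
```

The middle slices receive the largest weights, matching the observation that they express the most information.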
3) The feature pyramid module uses a BiFPN structure, which removes part of the convolutional layers of a PAFPN and adds shortcut connections, as shown in fig. 6. It further refines and fuses the feature maps output by the backbone network; compared with the common FPN structure, it has fewer parameters and higher accuracy.
The feature pyramid module comprises five input nodes, three intermediate nodes, and five output nodes. Intermediate node X1 is obtained by fusing input nodes P7 and P6; X2 by fusing P5 and X1; X3 by fusing P4 and X2. Output node Y1 is obtained by fusing P7 and Y2; Y2 by fusing P6, X1, and Y3; Y3 by fusing P5, X2, and Y4; Y4 by fusing P4, X3, and Y5; Y5 by fusing P3 and X3. Stacked BiFPN layers therefore form a planar structure that runs top-down, then bottom-up, repeatedly; because the top-down and bottom-up passes must run in sequence, information flow can be impeded, which affects target-detection performance.
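The node wiring above can be sketched in plain Python. Here `fuse` is a placeholder element-wise mean over same-shape arrays; a real BiFPN uses learned fusion weights, convolutions, and resampling between pyramid levels, all of which this sketch omits.

```python
import numpy as np

def fuse(*feats):
    """Placeholder fusion: element-wise mean of same-shape feature maps."""
    return sum(feats) / len(feats)

def bifpn_layer(P3, P4, P5, P6, P7):
    """One BiFPN layer following the node equations in the text."""
    # top-down intermediate nodes
    X1 = fuse(P7, P6)
    X2 = fuse(P5, X1)
    X3 = fuse(P4, X2)
    # bottom-up output nodes (Y5 must be computed first, since Y4 uses it)
    Y5 = fuse(P3, X3)
    Y4 = fuse(P4, X3, Y5)
    Y3 = fuse(P5, X2, Y4)
    Y2 = fuse(P6, X1, Y3)
    Y1 = fuse(P7, Y2)
    return Y1, Y2, Y3, Y4, Y5
```

Stacking the layer means feeding Y1..Y5 of one layer back in as P7..P3 of the next.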
4) The benign/malignant classification network outputs the corresponding benign or malignant prediction and its probability from the fused feature map obtained by fusing the features output by the backbone network.
5) The detection-box prediction network outputs the lesion position of the detected target from the feature maps extracted by the feature pyramid network.
The image data set produced by the data sorting module is fed to the EfficientDet network to obtain lesion detection results on the different tomographic slices. If the feedback results are unsatisfactory, the model can be further adjusted and optimized.
Step 5: the region positioning module (FIG. 4) uses a lesion fusion algorithm to merge the detection boxes of the same lesion across different tomographic slices and discards detection results whose probability falls below a set threshold, reducing the false-positive detection rate on the slice images. The module also integrates the diagnosis labels of lesions at the same position to produce the final diagnosis for that lesion, for further analysis by a doctor.
5-1, first judging whether a single tomographic slice has more than one lesion position; if so, calculating the intersection-over-union (IoU) between every two lesion positions, then judging whether the IoU is greater than a threshold (e.g. 0.5); if so, screening out the lesion position with the lower confidence, and if not, keeping both.
5-2, arranging the slices remaining after step 5-1 in the acquisition order of the digital breast tomography scan, projecting them in turn onto the same plane, calculating the IoU between every two lesion positions of the remaining slices, then judging whether the IoU is greater than a threshold (e.g. 0.5); if so, dividing the two lesion positions into the same set, and otherwise into different sets.
5-3, calculating the mean confidence of the lesion positions in each set, and judging whether the mean is below a threshold (e.g. 0.5); if so, removing the set, and if not, keeping it.
5-4, since the lesion positions in one set refer to the same lesion, judging whether the number of lesion positions in the set is greater than a threshold (e.g. 1); if so, keeping the set, otherwise deleting it; finally, outputting the lesion positions of the remaining sets.
Further, in step 4, the model adjusting module comprises the model training submodule and the model testing submodule. The training submodule adopts the EfficientDet structure, a deep neural network (DNN), combined with a transfer-learning strategy, to detect lesions in breast tomographic images and classify them as benign or malignant. Training is supervised: the difference between the actual and expected outputs is minimized with stochastic gradient descent, gradients are computed by backpropagation, and the training error is reduced by continually adjusting the network parameters. The testing submodule evaluates the trained model and feeds the results back, yielding the lesion position and the benign/malignant label.
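A minimal, self-contained illustration of the supervised loop described here — gradient descent with backpropagated gradients — using a toy logistic classifier in place of EfficientDet (the function name and hyperparameters are assumptions for the sketch):

```python
import numpy as np

def train_sgd(X, y, lr=0.5, epochs=500, seed=0):
    """Minimize the gap between actual and expected output by gradient
    descent on a logistic classifier (stand-in for EfficientDet training)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = X @ w + b
        p = 1.0 / (1.0 + np.exp(-z))     # predicted probability (forward pass)
        grad_z = (p - y) / len(y)        # gradient of cross-entropy w.r.t. z
        w -= lr * (X.T @ grad_z)         # backpropagated weight update
        b -= lr * grad_z.sum()           # bias update
    return w, b
```

In the actual device the same loop shape applies, but the forward pass is the full EfficientDet network and the gradients flow through all its layers.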
Table 1. EfficientNet-B0 network architecture configuration
Any modification and variation of the present invention within the spirit and scope of the claims falls within the scope of the present invention.

Claims (6)

1. The digital breast tomography focus positioning device is characterized by comprising a data input module, an image preprocessing module, a data sorting module, a model adjusting module and a region positioning module;
the data input module is used for acquiring a digital breast tomography image with a label;
the image preprocessing module is used for preprocessing the digital breast tomography image transmitted by the data input module;
the data sorting module is used for carrying out data division on the labeled digital breast tomography image transmitted by the image preprocessing module to obtain a training set and a testing set;
the model adjusting module comprises a model training submodule and a model testing submodule; the model training submodule trains the EfficientDet network by using training set data; the model testing sub-module tests the trained EfficientDet network by using the test set data;
the region positioning module uses a lesion fusion algorithm to merge the detection boxes of the same lesion across different tomographic slices, discards detection results whose probability falls below a set threshold, and integrates the diagnosis labels of lesions at the same position.
2. The apparatus according to claim 1, wherein the preprocessing method of the digital breast tomogram in the image preprocessing module comprises invalid background removal and image denoising.
3. The device of claim 1, wherein the EfficientDet network comprises a backbone network, a feature pyramid module, a benign and malignant classification network, and a detection frame prediction network;
1) the backbone network selects EfficientNet-B0, and mainly comprises an input layer, convolutional layers, MBConv modules, a pooling layer and a fully connected layer;
2) the characteristic pyramid module uses a BiFPN structure, removes part of the convolution layer on the basis of PAFPN, and adds a shortcut structure;
3) the benign and malignant classification network outputs corresponding benign and malignant prediction results and the probability thereof according to a fusion feature map obtained by fusing features output by the backbone network;
4) and the detection frame prediction network outputs the focus position of the detection target according to the feature graph extracted by the feature pyramid network.
4. The device for locating a lesion of digital breast tomographic images as claimed in claim 3, wherein the backbone network is specifically:
the input image obtains a characteristic diagram through a backbone network, and the characteristic diagram obtained from the backbone network is put into the same set according to the corresponding breast, so that each set is the characteristic diagram obtained after the same breast sequence tomography image passes through the backbone network;
in the same breast's tomographic sequence, the middle slices carry more information; exploiting this with a normal (Gaussian) distribution, during feature fusion each feature map is weighted by the probability density corresponding to its slice position, and the weighted feature maps in the same set are summed to obtain the fused feature map, as given by Equation 1:
F = Σₓ (1 / (σ√(2π))) · exp(−(x − μ)² / (2σ²)) · P(x)    (formula 1)
wherein μ is the mean of the slice indices of the image sequence, σ is their standard deviation, x is the slice index of an image, and P is the feature map corresponding to that index.
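The Gaussian-weighted fusion of claim 4 can be sketched as follows. This assumes feature maps of equal shape indexed by slice position within one breast sequence; the function name and the equal-weighting of spatial locations are illustrative.

```python
import numpy as np

def gaussian_weighted_fusion(feature_maps):
    """Weight each slice's feature map by the normal pdf of its index and sum (formula 1)."""
    n = len(feature_maps)
    x = np.arange(n, dtype=float)      # slice indices within the sequence
    mu, sigma = x.mean(), x.std()      # mean and std of the index sequence
    w = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    return sum(wi * P for wi, P in zip(w, feature_maps))

# Five dummy 4x4 feature maps standing in for one breast's tomographic sequence
maps = [np.ones((4, 4)) * i for i in range(5)]
fused = gaussian_weighted_fusion(maps)
```

Because the weights peak at the middle index, central slices dominate the fused map, matching the claim's observation that middle slices carry more information.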
5. The digital breast tomographic image lesion locating apparatus of claim 3, wherein the feature pyramid module comprises five input nodes, three intermediate nodes, and five output nodes; wherein the intermediate node X1 is obtained by fusing the input node P7 and the input node P6; the intermediate node X2 is obtained by fusing the input node P5 and the intermediate node X1; the intermediate node X3 is obtained by fusing the input node P4 and the intermediate node X2; the output node Y1 is obtained by fusing the input node P7 and the output node Y2; the output node Y2 is obtained by fusing the input node P6, the intermediate node X1, and the output node Y3; the output node Y3 is obtained by fusing the input node P5, the intermediate node X2, and the output node Y4; the output node Y4 is obtained by fusing the input node P4, the intermediate node X3, and the output node Y5; the output node Y5 is obtained by fusing the input node P3 and the intermediate node X3; thus, stacked BiFPN layers form a structure that passes top-down, then bottom-up, and repeats.
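The node wiring of claim 5 can be written out directly. The `fuse` function below is a placeholder (a real BiFPN fuses nodes with learned weights, resampling, and convolutions); only the connectivity reflects the claim.

```python
def fuse(*nodes):
    """Placeholder fusion: average the inputs (stand-in for weighted BiFPN fusion)."""
    return sum(nodes) / len(nodes)

def bifpn_layer(P3, P4, P5, P6, P7):
    # Top-down pass: intermediate nodes
    X1 = fuse(P7, P6)
    X2 = fuse(P5, X1)
    X3 = fuse(P4, X2)
    # Bottom-up pass: output nodes (Y5 first, since Y4 depends on it, etc.)
    Y5 = fuse(P3, X3)
    Y4 = fuse(P4, X3, Y5)
    Y3 = fuse(P5, X2, Y4)
    Y2 = fuse(P6, X1, Y3)
    Y1 = fuse(P7, Y2)
    return Y1, Y2, Y3, Y4, Y5

# Scalars stand in for feature maps to make the wiring easy to trace
outs = bifpn_layer(1.0, 2.0, 3.0, 4.0, 5.0)
```

Stacking this layer (feeding Y1..Y5 back in as P7..P3 of the next layer) yields the repeated top-down/bottom-up structure described in the claim.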
6. The device for locating a lesion of digital breast tomographic images according to claim 3, wherein the lesion fusion algorithm of the region locating module operates as follows:
5-1, first determining whether a single tomographic slice contains more than one lesion position; if so, computing the intersection-over-union between each pair of lesion positions and checking whether it exceeds a threshold; if it does, discarding the lesion position with the lower confidence, otherwise keeping both;
5-2, arranging the slices remaining after step 5-1 in the acquisition order of the digital breast tomographic scan, projecting them in turn onto the same plane, computing the intersection-over-union between each pair of lesion positions, and checking whether it exceeds a threshold; if so, assigning the two lesion positions to the same set, otherwise to different sets;
5-3, computing the mean confidence of the lesion positions in each set; if the mean is below a threshold, removing the set, otherwise keeping it;
5-4, since the lesion positions in one set correspond to the same lesion, checking whether the number of lesion positions in the set exceeds a threshold; if so, keeping the set, otherwise deleting it; and finally outputting the lesion positions of the remaining sets.
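Steps 5-1 through 5-4 of the claim can be sketched as follows, over boxes represented as (x1, y1, x2, y2, confidence) tuples. The box format, the grouping strategy, and all three threshold values are illustrative assumptions, not the patented parameters.

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fuse_lesions(slices, iou_thr=0.5, conf_thr=0.3, count_thr=2):
    # Step 5-1: per-slice suppression — drop the lower-confidence box of
    # any pair whose IoU exceeds the threshold.
    kept = []
    for boxes in slices:
        nms = []
        for b in sorted(boxes, key=lambda b: -b[4]):
            if all(iou(b[:4], k[:4]) <= iou_thr for k in nms):
                nms.append(b)
        kept.extend(nms)
    # Step 5-2: project all slices onto one plane and group boxes whose
    # IoU exceeds the threshold into the same set.
    sets = []
    for b in kept:
        for s in sets:
            if any(iou(b[:4], m[:4]) > iou_thr for m in s):
                s.append(b)
                break
        else:
            sets.append([b])
    # Steps 5-3 and 5-4: keep sets whose mean confidence and member count
    # both clear their thresholds; output one representative box per set.
    out = []
    for s in sets:
        if sum(b[4] for b in s) / len(s) >= conf_thr and len(s) >= count_thr:
            out.append(max(s, key=lambda b: b[4]))
    return out

slices = [[(0, 0, 10, 10, 0.9), (1, 1, 11, 11, 0.8)],  # overlapping pair in one slice
          [(0, 0, 10, 10, 0.7)],                       # same lesion, next slice
          [(50, 50, 60, 60, 0.1)]]                     # isolated, low-confidence box
result = fuse_lesions(slices)
```

On this toy input, the overlapping pair is reduced to the higher-confidence box, the two slices' boxes are grouped into one set that passes both thresholds, and the isolated low-confidence box is rejected at step 5-3.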
CN202111434425.XA 2021-11-29 2021-11-29 Digital breast tomography focus positioning device Pending CN114092463A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111434425.XA CN114092463A (en) 2021-11-29 2021-11-29 Digital breast tomography focus positioning device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111434425.XA CN114092463A (en) 2021-11-29 2021-11-29 Digital breast tomography focus positioning device

Publications (1)

Publication Number Publication Date
CN114092463A true CN114092463A (en) 2022-02-25

Family

ID=80305756

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111434425.XA Pending CN114092463A (en) 2021-11-29 2021-11-29 Digital breast tomography focus positioning device

Country Status (1)

Country Link
CN (1) CN114092463A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820590A (en) * 2022-06-06 2022-07-29 北京医准智能科技有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN117115515A (en) * 2023-08-07 2023-11-24 南方医科大学南方医院 Digital breast three-dimensional tomography structure distortion focus image processing method

Similar Documents

Publication Publication Date Title
CN111047572B (en) Automatic spine positioning method in medical image based on Mask RCNN
CN112101451B (en) Breast cancer tissue pathological type classification method based on generation of antagonism network screening image block
Xu et al. DeepLN: a framework for automatic lung nodule detection using multi-resolution CT screening images
US6125194A (en) Method and system for re-screening nodules in radiological images using multi-resolution processing, neural network, and image processing
US10997466B2 (en) Method and system for image segmentation and identification
CN105640577A (en) Method and system automatically detecting local lesion in radiographic image
CN107451615A (en) Thyroid papillary carcinoma Ultrasound Image Recognition Method and system based on Faster RCNN
CN111553892B (en) Lung nodule segmentation calculation method, device and system based on deep learning
CN111986177A (en) Chest rib fracture detection method based on attention convolution neural network
CN114092463A (en) Digital breast tomography focus positioning device
Li et al. Classification of breast mass in two‐view mammograms via deep learning
CN113706491B (en) Meniscus injury grading method based on mixed attention weak supervision migration learning
CN108305253A (en) A kind of pathology full slice diagnostic method based on more multiplying power deep learnings
CN115131300B (en) Intelligent three-dimensional diagnosis method and system for osteoarthritis based on deep learning
KR20220095342A (en) The diagnostic method and system of lymph node metastasis in thyroid cancer using ct image
EP4118617A1 (en) Automated detection of tumors based on image processing
CN111833321A (en) Window-adjusting optimization-enhanced intracranial hemorrhage detection model and construction method thereof
Souid et al. Xception-ResNet autoencoder for pneumothorax segmentation
CN117710760A (en) Method for detecting chest X-ray focus by using residual noted neural network
AU2020223750B2 (en) Method and System for Image Annotation
CN111755131A (en) COVID-19 early screening and severity degree evaluation method and system based on attention guidance
Sha et al. The improved faster-RCNN for spinal fracture lesions detection
CN116883341A (en) Liver tumor CT image automatic segmentation method based on deep learning
Fauci et al. A massive lesion detection algorithm in mammography
CN115409812A (en) CT image automatic classification method based on fusion time attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination