CN116993758A - Method, device, equipment and medium for segmenting breast magnetic resonance image focus

Method, device, equipment and medium for segmenting breast magnetic resonance image focus

Info

Publication number
CN116993758A
CN116993758A (application number CN202311006012.0A)
Authority
CN
China
Prior art keywords
image
breast
target
magnetic resonance
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311006012.0A
Other languages
Chinese (zh)
Inventor
王毅 (Wang Yi)
罗舜聪 (Luo Shuncong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN202311006012.0A priority Critical patent/CN116993758A/en
Publication of CN116993758A publication Critical patent/CN116993758A/en
Pending legal-status Critical Current

Classifications

    • G06T7/11 Region-based segmentation (G06T7/00 Image analysis; G06T7/10 Segmentation; Edge detection)
    • G06N3/0464 Convolutional networks [CNN, ConvNet] (G06N3/00 Computing arrangements based on biological models; G06N3/02 Neural networks; G06N3/04 Architecture)
    • G06N3/048 Activation functions
    • G06N3/08 Learning methods
    • G06T3/4023 Decimation- or insertion-based scaling, e.g. pixel or line decimation (G06T3/40 Scaling the whole image or part thereof)
    • G06T3/4046 Scaling the whole image or part thereof using neural networks
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T2207/10088 Magnetic resonance imaging [MRI] (G06T2207/10 Image acquisition modality; G06T2207/10072 Tomographic images)
    • G06T2207/20081 Training; Learning (G06T2207/20 Special algorithmic details)
    • G06T2207/20132 Image cropping (G06T2207/20112 Image segmentation details)
    • G06T2207/30068 Mammography; Breast (G06T2207/30 Subject of image; G06T2207/30004 Biomedical image processing)


Abstract

The application relates to a method, a device, equipment and a medium for segmenting breast magnetic resonance image focuses. The method comprises the following steps: acquiring breast magnetic resonance images from a plurality of centers and performing image preprocessing to obtain an initial image; training a weight distribution model based on the initial image to obtain a target weight distribution model; training a breast focus segmentation model based on the breast magnetic resonance images and the weight scores to obtain a target breast focus segmentation model; acquiring to-be-processed breast magnetic resonance images from the plurality of centers, predicting them through the target weight distribution model and the target breast focus segmentation model to obtain the target weight score and segmentation result corresponding to each center model, and performing weighted summation of the segmentation results with the corresponding target weight scores to obtain the target breast focus segmentation result. The application improves the model training speed and the segmentation accuracy of breast focus images.

Description

Method, device, equipment and medium for segmenting breast magnetic resonance image focus
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a method, a device, equipment and a medium for segmenting breast magnetic resonance image focus.
Background
Breast magnetic resonance imaging is the most sensitive imaging modality in current clinical breast cancer examination and one of the common breast examination methods. Multi-sequence magnetic resonance images help doctors classify breast tumor lesions, improving diagnostic efficiency and accuracy. However, images produced by the magnetic resonance equipment of different manufacturers differ in image parameters such as brightness, contrast and resolution. A single-center focus segmentation model generalizes poorly on the minority of cases with large distribution differences, and its generalization degrades further when the amount of data is insufficient. Even on cases with common distributions, the focus segmentation accuracy of a single-center model still has room for improvement: more focus feature information needs to be supplemented, and redundant information from high-contrast regions such as blood vessels and organs needs to be reduced.
Existing approaches propose a deep network containing multiple center classifiers, each associated with one center. The probability that a target image belongs to each center is predicted, and the prediction results of the center models are combined to improve the generalization ability of the model. That work, however, addresses a classification task on two-dimensional images; when faced with the three-dimensional breast magnetic resonance focus segmentation task, a network model that combines a multi-center model with a classifier converges slowly in actual training, and the poor in-network classification of images degrades segmentation performance. A method for segmenting breast magnetic resonance image focuses is therefore needed to improve the model training speed and the segmentation accuracy of breast focus images.
Disclosure of Invention
The embodiment of the application aims to provide a method, a device, equipment and a medium for segmenting breast magnetic resonance image focuses, so as to improve the model training speed and the segmentation accuracy of breast focus images.
In order to solve the above technical problems, an embodiment of the present application provides a method for segmenting a breast magnetic resonance image focus, including:
acquiring breast magnetic resonance images from a plurality of centers, and performing image preprocessing on the breast magnetic resonance images to obtain an initial image;
randomly cutting the initial image to generate training data, and training a weight distribution model based on the training data to obtain a target weight distribution model, wherein the target weight distribution model comprises weight scores corresponding to a plurality of center models;
performing image preprocessing on the mammary gland magnetic resonance image to obtain a basic image, and performing Gaussian smoothing filtering and image deformation processing on the basic image to obtain a target image;
training the breast focus segmentation model based on the target image and the weight score to obtain a target breast focus segmentation model;
acquiring to-be-processed breast magnetic resonance images from the plurality of centers, preprocessing and interpolating the to-be-processed breast magnetic resonance images to obtain images to be processed, and predicting the images to be processed through the target weight distribution model to obtain target weight scores corresponding to each center model;
and performing prediction processing on the images to be processed through the target breast focus segmentation model to obtain a segmentation result corresponding to each center model, and performing weighted summation processing on the segmentation results and the corresponding target weight scores to obtain a target breast focus segmentation result.
In order to solve the above technical problems, an embodiment of the present application provides a segmentation apparatus for breast magnetic resonance imaging lesions, including:
the initial image generation unit is used for acquiring a plurality of central breast magnetic resonance images and carrying out image preprocessing on the breast magnetic resonance images to obtain initial images;
the weight distribution model training unit is used for randomly cutting the initial image to generate training data, training the weight distribution model based on the training data to obtain a target weight distribution model, wherein the target weight distribution model comprises weight scores corresponding to a plurality of center models;
the target image generation unit is used for carrying out image preprocessing on the mammary gland magnetic resonance image to obtain a basic image, and carrying out Gaussian smoothing filtering and image deformation processing on the basic image to obtain a target image;
The breast focus segmentation model training unit is used for training the breast focus segmentation model based on the target image and the weight score to obtain a target breast focus segmentation model;
the weight score generating unit is used for acquiring a plurality of breast magnetic resonance images to be processed of the center, preprocessing and interpolating the breast magnetic resonance images to be processed to obtain images to be processed, and predicting the images to be processed through the target weight distribution model to obtain target weight scores corresponding to each center model;
and the target breast focus segmentation result generation unit is used for carrying out prediction processing on the image to be processed through the target breast focus segmentation model to obtain a segmentation result corresponding to each center model, and carrying out weighted summation processing on the segmentation result and the corresponding target weight score to obtain a target breast focus segmentation result.
In order to solve the above technical problems, the application adopts a technical scheme that: a computer device is provided, comprising one or more processors and a memory for storing one or more programs, such that the one or more processors implement the above method for segmenting breast magnetic resonance image focuses.
In order to solve the technical problems, the application adopts a technical scheme that: a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of segmentation of breast magnetic resonance imaging lesions as defined in any one of the preceding claims.
The embodiment of the application provides a method, a device, equipment and a medium for segmenting breast magnetic resonance image focuses. According to the embodiment of the application, weight scores corresponding to a plurality of center models are output by the weight distribution model, and the breast focus segmentation model is trained based on these weight scores, which improves the convergence rate of model training; meanwhile, because the segmentation results of the different center models are combined according to their weight scores, the segmentation accuracy of breast focus images is improved.
Drawings
In order to more clearly illustrate the solution of the present application, a brief description will be given below of the drawings required for the description of the embodiments of the present application, it being apparent that the drawings in the following description are some embodiments of the present application, and that other drawings may be obtained from these drawings without the exercise of inventive effort for a person of ordinary skill in the art.
Fig. 1 is a flowchart of a method for segmenting a breast magnetic resonance image lesion according to an embodiment of the present application;
fig. 2 is a flowchart of a segmentation framework of a breast magnetic resonance image lesion provided in an embodiment of the present application;
fig. 3 is a flowchart of a sub-flowchart implementation of a method for segmenting a breast magnetic resonance image lesion provided by an embodiment of the present application;
fig. 4 is a flowchart of a sub-flowchart implementation of a method for segmenting a breast magnetic resonance image lesion provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a weight distribution model according to an embodiment of the present application;
fig. 6 is a flowchart of a sub-flowchart implementation of a method for segmenting a breast magnetic resonance image lesion provided by an embodiment of the present application;
fig. 7 is a flowchart of a sub-flowchart implementation of a method for segmenting a breast magnetic resonance image lesion provided by an embodiment of the present application;
fig. 8 is a schematic structural diagram of a breast lesion segmentation model according to an embodiment of the present application;
FIG. 9 is a flowchart of a focus segmentation framework for fusion multi-center models provided by an embodiment of the present application;
fig. 10 is a flowchart of a sub-flowchart implementation of a method for segmenting a breast magnetic resonance image lesion provided by an embodiment of the present application;
fig. 11 is a flowchart of a sub-flowchart implementation of a method for segmenting a breast magnetic resonance image lesion provided by an embodiment of the present application;
Fig. 12 is a schematic diagram of a segmentation apparatus for breast magnetic resonance imaging lesions provided in an embodiment of the present application;
fig. 13 is a schematic diagram of a computer device according to an embodiment of the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the applications herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description of the application and the claims and the description of the drawings above are intended to cover a non-exclusive inclusion. The terms first, second and the like in the description and in the claims or in the above-described figures, are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In order to make the person skilled in the art better understand the solution of the present application, the technical solution of the embodiment of the present application will be clearly and completely described below with reference to the accompanying drawings.
The present application will be described in detail with reference to the drawings and embodiments.
It should be noted that, the method for segmenting a breast magnetic resonance image focus provided by the embodiment of the application is generally executed by a server, and correspondingly, the device for segmenting the breast magnetic resonance image focus is generally configured in the server.
Referring to fig. 1 and 2, fig. 1 shows a specific implementation of a method for segmenting a breast magnetic resonance image lesion, and fig. 2 is a flowchart of a segmentation framework of a breast magnetic resonance image lesion according to an embodiment of the present application.
It should be noted that, provided substantially the same results are obtained, the method of the present application is not limited to the flow sequence shown in fig. 1. The method includes the following steps:
s1: and acquiring a plurality of breast magnetic resonance images of the centers, and performing image preprocessing on the breast magnetic resonance images to obtain an initial image.
As shown in fig. 2, the first step of the segmentation framework of the embodiment of the present application outputs the weight score corresponding to each center model from the input image. The second step is breast cancer focus segmentation, in which each center model outputs a segmentation result; the results corresponding to the center models in the second step are weighted and summed according to the weights from the first step to obtain the final segmentation result.
Here, a center is a data center, for example a hospital. There are as many center-model weight scores as there are centers involved. For example, if images from three hospitals (centers) are input, weight scores corresponding to the three center models are output. These weight scores weight the center models against each other, not the encoding modules within a model.
Wherein the image preprocessing includes image cropping, downsampling, normalization and data enhancement. The acquired image of the embodiment of the application can be a breast magnetic resonance image of phase 1 or phase 2.
Referring to fig. 3, fig. 3 shows a specific embodiment of step S1, which is described in detail as follows:
s11: acquiring the breast magnetic resonance images of a plurality of the centers.
S12: and identifying a mammary gland pixel region contained in the mammary gland magnetic resonance image, and cutting the mammary gland pixel region in the mammary gland magnetic resonance image to obtain a cut image.
S13: and downsampling the cut image to a preset resolution to obtain a downsampled image.
S14: subtracting the average intensity value of all voxels from the intensity value of each voxel in the downsampled image and dividing the result by the standard deviation of all voxels, so as to perform image normalization processing on the downsampled image to obtain a normalized image.
S15: and performing image overturning and image contrast changing processing on the standardized image to obtain the initial image.
Specifically, after contrast enhancement of the breast magnetic resonance image, the contrast of the breast focus is higher, which facilitates focus segmentation. The magnetic resonance image data must be cropped, downsampled, normalized and enhanced before model training. Breast magnetic resonance images are high-resolution three-dimensional data; common sizes are 896×896×120, 704×704×160, 512×512×220, 336×336×300, and the like. To reduce the memory occupied by model training, the computational overhead and redundant computation, regions that do not contain breast need to be automatically identified and cropped. The specific method is as follows: the cropping width is kept consistent with that of the original image, and the image is scanned line by line from top to bottom to identify the region containing breast pixels, so the cropped region lies between the first and the last row containing pixels with intensity greater than 0 (i.e. containing breast). The cropped image is then downsampled to a preset resolution to obtain a downsampled image. In a specific embodiment, the cropped image is downsampled to a lower resolution of 256×256×128, which enlarges the receptive field of the model without increasing the memory occupied.
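The row-scan cropping and downsampling described above can be sketched as follows. This is a minimal Python/NumPy illustration; the function names, the (rows, columns, slices) axis layout and the use of scipy.ndimage.zoom are assumptions of the sketch rather than part of the embodiment.

```python
import numpy as np
from scipy.ndimage import zoom

def crop_breast_region(volume: np.ndarray) -> np.ndarray:
    """Keep the rows between the first and last row whose pixel intensity
    exceeds 0 (i.e. the rows containing breast); the width is left unchanged."""
    row_has_breast = (volume > 0).any(axis=(1, 2))  # scan line by line, top to bottom
    nonzero = np.where(row_has_breast)[0]
    return volume[nonzero[0]:nonzero[-1] + 1]

def downsample(volume: np.ndarray, target=(256, 256, 128)) -> np.ndarray:
    """Resample the cropped volume to the preset lower resolution 256x256x128."""
    factors = [t / s for t, s in zip(target, volume.shape)]
    return zoom(volume, factors, order=1)  # linear interpolation
```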
Image normalization subtracts the average intensity value of all voxels from the intensity value of each voxel in the image and divides the result by the standard deviation of all voxels to obtain a normalized image. Downsampling and normalization increase the convergence rate of weight distribution model training and the speed of weight distribution model inference. Finally, image flipping and image enhancement that changes the image contrast are performed to improve the generalization ability of the weight distribution model. In one embodiment, the image is rotated within the range of -10° to 10°, and the gamma value used for the image contrast ranges from 0.75 to 1.25.
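A sketch of the normalization and enhancement just described follows. The choice of flip axis and the rescaling of intensities to [0, 1] around the gamma transform are illustrative details not fixed by the embodiment.

```python
import numpy as np

def zscore_normalize(volume: np.ndarray) -> np.ndarray:
    """Subtract the mean intensity of all voxels and divide the result by
    the standard deviation of all voxels."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)

def enhance(volume: np.ndarray, rng=None) -> np.ndarray:
    """Random flip plus a gamma-based contrast change with gamma in [0.75, 1.25]."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < 0.5:
        volume = volume[:, ::-1].copy()              # flip along one spatial axis
    gamma = rng.uniform(0.75, 1.25)
    lo, hi = volume.min(), volume.max()
    unit = (volume - lo) / (hi - lo + 1e-8)          # map to [0, 1] before the gamma curve
    return unit ** gamma * (hi - lo) + lo
```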
S2: and randomly cutting the initial image to generate training data, and training the weight distribution model based on the training data to obtain a target weight distribution model.
The target weight distribution model comprises weight scores corresponding to the plurality of center models.
The models of the embodiment of the application comprise a weight distribution model and a breast focus segmentation model, both of encoder-decoder structure. The encoder is composed of n encoding blocks, and the decoder of n-1 decoding blocks and three fully connected layers. n is typically 4 to 7 and can be adjusted according to the actual memory constraints and the desired level of feature abstraction; n is 5 in this embodiment of the application.
Referring to fig. 4 and fig. 5, fig. 4 shows a specific implementation manner of step S2, and fig. 5 is a schematic structural diagram of a weight distribution model provided in an embodiment of the present application, which is described in detail below:
s21: and randomly cutting the mammary gland magnetic resonance image of any single use case in the initial image into data with a preset size to generate single training data, and generating the training data after all the initial images are randomly cut.
S22: inputting the training data into the weight distribution model one piece of single training data at a time to perform feature extraction and obtain output weights.
In the embodiment of the application, owing to graphics memory limitations, the breast magnetic resonance three-dimensional image data of a single case must be randomly cropped into fixed-size pieces, called patches, and training proceeds in units of patches. In the embodiment of the application, the single training data is a patch. Further, in the implementation of the application, the batch size is 2 and can be adjusted according to the actual graphics memory size.
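Patch extraction from a single case might look as follows; the 128×128×64 patch size is an illustrative assumption, since the embodiment only states that the size is fixed and limited by graphics memory.

```python
import numpy as np

def random_patch(volume: np.ndarray, patch_size=(128, 128, 64), rng=None) -> np.ndarray:
    """Randomly crop one fixed-size patch from a single case's 3D volume."""
    if rng is None:
        rng = np.random.default_rng()
    starts = [int(rng.integers(0, s - p + 1)) for s, p in zip(volume.shape, patch_size)]
    return volume[tuple(slice(st, st + p) for st, p in zip(starts, patch_size))]
```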
Referring to fig. 6, fig. 6 shows a specific embodiment of step S22, which is described in detail as follows:
s221: and inputting the training data into the weight distribution model one by one according to the single training data.
S222: and carrying out convolution processing on the single training data for a plurality of times through the encoder, so that the number of channels corresponding to the single training data is increased to a first preset multiple, and the size of the single training data is downsampled to a first preset size, thereby obtaining an initial feature map.
S223: performing convolution processing on the initial feature map a plurality of times through the decoder so as to reduce the channel number of the initial feature map to a second preset multiple, and upsampling the size of the initial feature map to a second preset size to obtain a basic feature map.
S224: and outputting the output weights corresponding to the plurality of center models based on the basic feature map through the full connection layer.
In one embodiment, as shown in fig. 5, a single encoding block performs the sequence of convolution, normalization and activation twice; in the embodiment of the application, the activation function is the ReLU function. The first convolution expands the channel number of the single training data to 2 times the original, i.e. increases the number of image features. The second convolution uses a stride of 2, downsampling the size of the single training data to 1/2 of the original. The activation function adds a nonlinear transformation. A single decoding block likewise performs convolution, normalization and activation twice. Its first convolution reduces the channel number of the input feature map to 1/2 of the original, reducing the number of image features. The second operation is a transposed convolution, which upsamples the size of the input feature map to 2 times the input size. Finally, the fully connected layers output the output weights corresponding to the plurality of center models. In one embodiment with four center models, the output weights are 0.3, 0.1, 0.4 and 0.2.
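A minimal PyTorch sketch of one encoding block, one decoding block and the fully connected head is given below. The use of InstanceNorm3d (the embodiment only says "normalizing"), the hidden widths of the three fully connected layers, and the softmax that makes the four weights sum to 1 (consistent with the example 0.3, 0.1, 0.4, 0.2) are assumptions of the sketch.

```python
import torch
import torch.nn as nn

class EncodeBlock(nn.Module):
    """Two rounds of convolution + normalization + ReLU: the first convolution
    doubles the channels, the second uses stride 2 to halve the spatial size."""
    def __init__(self, in_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, 2 * in_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(2 * in_ch), nn.ReLU(inplace=True),
            nn.Conv3d(2 * in_ch, 2 * in_ch, kernel_size=3, stride=2, padding=1),
            nn.InstanceNorm3d(2 * in_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class DecodeBlock(nn.Module):
    """A convolution halves the channels, then a transposed convolution
    doubles the spatial size."""
    def __init__(self, in_ch: int):
        super().__init__()
        self.reduce = nn.Sequential(
            nn.Conv3d(in_ch, in_ch // 2, kernel_size=3, padding=1),
            nn.InstanceNorm3d(in_ch // 2), nn.ReLU(inplace=True),
        )
        self.up = nn.ConvTranspose3d(in_ch // 2, in_ch // 2, kernel_size=2, stride=2)

    def forward(self, x):
        return self.up(self.reduce(x))

# Three fully connected layers turn the final feature map into one weight per
# center model; softmax makes the four weights sum to 1 (e.g. 0.3, 0.1, 0.4, 0.2).
head = nn.Sequential(nn.Flatten(), nn.LazyLinear(128), nn.ReLU(),
                     nn.Linear(128, 64), nn.ReLU(),
                     nn.Linear(64, 4), nn.Softmax(dim=1))
```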
S23: and training the weight distribution model based on the output weight and the cross entropy loss function to obtain the target weight distribution model.
In the embodiment of the application, during model training the training data are input into the weight distribution model one piece of single training data at a time, i.e. training is performed in units of patches. The number of training epochs is set according to the actual amount of training data; for training data of 100 cases, 400 epochs are typically set. The initial learning rate (initial_lr) of training is set to 0.01, and the learning rate (lr) is adjusted as follows:
lr = initial_lr × (1 - epoch / epoch_max)^0.9, where epoch is the current training epoch and epoch_max is the total number of training epochs; this schedule gradually reduces the learning rate as the number of completed epochs grows. The model optimizer may be Adam or SGD; the embodiment of the application preferably adopts the Adam optimizer. The loss function is the cross entropy loss function.
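The schedule can be sketched as follows. Since the original formula image is not reproduced in the text, the polynomial form and the 0.9 exponent are assumptions consistent with the stated behavior.

```python
def poly_lr(epoch: int, epoch_max: int, initial_lr: float = 0.01,
            exponent: float = 0.9) -> float:
    """Learning rate that decays gradually toward 0 as epoch approaches epoch_max."""
    return initial_lr * (1 - epoch / epoch_max) ** exponent

# Example usage with the Adam optimizer preferred by the embodiment:
# for epoch in range(400):
#     for group in optimizer.param_groups:
#         group["lr"] = poly_lr(epoch, 400)
```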
S3: and performing image preprocessing on the mammary gland magnetic resonance image to obtain a basic image, and performing Gaussian smoothing filtering and image deformation processing on the basic image to obtain a target image.
In an embodiment of the present application, the image preprocessing here includes image cropping, resampling and normalization. It is the same as the procedure of steps S11 to S15 and is not repeated here. The difference from the data processing described above lies in the data enhancement step: on top of image flipping and contrast change, the embodiment of the application applies Gaussian smoothing filtering (standard deviation in the range 10-20) and image deformation (deformation coefficient in the range 0.7-1.4). Focus segmentation requires further improving model generalization and segmentation accuracy, and the breast magnetic resonance images output by the devices of different centers differ in resolution, size and the like; Gaussian smoothing filtering and image deformation can simulate the image differences between some of these devices.
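The two additional enhancement operations might be sketched as below; interpreting "image deformation with a coefficient of 0.7-1.4" as a spatial rescaling is an assumption of this illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def gaussian_smooth(volume: np.ndarray, rng=None) -> np.ndarray:
    """Gaussian smoothing with a standard deviation drawn from the stated 10-20 range."""
    if rng is None:
        rng = np.random.default_rng()
    return gaussian_filter(volume, sigma=rng.uniform(10, 20))

def deform(volume: np.ndarray, rng=None) -> np.ndarray:
    """Resample the volume with a deformation coefficient in the stated 0.7-1.4 range."""
    if rng is None:
        rng = np.random.default_rng()
    coeff = rng.uniform(0.7, 1.4)
    grids = np.meshgrid(*[np.arange(s) / coeff for s in volume.shape], indexing="ij")
    return map_coordinates(volume, np.asarray(grids), order=1, mode="nearest")
```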
S4: and training the breast focus segmentation model based on the target image and the weight score to obtain a target breast focus segmentation model.
Referring to fig. 7 to 9, fig. 7 shows a specific implementation manner of step S4, and fig. 8 is a schematic structural diagram of a breast lesion segmentation model according to an embodiment of the present application; FIG. 9 is a flowchart of a focus segmentation framework for fusion multi-center models provided by an embodiment of the present application; the details are as follows:
s41: and randomly sampling the breast magnetic resonance image of any single use case in the target image to obtain random sampling data.
S42: and identifying a focus area in the mammary gland magnetic resonance image, obtaining a target focus area, randomly selecting a point in the target focus area as a center point, and sampling a preset size according to the center point to obtain segmentation training data.
S43: training the breast focus segmentation model based on the random sampling data, the segmentation training data and the weight score to obtain a target breast focus segmentation model.
In the embodiment of the application, because the breast focus region scanned in a breast magnetic resonance examination is small, directly adopting the random-cropping sampling of step S2 would leave a large amount of the training data without any focus, so the model could not learn effective information and the convergence rate would drop sharply. To address the low focus occupancy among sampled patches, two samplings are performed for each training sample: the first sampling is random; in the second sampling, the focus region is first located in the image, and a patch-sized sample is taken with a random point in the focus region as the center point. That is, the breast magnetic resonance image of any single case in the target image is randomly sampled to obtain random sampling data; then a focus region is identified in the breast magnetic resonance image to obtain a target focus region, a point in the target focus region is randomly selected as the center point, and a sample of the preset size is taken around that center point to obtain segmentation training data.
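The two-sampling strategy can be sketched as follows, reusing the random_patch helper from the earlier sketch; treating lesion_mask as the binary label volume is an assumption of the illustration.

```python
import numpy as np

def sample_pair(volume, lesion_mask, patch_size=(128, 128, 64), rng=None):
    """First sampling: a fully random patch. Second sampling: a patch of the
    same size centred on a randomly chosen voxel inside the focus region."""
    if rng is None:
        rng = np.random.default_rng()
    random_data = random_patch(volume, patch_size, rng)

    lesion_voxels = np.argwhere(lesion_mask > 0)          # locate the focus region
    center = lesion_voxels[rng.integers(len(lesion_voxels))]
    starts = [int(np.clip(c - p // 2, 0, s - p))          # keep the window inside the volume
              for c, p, s in zip(center, patch_size, volume.shape)]
    sl = tuple(slice(st, st + p) for st, p in zip(starts, patch_size))
    return random_data, volume[sl]
```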
Referring to fig. 10, fig. 10 shows a specific embodiment of step S43, which is described in detail as follows:
s431: and inputting the random sampling data and the segmentation training data into the breast focus segmentation model one by one, and respectively carrying out downsampling and upsampling on the random sampling data and the segmentation training data through the breast focus segmentation model to obtain a basic downsampled image and an upsampled image.
Specifically, downsampling and convolution are performed by the encoding modules of the breast focus segmentation model to obtain a basic downsampled image, and the decoding modules perform upsampling on the basic downsampled image to obtain an upsampled image.
s432: and combining the basic downsampled image and the upsampled image to obtain a combined image.
S433: and carrying out convolution processing on the combined image by a decoding module in the breast focus segmentation model to obtain a focus segmentation result graph.
S434: training the breast focus segmentation model based on the focus segmentation result graph and the weight score to obtain a target breast focus segmentation model.
In the implementation of the application, on the basis of the weight distribution model structure, the feature map saved before downsampling in the encoding module is merged with the input upsampled feature map before the decoding module performs its convolution operation. This addresses the loss of image detail caused by repeated downsampling in the encoding modules: segmentation of fine focuses is especially sensitive to detailed feature information, and saving the pre-downsampling feature map preserves the detail information of the image. After each decoding layer finishes decoding, its output also enters the loss function; supervising the feature maps of the decoding modules at multiple layers in this way improves the segmentation performance of the model on tiny focuses.
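One decoding stage with the saved pre-downsampling feature map merged in, plus a per-stage output head for the multi-layer supervision, might be sketched as follows; the concatenation-based merge and the channel counts are assumptions of the sketch.

```python
import torch
import torch.nn as nn

class SegDecodeStage(nn.Module):
    """Upsample, concatenate the encoder feature map saved before downsampling,
    convolve, and emit a per-stage segmentation map that also enters the loss."""
    def __init__(self, ch: int, num_classes: int = 2):
        super().__init__()
        self.up = nn.ConvTranspose3d(2 * ch, ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv3d(2 * ch, ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(ch), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv3d(ch, num_classes, kernel_size=1)  # deep-supervision output

    def forward(self, below, skip):
        x = torch.cat([self.up(below), skip], dim=1)  # merge before convolving
        x = self.conv(x)
        return x, self.head(x)                        # features for next stage + loss map
```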
Referring to fig. 11, fig. 11 shows a specific embodiment of step S434, which is described in detail below:
s4341: and carrying out change processing on the weight score corresponding to each center model through a random average module to obtain a change weight score.
S4342: multiplying the focus segmentation result graph with the corresponding change weight score to obtain a plurality of weighted feature graphs.
S4343: and adding the weighted feature images to obtain a target segmentation feature image, and training the breast focus segmentation model based on the segmentation training data, the cross entropy loss function and the Dice loss function to obtain the target breast focus segmentation model.
As shown in fig. 9, the weight distribution model outputs a weight score corresponding to each center model. In the embodiment of the present application, the output feature maps of the breast focus segmentation models are multiplied by the corresponding weights and then summed to obtain the final feature map. Meanwhile, a random average module is provided to improve the generalization of the fused model: with a certain probability, set to 0.3 in the embodiment of the application, the weight of each center from the first step is replaced by 1/(number of centers). The random average module mitigates overfitting of the fused multi-center model to the current data, and balances the excessive weight bias that an overly large amount of data from some centers would otherwise cause.
Further, the loss function used to train the breast lesion segmentation model is the sum of the cross entropy loss function and the Dice loss function.
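The weighted fusion with the random average module and the combined loss can be sketched as below; binary foreground/background segmentation is assumed.

```python
import random
import torch
import torch.nn as nn

def fuse(center_maps, weights, p_uniform=0.3, training=True):
    """Weighted sum of the per-center output maps; with probability 0.3 the
    random average module replaces every weight by 1 / number of centers."""
    n = len(center_maps)
    if training and random.random() < p_uniform:
        weights = [1.0 / n] * n
    return sum(w * m for w, m in zip(weights, center_maps))

def ce_dice_loss(logits, target, eps=1e-5):
    """Sum of the cross entropy loss and the soft Dice loss."""
    ce = nn.functional.cross_entropy(logits, target)
    fg = torch.softmax(logits, dim=1)[:, 1]           # foreground probability
    inter = (fg * target).sum()
    dice = 1.0 - (2 * inter + eps) / (fg.sum() + target.sum() + eps)
    return ce + dice
```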
S5: acquiring to-be-processed breast magnetic resonance images from the plurality of centers, preprocessing and interpolating the to-be-processed breast magnetic resonance images to obtain images to be processed, and predicting the images to be processed through the target weight distribution model to obtain target weight scores corresponding to each center model.
In the embodiment of the application, when breast focus segmentation is required, the to-be-processed breast magnetic resonance images of the plurality of centers are acquired, then preprocessed and randomly cropped. The preprocessing is as in steps S11 to S15. The preprocessed to-be-processed breast magnetic resonance image is randomly cropped into data of the preset size, i.e. the patch size, to obtain the image to be processed. Because of graphics memory limitations, inference must be performed on the image in pieces; the size of the inference window equals the patch size. The window slides over the image to be processed until the whole image has been traversed, with a sliding stride of half the patch size. The inference result of each patch is a target weight score corresponding to each center model.
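The half-patch-stride sliding-window inference might be sketched as follows; averaging overlapping windows and a single-channel model output are simplifying assumptions.

```python
import itertools
import torch

def sliding_window_predict(model, volume, patch_size):
    """Slide an inference window the size of a patch over the volume with a
    stride of half the patch size, until the image is fully traversed."""
    out = torch.zeros_like(volume)
    hits = torch.zeros_like(volume)
    ranges = [list(range(0, max(s - p, 0) + 1, max(p // 2, 1))) + [max(s - p, 0)]
              for s, p in zip(volume.shape, patch_size)]
    for starts in itertools.product(*[sorted(set(r)) for r in ranges]):
        sl = tuple(slice(st, st + p) for st, p in zip(starts, patch_size))
        with torch.no_grad():
            pred = model(volume[sl][None, None])[0, 0]  # add batch/channel dims
        out[sl] += pred
        hits[sl] += 1
    return out / hits.clamp(min=1)                      # average overlapping windows
```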
S6: and carrying out prediction processing on the image to be processed through the target breast focus segmentation model to obtain a segmentation result corresponding to each center model, and carrying out weighted summation processing on the segmentation result and the corresponding target weight score to obtain a target breast focus segmentation result.
In the embodiment of the application, the image to be processed is predicted through the target breast focus segmentation model. The prediction likewise processes the images to be processed patch by patch, with an inference window the same size as the patch; the window slides over the image to be processed, with a stride of half the patch size, until the whole image has been traversed. After prediction, the target breast focus segmentation model outputs a plurality of segmentation results. Finally, the segmentation results are weighted and summed with the corresponding target weight scores to obtain the target breast focus segmentation result.
In the embodiment of the application, a plurality of breast magnetic resonance images in the center are acquired, and the breast magnetic resonance images are subjected to image preprocessing to obtain an initial image; randomly cutting the initial image to generate training data, and training a weight distribution model based on the training data to obtain a target weight distribution model, wherein the target weight distribution model comprises weight scores corresponding to a plurality of center models; performing image preprocessing on the mammary gland magnetic resonance image to obtain a basic image, and performing Gaussian smoothing filtering and image deformation processing on the basic image to obtain a target image; training the breast focus segmentation model based on the target image and the weight score to obtain a target breast focus segmentation model; acquiring a plurality of breast magnetic resonance images to be processed of the center, preprocessing and interpolating the breast magnetic resonance images to be processed to obtain images to be processed, and predicting the images to be processed through the target weight distribution model to obtain target weight scores corresponding to each center model; and carrying out prediction processing on the image to be processed through the target breast focus segmentation model to obtain a segmentation result corresponding to each center model, and carrying out weighted summation processing on the segmentation result and the corresponding target weight score to obtain a target breast focus segmentation result. According to the embodiment of the application, the weight scores corresponding to the plurality of center models are output through the weight distribution model, the breast focus segmentation model is trained based on the weight scores, so that the convergence rate in model training is improved, and meanwhile, the segmentation accuracy of breast focus images is improved due to the combination of the weight scores according to different center models and the segmentation model.
Referring to fig. 12, as an implementation of the method shown in fig. 1, the present application provides an embodiment of a segmentation apparatus for breast magnetic resonance imaging lesions, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 1, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 12, the apparatus for segmenting breast magnetic resonance image lesions of the present embodiment includes: an initial image generation unit 71, a weight distribution model training unit 72, a target image generation unit 73, a breast lesion segmentation model training unit 74, a weight score generation unit 75, and a target breast lesion segmentation result generation unit 76, wherein:
an initial image generating unit 71, configured to acquire a plurality of central magnetic resonance images of breast, and perform image preprocessing on the magnetic resonance images of breast to obtain an initial image;
the weight distribution model training unit 72 is configured to randomly crop the initial image to generate training data, and train a weight distribution model based on the training data to obtain a target weight distribution model, where the target weight distribution model includes weight scores corresponding to a plurality of center models;
a target image generating unit 73, configured to perform image preprocessing on the breast magnetic resonance image to obtain a base image, and perform gaussian smoothing filtering and image deformation processing on the base image to obtain a target image;
A breast focus segmentation model training unit 74, configured to train the breast focus segmentation model based on the target image and the weight score, to obtain a target breast focus segmentation model;
a weight score generating unit 75, configured to acquire a plurality of to-be-processed breast magnetic resonance images of the center, perform preprocessing and interpolation processing on the to-be-processed breast magnetic resonance images to obtain to-be-processed images, and perform prediction processing on the to-be-processed images through the target weight distribution model to obtain a target weight score corresponding to each center model;
and a target breast focus segmentation result generating unit 76, configured to predict the image to be processed through the target breast focus segmentation model, obtain a segmentation result corresponding to each center model, and perform weighted summation processing on the segmentation result and the corresponding target weight score, so as to obtain a target breast focus segmentation result.
Further, the initial image generation unit 71 includes:
a breast magnetic resonance image acquisition unit configured to acquire the breast magnetic resonance images of a plurality of the centers;
the image clipping unit is used for identifying that the mammary gland magnetic resonance image contains a mammary gland pixel region, clipping the mammary gland pixel region in the mammary gland magnetic resonance image and obtaining a clipped image;
The image downsampling unit is used for downsampling the cut image to a preset resolution to obtain a downsampled image;
the image normalization unit is used for subtracting the average intensity value of all voxels from the intensity value of each voxel in the downsampled image and dividing the result by the standard deviation of all voxels, so as to perform image normalization processing on the downsampled image to obtain a normalized image;
and the image enhancement unit is used for carrying out image overturning and image contrast changing processing on the standardized image to obtain the initial image.
Further, the weight distribution model training unit 72 includes:
the random cutting unit is used for randomly cutting the mammary gland magnetic resonance image of any single case in the initial image into data with a preset size, generating single training data, and generating the training data after all the initial images are randomly cut;
and the target weight distribution model generation unit is used for training the weight distribution model based on the output weight and the cross entropy loss function to obtain the target weight distribution model.
Further, the target weight assignment model generation unit includes:
the training data input unit is used for inputting the training data into the weight distribution model one by one according to the single training data;
The initial feature map generating unit is used for performing convolution processing on the single training data for a plurality of times through the encoder so as to enlarge the number of channels corresponding to the single training data to a first preset multiple and downsampling the size of the single training data to a first preset size to obtain an initial feature map;
the basic feature map generating unit is used for performing convolution processing on the initial feature map a plurality of times through the decoder so as to reduce the channel number of the initial feature map to a second preset multiple, and upsampling the size of the initial feature map to a second preset size to obtain a basic feature map;
and the initial weight score generating unit is used for outputting the output weights corresponding to the plurality of center models based on the basic feature graphs through the full-connection layer.
Further, the breast lesion segmentation model training unit 74 includes:
the random sampling data generation unit is used for randomly sampling the mammary gland magnetic resonance image of any single case in the target image to obtain random sampling data;
the focus area sampling unit is used for identifying focus areas in the mammary gland magnetic resonance image to obtain a target focus area, randomly selecting one point as a center point in the target focus area, and sampling the preset size according to the center point to obtain segmentation training data;
The breast focus segmentation model training unit is used for training the breast focus segmentation model based on the random sampling data, the segmentation training data and the weight score to obtain a target breast focus segmentation model.
Further, the breast lesion segmentation model training unit includes:
the segmentation training data sampling unit is used for inputting the segmentation training data into the breast focus segmentation model one by one, and respectively carrying out downsampling and upsampling treatment on the segmentation training data through the breast focus segmentation model to obtain a basic downsampled image and an upsampled image;
the image merging unit is used for merging the basic downsampled image and the upsampled image to obtain a merged image;
the focus segmentation result graph generating unit is used for carrying out convolution processing on the combined image through a decoding module in the breast focus segmentation model to obtain a focus segmentation result graph;
the model training unit is used for training the breast focus segmentation model based on the focus segmentation result graph and the weight score to obtain a target breast focus segmentation model.
Further, the model training unit includes:
The weight score changing unit is used for changing the weight score corresponding to each center model through the random average module to obtain a changed weight score;
the weighted feature map generating unit is used for multiplying the focus segmentation result map with the corresponding change weight score to obtain a plurality of weighted feature maps;
and the target segmentation feature map generating unit is used for adding the weighted feature maps to obtain a target segmentation feature map, and training the breast focus segmentation model based on the target segmentation feature map, the cross entropy loss function and the Dice loss function to obtain the target breast focus segmentation model.
In order to solve the technical problems, the embodiment of the application also provides computer equipment. Referring specifically to fig. 13, fig. 13 is a basic structural block diagram of a computer device according to the present embodiment.
The computer device 8 comprises a memory 81, a processor 82 and a network interface 83 which are communicatively connected to each other via a system bus. It should be noted that only a computer device 8 having the three components memory 81, processor 82 and network interface 83 is shown in the figure, but it should be understood that not all of the illustrated components need be implemented, and more or fewer components may be implemented instead. It will be appreciated by those skilled in the art that the computer device here is a device capable of automatically performing numerical calculation and/or information processing in accordance with predetermined or stored instructions, whose hardware includes, but is not limited to, microprocessors, application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), digital signal processors (Digital Signal Processor, DSP), embedded devices, and the like.
The computer device may be a desktop computer, a notebook computer, a palm computer, a cloud server, or the like. The computer device can perform man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad or voice control equipment and the like.
The memory 81 includes at least one type of readable storage medium including flash memory, hard disk, multimedia card, card memory (e.g., SD or DX memory, etc.), random Access Memory (RAM), static Random Access Memory (SRAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), programmable Read Only Memory (PROM), magnetic memory, magnetic disk, optical disk, etc. In some embodiments, the memory 81 may be an internal storage unit of the computer device 8, such as a hard disk or memory of the computer device 8. In other embodiments, the memory 81 may also be an external storage device of the computer device 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the computer device 8. Of course, the memory 81 may also include both internal storage units of the computer device 8 and external storage devices. In this embodiment, the memory 81 is generally used for storing an operating system installed in the computer device 8 and various application software, such as program codes of a segmentation method of breast magnetic resonance image lesions. Further, the memory 81 may be used to temporarily store various types of data that have been output or are to be output.
The processor 82 may be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments. The processor 82 is typically used to control the overall operation of the computer device 8. In this embodiment, the processor 82 is configured to execute the program code stored in the memory 81 or process data, such as the program code for executing the method for segmenting a breast magnetic resonance image focus described above, to implement various embodiments of the method for segmenting a breast magnetic resonance image focus.
The network interface 83 may comprise a wireless network interface or a wired network interface, which network interface 83 is typically used to establish a communication connection between the computer device 8 and other electronic devices.
The present application also provides another embodiment, namely, a computer readable storage medium, where a computer program is stored, where the computer program is executable by at least one processor, so that the at least one processor performs the steps of a method for segmenting a breast magnetic resonance image focus as described above.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method of the embodiments of the present application.
It is apparent that the above-described embodiments are only some embodiments of the present application, not all of them; the preferred embodiments of the present application are shown in the drawings, which do not limit the scope of the patent claims. This application may be embodied in many different forms; these embodiments are provided so that the disclosure of the present application will be thorough and complete. Although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their features. All equivalent structures made using the content of the specification and the drawings of the application, whether applied directly or indirectly in other related technical fields, likewise fall within the scope of protection of the application.

Claims (10)

1. A method for segmenting a breast magnetic resonance imaging lesion, comprising:
acquiring breast magnetic resonance images from a plurality of centers, and performing image preprocessing on the breast magnetic resonance images to obtain an initial image;
randomly cutting the initial image to generate training data, and training a weight distribution model based on the training data to obtain a target weight distribution model, wherein the target weight distribution model comprises weight scores corresponding to a plurality of center models;
Performing image preprocessing on the mammary gland magnetic resonance image to obtain a basic image, and performing Gaussian smoothing filtering and image deformation processing on the basic image to obtain a target image;
training the breast focus segmentation model based on the target image and the weight score to obtain a target breast focus segmentation model;
acquiring to-be-processed breast magnetic resonance images from the plurality of centers, preprocessing and interpolating the to-be-processed breast magnetic resonance images to obtain images to be processed, and predicting the images to be processed through the target weight distribution model to obtain target weight scores corresponding to each center model;
and carrying out prediction processing on the image to be processed through the target breast focus segmentation model to obtain a segmentation result corresponding to each center model, and carrying out weighted summation processing on the segmentation result and the corresponding target weight score to obtain a target breast focus segmentation result.
2. The method of claim 1, wherein acquiring breast magnetic resonance images from a plurality of centers and performing image preprocessing on the breast magnetic resonance images to obtain initial images comprises:
acquiring the breast magnetic resonance images from the plurality of centers;
identifying the breast pixel region contained in each breast magnetic resonance image, and cropping the breast pixel region out of the image to obtain a cropped image;
downsampling the cropped image to a preset resolution to obtain a downsampled image;
normalizing the downsampled image by subtracting the mean intensity of all voxels from each voxel intensity and dividing by the standard deviation of all voxels, to obtain a normalized image;
and applying image flipping and contrast changes to the normalized image to obtain the initial image.
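The preprocessing chain of claim 2 (crop the breast region, downsample to a preset resolution, z-score normalize) can be sketched as follows; the target shape, the SciPy-based resampling, and the epsilon guard are assumptions, and the breast bounding box is taken as given by an upstream detector:

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_volume(volume, breast_bbox, target_shape=(64, 128, 128)):
    """Claim-2 pipeline sketch: crop breast region -> downsample -> z-score.

    volume       : (D, H, W) breast MR intensity array.
    breast_bbox  : tuple of three slices locating the breast pixel region.
    target_shape : stands in for the claim's 'preset resolution'.
    """
    cropped = volume[breast_bbox]
    factors = [t / s for t, s in zip(target_shape, cropped.shape)]
    down = zoom(cropped, factors, order=1)              # linear resampling
    return (down - down.mean()) / (down.std() + 1e-8)   # voxel-wise z-score

# The flipping and contrast-change augmentations could then be, e.g.:
# flipped = normalized[:, :, ::-1]          # left-right flip
# contrast = normalized * 1.2               # simple contrast scaling
```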
3. The method of claim 1, wherein randomly cropping the initial images to generate training data and training the weight distribution model on the training data to obtain the target weight distribution model comprises:
randomly cropping the breast magnetic resonance image of any single case in the initial images to a preset size to generate a single training sample, the training data being generated once all initial images have been randomly cropped;
feeding the training data into the weight distribution model one sample at a time for feature extraction to obtain output weights;
and training the weight distribution model with the output weights and a cross-entropy loss function to obtain the target weight distribution model.
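A sketch of the random cropping in claim 3; the 32×64×64 patch size is assumed, since the claim only speaks of a preset size:

```python
import numpy as np

def random_crop(volume, patch_size=(32, 64, 64)):
    """Random crop of one case to the preset size of claim 3 (size assumed).
    Assumes the volume is at least patch-sized along every axis."""
    starts = [np.random.randint(0, s - p + 1)
              for s, p in zip(volume.shape, patch_size)]
    return volume[tuple(slice(st, st + p)
                        for st, p in zip(starts, patch_size))]
```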
4. The method of claim 3, wherein the weight distribution model comprises an encoder, a decoder, and a fully connected layer, and feeding the training data into the weight distribution model one sample at a time for feature extraction comprises:
feeding the training data into the weight distribution model one sample at a time;
applying several convolutions to each training sample through the encoder, so that its number of channels increases by a first preset multiple and its size is downsampled to a first preset size, to obtain an initial feature map;
applying several convolutions to the initial feature map through the decoder, so that its number of channels decreases by a second preset multiple and its size is upsampled to a second preset size, to obtain a basic feature map;
and outputting, through the fully connected layer, the output weights corresponding to the plurality of center models based on the basic feature map.
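A hedged PyTorch sketch of the claim-4 architecture. The claim fixes only the channel-and-size behavior of the encoder and decoder and the fully connected output; the depths, channel counts, global average pooling before the fully connected layer, and the number of centers are all illustrative assumptions:

```python
import torch
import torch.nn as nn

class WeightDistributionNet(nn.Module):
    """Sketch of the claim-4 weight distribution model: an encoder that widens
    channels while downsampling, a decoder that narrows channels while
    upsampling, and a fully connected layer that emits one weight score per
    center model."""

    def __init__(self, in_ch=1, num_centers=3):
        super().__init__()
        self.encoder = nn.Sequential(               # channels up, size down
            nn.Conv3d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(               # channels down, size up
            nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(16, 8, 2, stride=2), nn.ReLU(inplace=True),
        )
        self.fc = nn.Linear(8, num_centers)         # one score per center model

    def forward(self, x):                           # x: (B, 1, D, H, W)
        feat = self.decoder(self.encoder(x))        # the 'basic feature map'
        pooled = feat.mean(dim=(2, 3, 4))           # global average pooling
        return self.fc(pooled)                      # output weight scores

# Training against center labels with cross-entropy, as in claim 3:
# loss = nn.CrossEntropyLoss()(WeightDistributionNet()(patch), center_ids)
```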
5. The method of claim 1, wherein training the breast lesion segmentation model on the target images and the weight scores to obtain the target breast lesion segmentation model comprises:
randomly sampling the breast magnetic resonance image of any single case in the target images to obtain randomly sampled data;
identifying the lesion region in the breast magnetic resonance image to obtain a target lesion region, randomly selecting a point in the target lesion region as a center point, and sampling a patch of a preset size around that center point to obtain segmentation training data;
and training the breast lesion segmentation model on the randomly sampled data, the segmentation training data, and the weight scores to obtain the target breast lesion segmentation model.
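The lesion-centered sampling in claim 5 can be sketched as below; the patch size and the clamping of patches to the volume boundary are assumptions:

```python
import numpy as np

def lesion_centered_patch(volume, lesion_mask, patch_size=(32, 64, 64)):
    """Sketch of claim 5's sampling: pick a random voxel inside the target
    lesion region as the center point and cut a preset-size patch around it."""
    coords = np.argwhere(lesion_mask > 0)               # target lesion region
    center = coords[np.random.randint(len(coords))]     # random center point
    slices = []
    for c, p, s in zip(center, patch_size, volume.shape):
        start = int(np.clip(c - p // 2, 0, s - p))      # keep patch in bounds
        slices.append(slice(start, start + p))
    return volume[tuple(slices)]
```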
6. The method of claim 5, wherein training the breast lesion segmentation model on the randomly sampled data, the segmentation training data, and the weight scores to obtain the target breast lesion segmentation model comprises:
feeding the segmentation training data into the breast lesion segmentation model one sample at a time, and applying downsampling and upsampling to the segmentation training data through the breast lesion segmentation model to obtain a basic downsampled image and an upsampled image, respectively;
merging the basic downsampled image and the upsampled image to obtain a merged image;
convolving the merged image through a decoding module of the breast lesion segmentation model to obtain a lesion segmentation result map;
and training the breast lesion segmentation model on the lesion segmentation result map and the weight scores to obtain the target breast lesion segmentation model.
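A sketch of the claim-6 merge-and-decode step. The claim only says the basic downsampled image and the upsampled image are merged and then convolved by a decoding module; channel-wise concatenation (a U-Net-style skip connection) and the sigmoid output are assumptions:

```python
import torch
import torch.nn as nn

def decode_merged(down_feat, up_feat, decoder_conv):
    """Concatenate the downsampling-path and upsampling-path features and
    convolve the merged tensor into a lesion segmentation result map.
    Both inputs must share spatial dimensions."""
    merged = torch.cat([down_feat, up_feat], dim=1)  # merge along channels
    return torch.sigmoid(decoder_conv(merged))       # per-voxel lesion prob.

# decoder_conv might be, e.g.:
# decoder_conv = nn.Conv3d(in_channels=64, out_channels=1, kernel_size=1)
```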
7. The method of claim 6, wherein training the breast lesion segmentation model on the lesion segmentation result map and the weight scores to obtain the target breast lesion segmentation model comprises:
perturbing the weight score corresponding to each center model through a random averaging module to obtain perturbed weight scores;
multiplying the lesion segmentation result maps by the corresponding perturbed weight scores to obtain a plurality of weighted feature maps;
and summing the weighted feature maps to obtain a target segmentation feature map, and training the breast lesion segmentation model with the segmentation training data, a cross-entropy loss function, and a Dice loss function to obtain the target breast lesion segmentation model.
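A sketch of the claim-7 training objective, combining cross-entropy (binary cross-entropy here, for a binary lesion mask) with Dice loss over the weighted sum of result maps. The uniform-noise perturbation stands in for the patent's random averaging module, whose exact form the claim does not specify:

```python
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over probability maps in [0, 1]."""
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def segmentation_loss(result_maps, weight_scores, target):
    """Perturb the per-center weight scores, scale each center's lesion
    result map by its perturbed score, sum the weighted maps, and score
    the fused map with BCE + Dice (claim 7)."""
    noisy = weight_scores + 0.05 * torch.rand_like(weight_scores)
    noisy = noisy / noisy.sum()                        # renormalize
    fused = sum(w * m for w, m in zip(noisy, result_maps)).clamp(0, 1)
    return F.binary_cross_entropy(fused, target) + dice_loss(fused, target)
```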
8. An apparatus for segmenting lesions in breast magnetic resonance images, comprising:
an initial image generation unit, configured to acquire breast magnetic resonance images from a plurality of centers and perform image preprocessing on the breast magnetic resonance images to obtain initial images;
a weight distribution model training unit, configured to randomly crop the initial images to generate training data and to train a weight distribution model on the training data to obtain a target weight distribution model, wherein the target weight distribution model yields weight scores corresponding to a plurality of center models;
a target image generation unit, configured to perform image preprocessing on the breast magnetic resonance images to obtain basic images and to apply Gaussian smoothing filtering and image deformation to the basic images to obtain target images;
a breast lesion segmentation model training unit, configured to train a breast lesion segmentation model on the target images and the weight scores to obtain a target breast lesion segmentation model;
a weight score generation unit, configured to acquire to-be-processed breast magnetic resonance images from the plurality of centers, preprocess and interpolate them to obtain to-be-processed images, and run the target weight distribution model on the to-be-processed images to obtain a target weight score for each center model;
and a target breast lesion segmentation result generation unit, configured to run the target breast lesion segmentation model on the to-be-processed images to obtain a segmentation result for each center model, and to compute the weighted sum of the segmentation results and the corresponding target weight scores to obtain a target breast lesion segmentation result.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the method for segmenting lesions in breast magnetic resonance images of any one of claims 1 to 7.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method for segmenting lesions in breast magnetic resonance images of any one of claims 1 to 7.
CN202311006012.0A 2023-08-10 2023-08-10 Method, device, equipment and medium for segmenting breast magnetic resonance image focus Pending CN116993758A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311006012.0A CN116993758A (en) 2023-08-10 2023-08-10 Method, device, equipment and medium for segmenting breast magnetic resonance image focus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311006012.0A CN116993758A (en) 2023-08-10 2023-08-10 Method, device, equipment and medium for segmenting breast magnetic resonance image focus

Publications (1)

Publication Number Publication Date
CN116993758A true CN116993758A (en) 2023-11-03

Family

ID=88528185

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311006012.0A Pending CN116993758A (en) 2023-08-10 2023-08-10 Method, device, equipment and medium for segmenting breast magnetic resonance image focus

Country Status (1)

Country Link
CN (1) CN116993758A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination