WO2020182036A1 - Image processing method and apparatus, server and storage medium - Google Patents

Image processing method and apparatus, server and storage medium

Info

Publication number
WO2020182036A1
WO2020182036A1 · PCT/CN2020/077772
Authority
WO
WIPO (PCT)
Prior art keywords
image
distance
pixel
point
background point
Prior art date
Application number
PCT/CN2020/077772
Other languages
English (en)
French (fr)
Inventor
王亮
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2020182036A1
Priority to US17/221,595 (US11715203B2)

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • G06T7/0014Biomedical image inspection using an image reference approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/05Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves 
    • A61B5/055Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves  involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/43Detecting, measuring or recording for evaluating the reproductive systems
    • A61B5/4306Detecting, measuring or recording for evaluating the reproductive systems for evaluating the female reproductive systems, e.g. gynaecological evaluations
    • A61B5/4312Breast evaluation or disorder diagnosis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/04Indexing scheme for image data processing or generation, in general involving 3D image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20101Interactive definition of point of interest, landmark or seed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30068Mammography; Breast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Definitions

  • This application relates to the field of computer technology, and in particular to an image processing method, apparatus, server, and storage medium.
  • Computer Vision (CV) is the science of how to make machines "see"; more specifically, it refers to using cameras and computers in place of human eyes to identify, track, and measure targets, and to further processing the resulting graphics so that they become images better suited for human observation or for transmission to instruments for inspection.
  • With the development of computer vision technology, it is applied more and more commonly in an increasing number of fields to process images and identify whether regions of interest are present in those images.
  • For example, in the medical field, computer vision technology can be applied to medical images, such as breast MRI (Magnetic Resonance Imaging) images, to identify whether a tumor region is present.
  • According to various embodiments of this application, an image processing method, apparatus, server, and storage medium are provided.
  • An image processing method, executed by a server, including: inputting an original image into a first segmentation model and outputting a first segmented image annotated with an initial target region, the first segmentation model being used to predict the initial target region from the original image; and determining a foreground point and a background point according to the initial target region of the first segmented image;
  • obtaining a foreground-point distance image and a background-point distance image by calculating the image distance between each pixel in the original image and the foreground point and the background point respectively, where the image distance is determined from the coordinate distance and the gray-value distance between the pixel and the foreground point or the background point; and
  • inputting the original image, the foreground-point distance image, and the background-point distance image into a second segmentation model, and outputting a second segmented image annotated with the target region.
  • An image processing device comprising:
  • a processing module configured to input the original image into the first segmentation model, and output the first segmentation image marked with the initial target area, the first segmentation model is used to predict the initial target area from the original image;
  • a determining module configured to determine a foreground point and a background point according to the initial target region of the first segmented image;
  • an obtaining module configured to obtain a foreground-point distance image and a background-point distance image by calculating the image distance between each pixel in the original image and the foreground point and the background point respectively, where the image distance is determined from the coordinate distance and the gray-value distance between the pixel and the foreground point or the background point;
  • a non-volatile storage medium storing computer-readable instructions.
  • when the computer-readable instructions are executed by one or more processors, the one or more processors are caused to perform the steps of the image processing method.
  • a computer device includes a memory and a processor.
  • the memory stores computer-readable instructions.
  • when the computer-readable instructions are executed by the processor, the processor is caused to perform the steps of the image processing method.
  • Fig. 3 is a flowchart of an image processing method provided by an embodiment of the present application.
  • Fig. 5 is a schematic diagram of a first segmented image provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of manually selecting an ROI area according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of foreground points and background points provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a background point distance image provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of various images input during segmentation in the image processing method provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a third segmented image obtained by manually correcting a second segmented image according to an embodiment of the present application.
  • FIG. 12 is a schematic diagram of another first segmented image provided by an embodiment of the present application.
  • FIG. 13 is an overall flowchart of a model training process and an image processing process provided by an embodiment of the application
  • FIG. 14 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application.
  • Fig. 15 shows a server for image processing according to an exemplary embodiment.
  • a foreground point refers to a pixel in the 3D (three-dimensional) image that belongs to the tumor region but was not predicted to be inside it; it can be denoted fg.
  • a background point refers to a pixel in the 3D image that does not belong to the tumor region but was predicted to be inside it; it can be denoted bg.
  • the embodiment of the present application provides a flow chart of a method for constructing a first segmentation model. Referring to FIG. 1, the method flow provided by the embodiment of the present application includes:
  • the server obtains multiple training sample images and multiple labeled images with target regions.
  • the target area refers to the area that needs to be predicted during the image segmentation process, and in the embodiment of the present application is the tumor area on the breast MRI image.
  • the training sample image is an unlabeled breast MRI image, which is used to train the segmentation model.
  • an annotated image corresponds to a training sample image and is marked with the target region; like the training sample images, the annotated images are also used to train the segmentation model.
  • when acquiring the multiple training sample images and the multiple annotated images, the server may proceed as follows: acquire breast MRI images of multiple patients from the Internet, and use the acquired breast MRI images as training sample images.
  • the obtained training sample images are provided to the doctor, and the doctor manually annotates the tumor area on each training sample image to obtain multiple labeled images.
  • the server trains the initial first segmentation model according to multiple training sample images and multiple labeled images to obtain the first segmentation model.
  • the initial first segmentation model may be a deep learning neural network for object segmentation, such as 3D u-net, 3D v-net, and so on.
  • during training, the server can set an initial value for each model parameter, construct a first objective loss function for the initial first segmentation model, input the multiple training sample images into the initial first segmentation model, and output segmentation results; the segmentation result of each training sample image and the corresponding annotated image are used to calculate the function value of the first objective loss function. If the function value of the first objective loss function does not meet the first threshold condition, the server adjusts the model parameters of the initial first segmentation model and continues to calculate the function value of the first objective loss function until the obtained function value meets the first threshold condition.
  • the first threshold condition can be set by the server according to the processing accuracy.
  • further, when the function value does not meet the first threshold condition, the server adopts the BP (Back Propagation) algorithm to adjust the model parameters of the initial first segmentation model and, based on the adjusted parameter values of the model parameters, continues to calculate the function value of the first objective loss function until the calculated function value meets the first threshold condition.
  • the BP algorithm mainly consists of two processes: forward propagation of the signal and backward propagation of the error. The adjustment of weights and thresholds is repeated until the preset number of training iterations is reached or the output error falls to an acceptable level.
  • the server obtains the parameter value of each model parameter when the first threshold condition is met, and uses the initial first segmentation model corresponding to the parameter value of each model parameter when the first threshold condition is met as the first segmentation model obtained by training.
  • the first segmentation model is used to predict the initial target area from the original image, which is also an unlabeled breast MRI image.
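  • As an illustration of the training procedure described above, the following is a minimal, non-authoritative sketch in Python/PyTorch. The patent discloses no code, so everything here is an assumption: TinySegNet3D is a stand-in for a real 3D u-net/v-net, binary cross-entropy is one plausible choice for the first objective loss function, and the threshold value is arbitrary.

    import torch
    import torch.nn as nn

    class TinySegNet3D(nn.Module):
        """Stand-in for a 3D u-net/v-net segmentation network (assumption)."""
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv3d(1, 1, kernel_size=3, padding=1)

        def forward(self, x):
            return torch.sigmoid(self.conv(x))  # per-voxel tumor probability

    def train_first_model(samples, labels, threshold=0.05, max_steps=1000):
        """samples, labels: float tensors of shape (N, 1, D, H, W); labels are 0./1. masks."""
        model = TinySegNet3D()
        loss_fn = nn.BCELoss()                               # "first objective loss function"
        opt = torch.optim.SGD(model.parameters(), lr=0.01)   # BP-based parameter adjustment
        for _ in range(max_steps):
            pred = model(samples)                            # signal forward propagation
            loss = loss_fn(pred, labels)                     # compare with annotated images
            if loss.item() < threshold:                      # "first threshold condition"
                break
            opt.zero_grad()
            loss.backward()                                  # error back propagation
            opt.step()                                       # adjust model parameters
        return model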
  • the embodiment of the present application provides a flow chart of a method for constructing a second segmentation model. Referring to FIG. 2, the process of the method provided by the embodiment of the present application includes:
  • the server acquires a plurality of training sample images and a plurality of annotated images with target regions.
  • the server obtains a training-sample foreground-point distance image and a training-sample background-point distance image according to each training sample image and the corresponding annotated image.
  • the server inputs each training sample image into the first segmentation model and outputs a first segmentation training image; by comparing the first segmentation training image corresponding to each training sample image with the annotated image, the server automatically selects a foreground point and a background point from each training sample image, then obtains the training-sample foreground-point distance image based on the selected foreground point and the training-sample background-point distance image based on the selected background point.
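  • A minimal sketch of this automatic point selection with NumPy is shown below. The patent states only that the points are chosen by comparing the prediction with the annotation; taking the first mismatching voxel of each kind is an assumed tie-breaking rule.

    import numpy as np

    def select_points(pred_mask, label_mask):
        """pred_mask, label_mask: binary 3D arrays (1 = tumor).
        Foreground point: a labeled tumor voxel missed by the prediction.
        Background point: a predicted tumor voxel that is not labeled tumor."""
        fg_candidates = np.argwhere((label_mask == 1) & (pred_mask == 0))
        bg_candidates = np.argwhere((label_mask == 0) & (pred_mask == 1))
        p_fg = tuple(fg_candidates[0]) if len(fg_candidates) else None
        p_bg = tuple(bg_candidates[0]) if len(bg_candidates) else None
        return p_fg, p_bg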
  • for any training sample image, the server obtains the training-sample foreground-point distance image based on the selected foreground point as follows:
  • the server obtains the image distance between each pixel in the training sample image and the foreground point.
  • the server obtains the coordinates of each pixel and of the foreground point in the three-dimensional coordinate system.
  • the breast MRI data is 3D data.
  • the server establishes a three-dimensional coordinate system for the 3D data, and obtains the coordinates of each pixel in the training sample image and the coordinates of the foreground point in that coordinate system.
  • the server obtains the coordinate distance between each pixel and the foreground point according to the coordinates of each pixel and the coordinates of the foreground point.
  • the server obtains the gray value of each pixel and of the foreground point.
  • the gray value is also called the brightness value or intensity value.
  • the server obtains the gray-scale distance between each pixel and the foreground point according to the gray value of each pixel and the gray value of the foreground point.
  • the server obtains the image distance between each pixel in the training sample image and the foreground point according to the coordinate distance and the gray-scale distance between each pixel and the foreground point.
  • specifically, the server can obtain the image distance D_P0 between each pixel in the training sample image and the foreground point by applying the following formula: D_P0 = a·√((x − x0)² + (y − y0)² + (z − z0)²) + (1 − a)·|I(x, y, z) − I(x0, y0, z0)|, where (x0, y0, z0) and I(x0, y0, z0) are the coordinates and gray value of the foreground point, (x, y, z) and I(x, y, z) are the coordinates and gray value of the pixel, and a is a weight parameter.
  • the server obtains the foreground-point distance image according to the image distance between each pixel and the foreground point.
  • specifically, the server obtains the training-sample foreground distance image by taking the image distance between each pixel and the foreground point as the pixel value of that pixel.
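  • The computation just described can be sketched in NumPy as follows. The published formula appears only as an embedded image, so the convex combination of coordinate distance and gray-value distance below (with weight a, defaulting to the preferred value 0.6 given later) is a reconstruction from the surrounding definitions, not verified code.

    import numpy as np

    def distance_image(volume, seed, a=0.6):
        """volume: 3D array of gray values; seed: (x0, y0, z0) voxel index.
        Returns an image whose value at each voxel is its image distance to the
        seed point (a foreground point or a background point)."""
        x0, y0, z0 = seed
        xs, ys, zs = np.meshgrid(np.arange(volume.shape[0]),
                                 np.arange(volume.shape[1]),
                                 np.arange(volume.shape[2]), indexing="ij")
        coord_dist = np.sqrt((xs - x0) ** 2 + (ys - y0) ** 2 + (zs - z0) ** 2)
        gray_dist = np.abs(volume - volume[x0, y0, z0])
        return a * coord_dist + (1 - a) * gray_dist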
  • for any training sample image, the server obtains the training-sample background-point distance image based on the selected background point as follows:
  • the server obtains the coordinates of each pixel point and the background point in the three-dimensional coordinate system.
  • the breast MRI data is 3D data.
  • the server establishes a three-dimensional coordinate system for the 3D data, and obtains the coordinates of each pixel in the training sample image in the three-dimensional coordinate system and the coordinates of the background point in the three-dimensional coordinate system.
  • the server obtains the coordinate distance between each pixel point and the background point according to the coordinates of each pixel point and the coordinates of the background point.
  • the server obtains the gray value of each pixel and background point.
  • the server obtains the grayscale distance between each pixel point and the background point according to the gray value of each pixel and the gray value of the background point.
  • the server obtains the image distance between each pixel point and the background point in the training sample image according to the coordinate distance and gray-scale distance between each pixel point and the background point.
  • the server can obtain the image distance D_P0′ between each pixel in the training sample image and the background point by applying the following formula according to the coordinate distance and gray-scale distance between each pixel and the background point: D_P0′ = a·√((x′ − x0′)² + (y′ − y0′)² + (z′ − z0′)²) + (1 − a)·|I(x′, y′, z′) − I(x0′, y0′, z0′)|, where (x0′, y0′, z0′) is the background point and (x′, y′, z′) is the pixel.
  • a is the weight parameter, and the value range is (0, 1). Preferably, the value of a may be 0.6.
  • the server obtains the background point distance image of the training sample according to the image distance between each pixel point and the background point.
  • the server obtains the training-sample background distance image by taking the image distance between each pixel and the background point as the pixel value of that pixel.
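  • Because the background-point computation mirrors the foreground-point computation, the hypothetical distance_image function sketched above can serve for both seed types:

    fg_dist = distance_image(volume, p_fg)   # training-sample foreground-point distance image
    bg_dist = distance_image(volume, p_bg)   # training-sample background-point distance image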
  • the server trains the initial second segmentation model according to the multiple training sample images, the multiple annotated images, the multiple training-sample foreground-point distance images, and the multiple training-sample background-point distance images to obtain the second segmentation model.
  • the initial second segmentation model may be a deep learning neural network for object segmentation, such as 3D u-net, 3D v-net, and so on.
  • during training, the server sets an initial value for each model parameter and constructs a second objective loss function for the initial second segmentation model; it inputs the multiple training sample images, the multiple training-sample foreground-point distance images, and the multiple training-sample background-point distance images into the initial second segmentation model, outputs segmentation results, and calculates the function value of the second objective loss function from the segmentation result of each training sample image and the corresponding annotated image.
  • the server adjusts the model parameters of the initial second segmentation model and continues to calculate the function value of the second objective loss function until the obtained function value meets the second threshold condition.
  • the second threshold condition can be set by the server according to the processing accuracy.
  • the server uses the BP algorithm to adjust the model parameters of the initial second segmentation model, and continues to calculate the function value of the second objective loss function based on the adjusted parameter values of each model parameter until the calculated function value satisfies the second threshold condition.
  • the server obtains the parameter value of each model parameter when the second threshold condition is met, and uses the initial second segmentation model corresponding to the parameter value of each model parameter when the second threshold condition is met as the second segmentation model obtained by training.
  • the second segmentation model is used to predict the target region from the original image (that is, the unannotated breast MRI image) based on the foreground-point distance image and the background-point distance image.
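  • The patent does not specify how the three inputs are combined before entering the second segmentation model; stacking them as channels of a single tensor, as sketched below, is one common and assumed choice.

    import numpy as np

    def second_model_input(volume, fg_dist, bg_dist):
        """Stack the original image and the two distance images as channels.
        Channel stacking is an assumption; the patent only states that all
        three images are input into the second segmentation model."""
        x = np.stack([volume, fg_dist, bg_dist], axis=0)  # shape (3, D, H, W)
        return x[np.newaxis]                              # batch dim -> (1, 3, D, H, W)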
  • the embodiment of the present application provides a flowchart of an image segmentation method. Referring to FIG. 3, the process of the method provided in the embodiment of the present application includes:
  • the server inputs the original image into the first segmentation model, and outputs the first segmented image marked with the initial target area.
  • the server may mark the initial target area on the first segmented image with a color different from the normal tissue area, for example, red, green, etc. Since the initial target area predicted by the first segmentation model includes all possible areas predicted, the prediction result has a large error, and it needs to be segmented again through a subsequent process.
  • the pixels predicted to be tumor can be marked as 1, and the pixels predicted to be non-tumor can be marked as 0, so the first segmented image is actually a binarized image.
  • Figure 4 shows three views of a breast MRI image: the left image in Figure 4 is a cross-sectional view of the breast MRI image, the upper-right image is a sagittal view, and the lower-right image is a coronal view of the breast MRI image; the circular area at the intersection of the dotted lines in Figure 4 is the tumor region.
  • Fig. 5 shows the first segmented image obtained by processing the breast MRI image in Fig. 4 with the first segmentation model: the upper-left image in Fig. 5 is a cross-sectional view with the tumor region marked, the lower-left image is a 3D rendering of the annotated image, the upper-right image is a sagittal view with the tumor region marked, and the lower-right image is a coronal view with the tumor region marked; the circular area at the intersection of the dashed lines in Fig. 5 is the tumor region. Analysis of Fig. 5 shows that the largest predicted region is actually the heart region rather than a tumor region; because the prediction contains large errors, further segmentation in subsequent steps is required.
  • the server determines the foreground point and the background point according to the initial target region of the first segmented image.
  • step 302 may specifically include: determining a region of interest (ROI) from the breast MRI image to be annotated based on the initial tumor region, the ROI being the effective region for annotating the image, with annotation results outside the ROI considered invalid; and selecting the foreground point and the background point within the ROI.
  • when the designated area is determined manually, multiple pixels can be selected on the three views of the original image, and the area enclosed by the selected pixels is determined as the designated area; see Figure 6 (a code sketch of this ROI computation follows the figure description below).
  • in Figure 6, the upper-left image is a cross-sectional view of the original image, the lower-left image is a 3D view of the original image, the upper-right image is a sagittal view of the original image, and the lower-right image is a coronal view of the original image.
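  • Following the ROI selection just described (and detailed further in the Description below, where two points on the cross-sectional view fix the x- and y-ranges and two points on the sagittal view fix the z-range), a minimal sketch of deriving the ROI box and invalidating predictions outside it might look like this; all helper names are hypothetical.

    import numpy as np

    def roi_box(p1, p2, z1, z2):
        """p1, p2: (x, y) pixels picked on the cross-sectional view.
        z1, z2: z-coordinates picked on the sagittal view."""
        (x1, y1), (x2, y2) = p1, p2
        return (sorted((x1, x2)), sorted((y1, y2)), sorted((z1, z2)))

    def mask_outside_roi(pred_mask, box):
        """Predictions outside the ROI are treated as invalid (set to 0)."""
        xr, yr, zr = box
        valid = np.zeros_like(pred_mask)
        valid[xr[0]:xr[1] + 1, yr[0]:yr[1] + 1, zr[0]:zr[1] + 1] = 1
        return pred_mask * valid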
  • after the designated area is determined, the doctor can manually select foreground points and background points from the designated area and input them into the server; the server obtains the foreground and background points by detecting the doctor's operations. A foreground point can be denoted p_fg(x, y, z), where fg stands for foreground, and a background point can be denoted p_bg(x, y, z), where bg stands for background.
  • for example, enlarging the region in the upper-left image of Figure 6 yields Figure 7, in which the white highlighted area is actually the tumor region; the doctor selects a pixel inside the region that was not predicted as the foreground point, and a pixel inside the wrongly predicted region as the background point.
  • the server obtains the foreground-point distance image by calculating the image distance between each pixel in the original image and the foreground point.
  • based on the foreground point determined from the first segmented image, the server can obtain the foreground-point distance image by calculating the image distance between each pixel in the original image and the foreground point, specifically as follows:
  • the server obtains the image distance between each pixel in the original image and the foreground point:
  • the server obtains the coordinates of each pixel and of the foreground point in the three-dimensional coordinate system;
  • the server obtains the coordinate distance between each pixel and the foreground point according to the coordinates of each pixel and the coordinates of the foreground point;
  • the server obtains the gray value of each pixel and of the foreground point;
  • the server obtains the gray-scale distance between each pixel and the foreground point according to the gray value of each pixel and the gray value of the foreground point;
  • the server obtains the image distance between each pixel in the original image and the foreground point according to the coordinate distance and the gray-scale distance between each pixel and the foreground point.
  • the server obtains the foreground-point distance image according to the image distance between each pixel and the foreground point.
  • in a specific embodiment, the server can also refer to steps 3031 to 30315 to calculate the image distance between each pixel in the ROI and the foreground point, and obtain the foreground-point distance image according to the image distance between each pixel and the foreground point.
  • the server obtains the foreground-point distance image by taking the image distance between each pixel and the foreground point as the pixel value of that pixel. For any pixel in the original image, the closer the pixel is to the foreground point, the smaller the image distance between them and the darker the pixel appears in the foreground-point distance image; conversely, the farther the pixel is from the foreground point, the larger the image distance and the brighter the pixel appears in the foreground-point distance image.
  • Figure 8 shows the foreground-point distance image: the left image in Figure 8 is the cross-sectional foreground-point distance image, the upper-right image is the sagittal foreground-point distance image, and the lower-right image is the coronal foreground-point distance image; the intersection of the dotted lines in Figure 8 marks the position of the foreground point.
  • the server obtains the background point distance image by calculating the image distance between each pixel point in the original image and the background point.
  • the server can obtain the background point distance image by calculating the image distance between each pixel point in the original image and the background point. Specific steps can be taken as follows:
  • the server obtains the image distance between each pixel point in the original image and the background point.
  • the server obtains the coordinates of each pixel point and the background point in the three-dimensional coordinate system.
  • the server obtains the coordinate distance between each pixel point and the background point according to the coordinates of each pixel point and the coordinates of the background point.
  • the server obtains the gray value of each pixel point and the background point.
  • the server obtains the grayscale distance between each pixel point and the background point according to the grayscale value of each pixel point and the grayscale value of the background point.
  • the server obtains the image distance between each pixel point and the background point in the original image according to the coordinate distance and the grayscale distance between each pixel point and the background point.
  • the server obtains a background point distance image according to the image distance between each pixel point and the background point.
  • in a specific embodiment, the server can also refer to steps 3041 to 30415 to calculate the image distance between each pixel in the ROI and the background point, and obtain the background-point distance image according to the image distance between each pixel and the background point.
  • the server can obtain the background-point distance image by taking the image distance between each pixel and the background point as the pixel value of that pixel. For any pixel in the original image, the closer the pixel is to the background point, the smaller the image distance between them and the darker the pixel appears in the background-point distance image; conversely, the farther the pixel is from the background point, the larger the image distance and the brighter the pixel appears in the background-point distance image.
  • Figure 9 shows the background-point distance image: the left image in Figure 9 is the cross-sectional background-point distance image, the upper-right image is the sagittal background-point distance image, and the lower-right image is the coronal background-point distance image; the intersection of the dotted lines in Figure 9 marks the position of the background point.
  • the server inputs the original image, the foreground-point distance image, and the background-point distance image into the second segmentation model, and outputs the second segmented image annotated with the target region.
  • step 305 may specifically include inputting the breast MRI image to be annotated, the foreground-point distance image, and the background-point distance image into the second segmentation model and outputting the tumor region.
  • the second segmentation model is used to mark the tumor region from the breast MRI image based on the breast MRI image, the foreground-point distance image, and the background-point distance image.
  • Fig. 11 shows a third segmented image obtained by manual correction: the upper-left image in Fig. 11 is a cross-sectional view of the third segmented image, the upper-right image is a sagittal view of the third segmented image, the lower-left image is a 3D view of the annotated image, and the lower-right image is a coronal view of the third segmented image.
  • compared with a fully manual annotation process, the manual correction process greatly reduces the workload, and the resulting third segmented image has higher accuracy; it can be processed into annotated images for training the first segmentation model and the second segmentation model, thereby improving the accuracy of both models.
  • as this iteration proceeds, the accuracy of the trained first and second segmentation models becomes higher and higher and the output segmentation results become more accurate, without the need for extensive manual modification.
  • the final segmentation result can be obtained through a small amount of correction, and the manual workload is greatly reduced under the premise of ensuring the accuracy of the segmentation.
  • Figure 12 is the first segmented image output by using the retrained first segmentation model.
  • after retraining, the first segmentation model predicts only the two tumor regions on the left and right breasts and no longer predicts the heart region as a tumor region, which is more accurate than the segmentation result before retraining.
  • the segmentation results in Figure 12 can be used for image diagnosis after a small amount of manual correction.
  • FIG. 13 is an overall flow chart of the construction of the first segmentation model, the second segmentation model and the image processing method in an embodiment of the application.
  • the training phase using labeled data includes the following steps:
  • Step 1: Obtain multiple patient images and annotate them to obtain multiple annotated images. Step 2: Based on the multiple patient images and the multiple annotated images, use deep learning segmentation algorithm A to train a model, obtaining the algorithm-A prediction model, that is, the first segmentation model in the embodiments of this application.
  • Step 3: According to the multiple patient images and the multiple annotated images, automatically generate multiple foreground-point distance images and multiple background-point distance images.
  • Step 4: Based on the multiple patient images, the multiple annotated images, the multiple foreground-point distance images, and the multiple background-point distance images, use deep learning segmentation algorithm B to train a model, obtaining the algorithm-B prediction model, that is, the second segmentation model in the embodiments of this application.
  • In the application phase, Step 1: For a new patient image, input it into the algorithm-A prediction model and output the preliminary segmentation prediction result for the new patient image, that is, the first segmented image in the embodiments of this application.
  • Step 3: Manually select the foreground point and the background point within the ROI, and generate the foreground-point distance image and the background-point distance image.
  • Step 5: Input the new patient image, the foreground-point distance image, and the background-point distance image into the algorithm-B prediction model and output the algorithm-B segmentation result, that is, the second segmented image in the embodiments of this application.
  • Step 6 Manually correct the segmentation result of Algorithm B to obtain the final tumor segmentation result, which is the third segmented image in the embodiment of the present application.
  • the labeled image (ground truth data) obtained by processing the third segmented image is used to train the algorithm prediction model A and the algorithm prediction model B.
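  • Putting the application-phase steps of Figure 13 together, the overall inference flow can be sketched as follows. model_a, model_b, and the interactive pick_points step are placeholders for the components described above, and distance_image and second_model_input refer to the hypothetical sketches given earlier.

    def segment_new_patient(volume, model_a, model_b, pick_points, a=0.6):
        """End-to-end sketch of the application phase (Steps 1-6 above)."""
        first_seg = model_a(volume)                   # Step 1: preliminary segmentation
        p_fg, p_bg = pick_points(first_seg, volume)   # ROI + manual point selection
        fg_dist = distance_image(volume, p_fg, a)     # foreground-point distance image
        bg_dist = distance_image(volume, p_bg, a)     # background-point distance image
        second_seg = model_b(second_model_input(volume, fg_dist, bg_dist))  # Step 5
        return second_seg                             # Step 6: manual correction follows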
  • the image segmentation method provided by the embodiments of this application can be applied to medical image processing scenarios, including medical image annotation scenarios, pathological image analysis scenarios, and medical tumor processing scenarios.
  • Scenario 1 When making MRI breast tumor imaging diagnosis, a doctor can use the method provided in the embodiments of this application to mark the tumor area to obtain information such as the size and shape of the tumor area, and write a patient's imaging diagnosis report based on this information.
  • Scenario 2 In the field of image processing, when annotating massive tumor data, the method provided in the embodiments of the present application may be used for annotating, which reduces the workload of manual annotation and improves the efficiency of annotation.
  • the method provided by the embodiments of this application obtains the foreground-point distance image and the background-point distance image by calculating the image distance between each pixel in the original image and the foreground point and the background point respectively. Because the image distance can be determined directly from the coordinate distance and the gray-value distance between a pixel and the foreground point or background point, there is no need to traverse the path distances of all possible paths, which reduces the amount of calculation in the image processing process, reduces resource consumption, and shortens processing time.
  • in addition, the embodiments of this application determine the designated region manually, and prediction results outside the designated region are deemed invalid, which not only improves the accuracy of image processing but also reduces the amount of calculation in subsequent processing.
  • moreover, the embodiments of this application adopt an iterative approach, retraining the first segmentation model and the second segmentation model based on the manually corrected segmentation results, which greatly improves the accuracy of the models, makes the segmentation results of the first segmentation model in particular more accurate, and reduces the workload of subsequent manual correction.
  • the processing module 1401 is configured to input the original image into the first segmentation model, and output the first segmented image labeled with the initial target area, and the first segmentation model is used to predict the initial target area from the original image.
  • the determining module 1402 is configured to determine the foreground point and the background point according to the initial target region of the first segmented image.
  • the obtaining module 1403 is configured to obtain the foreground-point distance image and the background-point distance image by calculating the image distance between each pixel in the original image and the foreground point and the background point respectively, where the image distance is determined from the coordinate distance and the gray-value distance between a pixel and the foreground point or the background point.
  • the processing module 1401 is further configured to input the original image, the foreground-point distance image, and the background-point distance image into the second segmentation model and output the second segmented image annotated with the target region.
  • the second segmentation model is used to predict the target region from the original image based on the foreground-point distance image and the background-point distance image.
  • the obtaining module 1403 is configured to: obtain the image distance between each pixel in the original image and the foreground point; obtain the foreground-point distance image according to the image distance between each pixel and the foreground point; obtain the image distance between each pixel in the original image and the background point; and obtain the background-point distance image according to the image distance between each pixel and the background point.
  • the obtaining module 1403 is configured to: obtain the coordinates of each pixel and of the foreground point in the three-dimensional coordinate system; obtain the coordinate distance between each pixel and the foreground point according to the coordinates of each pixel and the coordinates of the foreground point; obtain the gray value of each pixel and of the foreground point; obtain the gray-scale distance between each pixel and the foreground point according to the gray value of each pixel and the gray value of the foreground point; and obtain the image distance between each pixel in the original image and the foreground point according to the coordinate distance and the gray-scale distance between each pixel and the foreground point.
  • the obtaining module 1403 is configured to obtain the foreground-point distance image by taking the image distance between each pixel and the foreground point as the pixel value of each pixel.
  • the obtaining module 1403 is configured to: obtain the coordinates of each pixel and of the background point in the three-dimensional coordinate system; obtain the coordinate distance between each pixel and the background point according to the coordinates of each pixel and the coordinates of the background point; obtain the gray value of each pixel and of the background point; obtain the gray-scale distance between each pixel and the background point according to the gray value of each pixel and the gray value of the background point; and obtain the image distance between each pixel in the original image and the background point according to the coordinate distance and the gray-scale distance between each pixel and the background point.
  • the obtaining module 1403 is configured to obtain the background-point distance image by taking the image distance between each pixel and the background point as the pixel value of each pixel.
  • the determining module 1402 is configured to determine the designated area from the original image, and to obtain the foreground point and the background point from the designated area by comparing the initial target region of the first segmented image with the original image.
  • the obtaining module 1403 is configured to obtain a plurality of training sample images and a plurality of annotated images with target regions, and the training sample images and the annotated images correspond one-to-one.
  • the device also includes a training module, which is used to train the initial first segmentation model according to the multiple training sample images and the multiple labeled images to obtain the first segmentation model.
  • the obtaining module 1403 is configured to obtain multiple training sample images and multiple annotated images annotated with target regions, the training sample images and the annotated images corresponding one-to-one, and to obtain the training-sample foreground-point distance image and the training-sample background-point distance image according to each training sample image and the corresponding annotated image.
  • the device also includes a training module for training the initial second segmentation model based on the multiple training sample images, the multiple annotated images, the multiple training-sample foreground-point distance images, and the multiple training-sample background-point distance images to obtain the second segmentation model.
  • the acquiring module 1403 is configured to acquire a third segmented image, and the third segmented image is an image obtained by manually correcting the second segmented image.
  • the device also includes a training module, which is used to train the first segmentation model and the second segmentation model according to the third segmentation image.
  • the device is applied to a medical image processing scene, and the medical image processing scene includes at least a medical image annotation scene, a pathological image analysis scene, and a medical tumor processing scene.
  • the image processing device is used to process breast magnetic resonance imaging MRI images.
  • the processing module 1401 is also used for inputting the breast MRI image to be labeled into the first segmentation model and outputting the initial tumor area.
  • the first segmentation model is used for labeling the initial tumor area from the breast MRI image.
  • the determining module 1402 is further configured to determine the region of interest (ROI) from the breast MRI image to be annotated according to the initial tumor region; the ROI is the effective region for annotating the breast MRI image to be annotated, and annotation results outside the ROI are treated as invalid.
  • the obtaining module 1403 is further configured to select the foreground point and the background point within the ROI, and to obtain the foreground-point distance image and the background-point distance image based on the breast MRI image to be annotated, the foreground point, and the background point;
  • the processing module 1401 is further configured to input the breast MRI image to be annotated, the foreground-point distance image, and the background-point distance image into the second segmentation model and output the tumor region.
  • the second segmentation model is used to mark the tumor region from the breast MRI image based on the breast MRI image, the foreground-point distance image, and the background-point distance image.
  • the device obtains the foreground-point distance image and the background-point distance image by calculating the image distance between each pixel in the original image and the foreground point and the background point respectively. Because the image distance can be determined directly from the coordinate distance and the gray-value distance between a pixel and the foreground point or background point, there is no need to traverse the path distances of all possible paths, which reduces the amount of calculation in the image processing process, reduces resource consumption, and shortens processing time.
  • Fig. 15 shows a server for image processing according to an exemplary embodiment.
  • the server 1500 includes a processing component 1522, which further includes one or more processors, and a memory resource represented by a memory 1532 for storing instructions executable by the processing component 1522, such as application programs.
  • the application program stored in the memory 1532 may include one or more modules each corresponding to a set of instructions.
  • the processing component 1522 is configured to execute instructions to perform functions performed by the server in the above-mentioned image segmentation method.
  • the server 1500 may also include a power component 1526 configured to perform power management of the server 1500, a wired or wireless network interface 1550 configured to connect the server 1500 to the network, and an input output (I/O) interface 1558.
  • the server 1500 can operate based on an operating system stored in the memory 1532, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
  • the server provided in the embodiments of this application obtains the foreground-point distance image and the background-point distance image by calculating the image distance between each pixel in the original image and the foreground point and the background point respectively. Because the image distance can be determined directly from the coordinate distance and the gray-value distance between a pixel and the foreground point or background point, there is no need to traverse the path distances of all possible paths, which reduces the amount of calculation in the image processing process, reduces resource consumption, and shortens processing time.
  • a computer device including a memory and a processor, the memory stores computer readable instructions, and when the computer readable instructions are executed by the processor, the processor executes the steps of the image processing method.
  • the steps of the image processing method may be the steps in the image processing method of each of the foregoing embodiments.
  • a computer-readable storage medium stores computer-readable instructions; when the computer-readable instructions are executed by a processor, the processor is caused to perform the steps of the image processing method.
  • the steps of the image processing method may be the steps in the image processing method of each of the foregoing embodiments.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

An image processing method, including: obtaining a foreground-point distance image and a background-point distance image by calculating the image distance between each pixel in an original image and a foreground point and a background point respectively, the image distance being determined from the coordinate distance and the gray-value distance between the pixel and the foreground point or the background point; and inputting the various images into a second segmentation model and outputting a second segmented image.

Description

Image processing method and apparatus, server, and storage medium
This application claims priority to Chinese Patent Application No. 201910176668.4, entitled "Image processing method and apparatus, server, and storage medium" and filed with the Chinese Patent Office on March 8, 2019, and to Chinese Patent Application No. 2019107462659, entitled "Image processing method and apparatus, server, and storage medium" and filed with the Chinese Patent Office on March 8, 2019, both of which are incorporated herein by reference in their entirety.
Technical Field
This application relates to the field of computer technology, and in particular to an image processing method and apparatus, a server, and a storage medium.
Background
Computer Vision (CV) is the science of how to make machines "see": using cameras and computers in place of human eyes to identify, track, and measure targets, and further processing the resulting graphics so that they become images better suited for human observation or for transmission to instruments for inspection.
With the development of computer vision technology, it is applied more and more commonly in an increasing number of fields to process images and identify whether regions of interest are present in them. For example, in the medical field, computer vision technology can be applied to medical images to identify whether a tumor region is present in a medical image (such as a breast MRI (Magnetic Resonance Imaging) image).
At present, when related technologies apply computer vision to image processing to identify regions of interest, they usually need to compute the path distance of every possible path from each pixel in the image to a foreground point or background point in order to obtain the geodesic distance between each pixel and the foreground or background point, which makes the image processing consume substantial resources and take a long time.
Summary
According to various embodiments provided in this application, an image processing method and apparatus, a server, and a storage medium are provided.
An image processing method, executed by a server, the method including:
inputting an original image into a first segmentation model, and outputting a first segmented image annotated with an initial target region, the first segmentation model being used to predict the initial target region from the original image;
determining a foreground point and a background point according to the initial target region of the first segmented image;
obtaining a foreground-point distance image and a background-point distance image by calculating the image distance between each pixel in the original image and the foreground point and the background point respectively, the image distance being determined from the coordinate distance and the gray-value distance between the pixel and the foreground point or the background point; and
inputting the original image, the foreground-point distance image, and the background-point distance image into a second segmentation model, and outputting a second segmented image annotated with the target region, the second segmentation model being used to predict the target region from the original image based on the foreground-point distance image and the background-point distance image.
An image processing apparatus, the apparatus including:
a processing module, configured to input an original image into a first segmentation model and output a first segmented image annotated with an initial target region, the first segmentation model being used to predict the initial target region from the original image;
a determining module, configured to determine a foreground point and a background point according to the initial target region of the first segmented image;
an obtaining module, configured to obtain a foreground-point distance image and a background-point distance image by calculating the image distance between each pixel in the original image and the foreground point and the background point respectively, the image distance being determined from the coordinate distance and the gray-value distance between the pixel and the foreground point or the background point; and
the processing module being further configured to input the original image, the foreground-point distance image, and the background-point distance image into a second segmentation model and output a second segmented image annotated with the target region, the second segmentation model being used to predict the target region from the original image based on the foreground-point distance image and the background-point distance image.
A non-volatile storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the image processing method.
A computer device, including a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of the image processing method.
Details of one or more embodiments of this application are set forth in the accompanying drawings and the description below. Other features, objectives, and advantages of this application will become apparent from the specification, the drawings, and the claims.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show only some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for constructing a first segmentation model according to an embodiment of this application;
FIG. 2 is a flowchart of a method for constructing a second segmentation model according to an embodiment of this application;
FIG. 3 is a flowchart of an image processing method according to an embodiment of this application;
FIG. 4 shows three views of a breast MRI image according to an embodiment of this application;
FIG. 5 is a schematic diagram of a first segmented image according to an embodiment of this application;
FIG. 6 is a schematic diagram of manually selecting an ROI according to an embodiment of this application;
FIG. 7 is a schematic diagram of foreground points and background points according to an embodiment of this application;
FIG. 8 is a schematic diagram of a foreground-point distance image according to an embodiment of this application;
FIG. 9 is a schematic diagram of a background-point distance image according to an embodiment of this application;
FIG. 10 is a schematic diagram of the various images input during segmentation in the image processing method according to an embodiment of this application;
FIG. 11 is a schematic diagram of a third segmented image obtained by manually correcting a second segmented image according to an embodiment of this application;
FIG. 12 is a schematic diagram of another first segmented image according to an embodiment of this application;
FIG. 13 is an overall flowchart of the model training process and the image processing process according to an embodiment of this application;
FIG. 14 is a schematic structural diagram of an image processing apparatus according to an embodiment of this application; and
FIG. 15 shows a server for image processing according to an exemplary embodiment.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, the following further describes this application in detail with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain this application and are not intended to limit it.
Before the embodiments of this application are described, important terms involved in the embodiments are first explained.
A foreground point refers to a pixel in the 3D (three-dimensional) image that belongs to the tumor region but was not predicted to be inside it; it can be denoted fg.
A background point refers to a pixel in the 3D image that does not belong to the tumor region but was predicted to be inside it; it can be denoted bg.
An ROI (Region of Interest) refers, in machine vision and image processing, to a region to be processed that is outlined on the processed image with a box, circle, ellipse, irregular polygon, or the like, or, broadly, the region that the reader needs to pay attention to. An ROI is a local region of the original data; in the embodiments of this application it generally refers to a 2D (two-dimensional) or 3D rectangular region.
An embodiment of this application provides a method for constructing a first segmentation model. Referring to FIG. 1, the method provided by this embodiment includes the following steps.
101. The server acquires multiple training sample images and multiple annotated images annotated with target regions.
The target region is the region to be predicted during image segmentation; in the embodiments of this application it is the tumor region in a breast MRI image. A training sample image is an unannotated breast MRI image used to train the segmentation model. An annotated image corresponds to a training sample image and is an image annotated with the target region; like the training sample images, the annotated images are also used to train the segmentation model.
When acquiring the multiple training sample images and the multiple annotated images, the server may proceed as follows: acquire breast MRI images of multiple patients from the Internet and use the acquired breast MRI images as training sample images; provide the acquired training sample images to doctors, who manually annotate the tumor region on each training sample image, yielding multiple annotated images.
102. The server trains an initial first segmentation model according to the multiple training sample images and the multiple annotated images to obtain the first segmentation model.
The initial first segmentation model may be a deep learning neural network for object segmentation, such as 3D u-net or 3D v-net. During training, the server may set an initial value for each model parameter and construct a first objective loss function for the initial first segmentation model; it inputs the multiple training sample images into the initial first segmentation model, outputs segmentation results, and calculates the function value of the first objective loss function from the segmentation result of each training sample image and the corresponding annotated image. If the function value does not satisfy a first threshold condition, the server adjusts the model parameters of the initial first segmentation model and continues to calculate the function value until it satisfies the first threshold condition. The first threshold condition may be set by the server according to the required processing accuracy.
Further, when the function value of the first objective loss function does not satisfy the first threshold condition, the server uses the BP (Back Propagation) algorithm to adjust the model parameters of the initial first segmentation model and continues to calculate the function value based on the adjusted parameter values until the calculated value satisfies the first threshold condition. The BP algorithm mainly consists of two processes, forward propagation of the signal and backward propagation of the error; the adjustment of weights and thresholds is repeated until a preset number of training iterations is reached or the output error falls to an acceptable level.
The server obtains the parameter values of the model parameters when the first threshold condition is satisfied, and takes the initial first segmentation model with those parameter values as the trained first segmentation model. The first segmentation model is used to predict the initial target region from an original image, the original image also being an unannotated breast MRI image.
An embodiment of this application provides a method for constructing a second segmentation model. Referring to FIG. 2, the method provided by this embodiment includes the following steps.
201. The server acquires multiple training sample images and multiple annotated images annotated with target regions.
The implementation of this step is the same as step 101 above; see step 101 for details.
202. The server obtains a training-sample foreground-point distance image and a training-sample background-point distance image according to each training sample image and the corresponding annotated image.
The server inputs each training sample image into the first segmentation model and outputs a first segmentation training image; by comparing the first segmentation training image corresponding to each training sample image with the annotated image, it automatically selects a foreground point and a background point from each training sample image, then obtains the training-sample foreground-point distance image based on the selected foreground point and the training-sample background-point distance image based on the selected background point.
For any training sample image, the server obtains the training-sample foreground-point distance image based on the selected foreground point as follows:
a1. The server obtains the image distance between each pixel in the training sample image and the foreground point.
a11. The server obtains the coordinates of each pixel and of the foreground point in the three-dimensional coordinate system.
The breast MRI data is 3D data; the server establishes a three-dimensional coordinate system for the 3D data and obtains the coordinates of each pixel in the training sample image and the coordinates of the foreground point in that coordinate system.
a12. The server obtains the coordinate distance between each pixel and the foreground point according to the coordinates of each pixel and the coordinates of the foreground point.
Let the coordinates of the foreground point be P0 = (x0, y0, z0) and the coordinates of a pixel be P = (x, y, z); the coordinate distance between the pixel and the foreground point is then √((x − x0)² + (y − y0)² + (z − z0)²).
a13. The server obtains the gray value of each pixel and of the foreground point.
The gray value is also called the brightness value or intensity value.
a14. The server obtains the gray-scale distance between each pixel and the foreground point according to the gray value of each pixel and the gray value of the foreground point.
Let the gray value of the foreground point be I(x0, y0, z0) and the gray value of a pixel be I(x, y, z); the gray-scale distance between the pixel and the foreground point is then I(x, y, z) − I(x0, y0, z0).
a15. The server obtains the image distance between each pixel in the training sample image and the foreground point according to the coordinate distance and the gray-scale distance between each pixel and the foreground point.
According to the coordinate distance and the gray-scale distance between each pixel and the foreground point, the server can obtain the image distance D_P0 between each pixel in the training sample image and the foreground point by applying the following formula:
D_P0 = a·√((x − x0)² + (y − y0)² + (z − z0)²) + (1 − a)·|I(x, y, z) − I(x0, y0, z0)|
where a is a weight parameter with a value range of (0, 1); preferably, the value of a may be 0.6.
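As a brief worked example with assumed values: if the foreground point is P0 = (0, 0, 0) with gray value I(0, 0, 0) = 100, a pixel is P = (3, 4, 0) with gray value I(3, 4, 0) = 120, and a = 0.6, then the coordinate distance is √(3² + 4² + 0²) = 5, the gray-scale distance is 20, and the image distance is D_P0 = 0.6 × 5 + 0.4 × 20 = 11.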
a2. The server obtains the training-sample foreground-point distance image according to the image distance between each pixel and the foreground point.
The server obtains the training-sample foreground distance image by taking the image distance between each pixel and the foreground point as the pixel value of that pixel.
For any training sample image, the server obtains the training sample background point distance image based on the selected background point as follows:
b1. The server obtains the image distance between each pixel in the training sample image and the background point.
b11. The server obtains the coordinates of each pixel and of the background point in the three-dimensional coordinate system.
Breast MRI data is 3D data. The server establishes a three-dimensional coordinate system for the 3D data, and obtains the coordinates of each pixel in the training sample image and the coordinates of the background point in that coordinate system.
b12. The server obtains the coordinate distance between each pixel and the background point according to their coordinates.
Let the coordinates of the background point be $P_0' = (x_0', y_0', z_0')$ and the coordinates of a pixel be $P' = (x', y', z')$. The coordinate distance between the pixel and the background point is
$d(P', P_0') = \sqrt{(x' - x_0')^2 + (y' - y_0')^2 + (z' - z_0')^2}$
b13. The server obtains the gray value of each pixel and of the background point.
b14. The server obtains the gray distance between each pixel and the background point according to their gray values.
Let the gray value of the background point be $I(x_0', y_0', z_0')$ and the gray value of a pixel be $I(x', y', z')$. The gray distance between the pixel and the background point is $I(x', y', z') - I(x_0', y_0', z_0')$.
b15. The server obtains the image distance between each pixel in the training sample image and the background point according to the coordinate distance and the gray distance between them.
From the coordinate distance and the gray distance, the image distance $D_{P_0'}$ between each pixel in the training sample image and the background point can be obtained as
$D_{P_0'} = a \cdot \sqrt{(x' - x_0')^2 + (y' - y_0')^2 + (z' - z_0')^2} + (1 - a) \cdot \left| I(x', y', z') - I(x_0', y_0', z_0') \right|$
where $a$ is a weight parameter with a value range of $(0, 1)$; preferably, $a$ may be 0.6.
b2. The server obtains the training sample background point distance image according to the image distance between each pixel and the background point.
The server obtains the training sample background point distance image by using the image distance between each pixel and the background point as that pixel's pixel value (a vectorized sketch of this computation, shared by foreground and background points, follows).
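The computation is identical for the foreground point and the background point, differing only in the reference point. A vectorized NumPy sketch, assuming the weighted-sum form of the image distance reconstructed above with $a = 0.6$, might be:

```python
# Hypothetical sketch: build a distance image for one reference point (fg or bg).
# Assumes image distance = a * Euclidean distance + (1 - a) * |gray difference|.
import numpy as np

def distance_image(volume, ref_point, a=0.6):
    """volume: 3D gray-value array; ref_point: (x0, y0, z0); a: weight in (0, 1)."""
    x0, y0, z0 = ref_point
    grids = np.meshgrid(*[np.arange(s) for s in volume.shape], indexing="ij")
    coord_dist = np.sqrt((grids[0] - x0) ** 2 +
                         (grids[1] - y0) ** 2 +
                         (grids[2] - z0) ** 2)           # coordinate distance
    gray_dist = np.abs(volume - volume[x0, y0, z0])      # gray distance
    return a * coord_dist + (1 - a) * gray_dist          # image distance as pixel value
```

Calling `distance_image(mri, p_fg)` and `distance_image(mri, p_bg)` would then yield the foreground point distance image and the background point distance image, respectively.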
203. The server trains an initial second segmentation model according to the plurality of training sample images, the plurality of annotated images, the plurality of training sample foreground point distance images, and the plurality of training sample background point distance images, to obtain the second segmentation model.
The initial second segmentation model may be a deep learning neural network for object segmentation, such as 3D U-Net or 3D V-Net. During training, the server sets an initial value for each model parameter, constructs a second target loss function for the initial second segmentation model, inputs the plurality of training sample images, training sample foreground point distance images, and training sample background point distance images into the initial second segmentation model, outputs segmentation results, and computes the value of the second target loss function from each training sample image's segmentation result and the corresponding annotated image. If the value of the second target loss function does not meet a second threshold condition, the server adjusts the model parameters of the initial second segmentation model and continues computing the value of the second target loss function until the obtained value meets the second threshold condition. The second threshold condition may be set by the server according to the required processing precision.
Further, when the value of the second target loss function does not meet the second threshold condition, the server adjusts the model parameters of the initial second segmentation model using the BP algorithm, and continues computing the value of the second target loss function based on the adjusted parameter values until the computed value meets the second threshold condition.
The server obtains the parameter values of the model parameters when the second threshold condition is met, and uses the initial second segmentation model with those parameter values as the trained second segmentation model. The second segmentation model is used to predict the target region from an original image (that is, an unannotated breast MRI image) based on a foreground point distance image and a background point distance image.
An embodiment of this application provides a flowchart of an image processing method. Referring to FIG. 3, the method includes the following steps:
301. The server inputs an original image into the first segmentation model and outputs a first segmented image annotated with an initial target region.
The first segmentation model is used to predict the initial target region from the original image. When the original image to be segmented is obtained, the server inputs it into the first segmentation model, processes it through the model, and outputs a first segmented image annotated with the initial target region. The initial target region is a candidate target region that needs to be confirmed by the subsequent steps, and may be an initial tumor region in a breast MRI image.
In a specific embodiment, step 301 may further include inputting a breast MRI image to be annotated into the first segmentation model and outputting an initial tumor region. In this embodiment, the first segmentation model is used to annotate the initial tumor region in the breast MRI image.
To better distinguish the initial target region from normal tissue regions, the server may annotate the initial target region in the first segmented image with a color different from the normal tissue regions, for example, red or green. Because the initial target region predicted by the first segmentation model includes all possibly predicted regions, the prediction result has a large error and needs to be segmented again by the subsequent process.
In addition, when the first segmentation model performs prediction, pixels predicted as tumor may be marked as 1 and pixels predicted as non-tumor may be marked as 0, so the first segmented image is actually a binarized image.
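For illustration only, assuming the first segmentation model emits a per-voxel tumor probability (the application does not specify the output form), the binarized first segmented image could be produced as in this sketch; the 0.5 cutoff is an assumed value:

```python
# Hypothetical sketch: binarize a per-voxel probability map into the first
# segmented image (1 = predicted tumor, 0 = predicted non-tumor).
import numpy as np

def binarize_prediction(prob_map, cutoff=0.5):
    return (prob_map > cutoff).astype(np.uint8)
```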
FIG. 4 shows three views of a breast MRI image: the left view is the transverse plane, the upper-right view is the sagittal plane, and the lower-right view is the coronal plane; the circular region at the intersection of the dashed lines is the tumor region.
FIG. 5 shows the first segmented image obtained by processing the breast MRI image of FIG. 4 with the first segmentation model. The upper-left view is the transverse plane annotated with the tumor region, the lower-left view is a 3D rendering of the annotated image, the upper-right view is the sagittal plane annotated with the tumor region, and the lower-right view is the coronal plane annotated with the tumor region; the circular region at the intersection of the dashed lines is the tumor region. Analysis of FIG. 5 reveals that the largest predicted region is actually the heart region, not a tumor region. Because the prediction result in FIG. 5 has a large error, it needs to be segmented again by the subsequent steps.
302. The server determines a foreground point and a background point according to the initial target region of the first segmented image.
Because the first segmented image output by the first segmentation model has large errors and includes many erroneous regions, processing those erroneous regions one by one would waste substantial resources. Therefore, the embodiments of this application also manually select an ROI region from the original image in combination with the initial target region of the first segmented image, determine it as a designated region, and regard tumor predictions outside the designated region as invalid. The designated region, that is, the ROI region for image segmentation, is the valid region for tumor prediction.
In one embodiment, step 302 may further include: determining a region of interest (ROI region) from the breast MRI image to be annotated according to the initial tumor region, the ROI region being the valid region for annotating the breast MRI image, with annotation results outside the ROI region regarded as invalid; and selecting a foreground point and a background point in the ROI region.
When the designated region is determined manually, a plurality of pixels may be selected on the three views of the original image, and the region enclosed by the selected pixels is determined as the designated region. Referring to FIG. 6, the upper-left view is the transverse plane of the original image, the lower-left view is a 3D view, the upper-right view is the sagittal plane, and the lower-right view is the coronal plane. The doctor selects two pixels with coordinates $P_1(x_1, y_1, z')$ and $P_2(x_2, y_2, z'')$ on the transverse view, and two pixels with coordinates $P_3(x', y', z_1)$ and $P_4(x'', y'', z_2)$ on the sagittal view. Based on the four selected pixels, the coordinate range of the ROI region can be determined as $([x_1, x_2], [y_1, y_2], [z_1, z_2])$. In FIG. 6, for $P_1$, $x$ takes the value $x_1$, $y$ takes $y_1$, and $z$ may take any value; for $P_2$, $x$ takes $x_2$, $y$ takes $y_2$, and $z$ may take any value; for $P_3$, $x$ and $y$ may take any value and $z$ takes $z_1$; for $P_4$, $x$ and $y$ may take any value and $z$ takes $z_2$. After selecting the pixels, the doctor inputs them to the server, and the server determines the designated region on the original image by detecting the doctor's input operation; the designated region is the region at the intersection of the dashed lines in the lower-left view (or the lower-right view) of FIG. 6. A minimal sketch of assembling the ROI box from the selected points follows.
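A minimal sketch, assuming (as described above) that the two transverse-plane points fix the x and y extent and the two sagittal-plane points fix the z extent; `roi_from_points` and `in_roi` are illustrative names, not from this application:

```python
# Hypothetical sketch: derive the ROI coordinate range ([x1,x2], [y1,y2], [z1,z2])
# from two transverse-plane points and two sagittal-plane points.
def roi_from_points(p1, p2, p3, p4):
    """p1, p2: (x, y) from the transverse view; p3, p4: (z,) from the sagittal view."""
    (x1, y1), (x2, y2) = p1, p2
    z1, z2 = p3[0], p4[0]
    return (sorted((x1, x2)), sorted((y1, y2)), sorted((z1, z2)))

def in_roi(point, roi):
    """Check whether voxel (x, y, z) lies inside the ROI; predictions outside are invalid."""
    xs, ys, zs = roi
    x, y, z = point
    return xs[0] <= x <= xs[1] and ys[0] <= y <= ys[1] and zs[0] <= z <= zs[1]
```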
After the designated region is determined, the doctor may manually select a foreground point and a background point from the designated region and input them to the server, which obtains them by detecting the doctor's operation. The foreground point may be denoted p_fg(x, y, z), where fg stands for foreground, and the background point may be denoted p_bg(x, y, z), where bg stands for background. For example, enlarging the region in the upper-left view of FIG. 6 yields FIG. 7, in which the white highlighted region is actually the tumor region; the doctor selects one pixel in the part of the tumor region that was not predicted as the foreground point, and one pixel in the wrongly predicted region as the background point.
303. The server obtains the foreground point distance image by computing the image distance between each pixel in the original image and the foreground point.
Based on the foreground point determined from the first segmented image, the server can obtain the foreground point distance image by computing the image distance between each pixel in the original image and the foreground point, specifically as follows:
3031. The server obtains the image distance between each pixel in the original image and the foreground point, which may include the following steps:
30311. The server obtains the coordinates of each pixel and of the foreground point in the three-dimensional coordinate system.
30312. The server obtains the coordinate distance between each pixel and the foreground point according to their coordinates.
30313. The server obtains the gray value of each pixel and of the foreground point.
30314. The server obtains the gray distance between each pixel and the foreground point according to their gray values.
30315. The server obtains the image distance between each pixel in the original image and the foreground point according to the coordinate distance and the gray distance between them.
3032. The server obtains the foreground point distance image according to the image distance between each pixel and the foreground point.
In a specific embodiment, the server may also, with reference to steps 3031 to 30315, compute the image distance between each pixel in the ROI region and the foreground point, and obtain the foreground point distance image according to those image distances.
The server obtains the foreground point distance image by using the image distance between each pixel and the foreground point as that pixel's pixel value. For any pixel in the original image, the closer the pixel is to the foreground point, the smaller the image distance between them and the darker the pixel appears in the foreground point distance image; conversely, the farther the pixel is from the foreground point, the larger the image distance between them and the brighter the pixel appears in the foreground point distance image.
FIG. 8 shows a foreground point distance image: the left view is the transverse-plane foreground point distance image, the upper-right view is the sagittal-plane one, and the lower-right view is the coronal-plane one; the intersection of the dashed lines marks the position of the foreground point. Referring to FIG. 8, the points around the foreground point have small image distances to it and therefore appear dark, so the tumor region as a whole appears dark and shows up as a distinctly dark region in the foreground point distance image. Because this brightness difference clearly distinguishes it from other regions, it is better suited for segmentation by the second segmentation model in the subsequent steps.
304. The server obtains the background point distance image by computing the image distance between each pixel in the original image and the background point.
Based on the background point determined from the first segmented image, the server can obtain the background point distance image by computing the image distance between each pixel in the original image and the background point, specifically as follows:
3041. The server obtains the image distance between each pixel in the original image and the background point, which may include the following steps:
30411. The server obtains the coordinates of each pixel and of the background point in the three-dimensional coordinate system.
30412. The server obtains the coordinate distance between each pixel and the background point according to their coordinates.
30413. The server obtains the gray value of each pixel and of the background point.
30414. The server obtains the gray distance between each pixel and the background point according to their gray values.
30415. The server obtains the image distance between each pixel in the original image and the background point according to the coordinate distance and the gray distance between them.
3042. The server obtains the background point distance image according to the image distance between each pixel and the background point.
In a specific embodiment, the server may also, with reference to steps 3041 to 30415, compute the image distance between each pixel in the ROI region and the background point, and obtain the background point distance image according to those image distances.
The server can obtain the background point distance image by using the image distance between each pixel and the background point as that pixel's pixel value. For any pixel in the original image, the closer the pixel is to the background point, the smaller the image distance between them and the darker the pixel appears in the background point distance image; conversely, the farther the pixel is from the background point, the larger the image distance between them and the brighter the pixel appears in the background point distance image.
FIG. 9 shows a background point distance image: the left view is the transverse-plane background point distance image, the upper-right view is the sagittal-plane one, and the lower-right view is the coronal-plane one; the intersection of the dashed lines marks the position of the background point. Referring to FIG. 9, the points around the background point have small image distances to it and therefore appear dark, which confirms that the heart region was previously mispredicted as a tumor region. Because the heart region stands out as a dark region in the background point distance image and is clearly distinguishable from other regions, the erroneous prediction can be corrected when the second segmentation model performs segmentation in the subsequent steps.
305. The server inputs the original image, the foreground point distance image, and the background point distance image into the second segmentation model, and outputs a second segmented image annotated with the target region.
The second segmentation model is used to predict the target region from the original image based on the foreground point distance image and the background point distance image. The input to the second segmentation model is 3D data with three channels: the original image (that is, the breast MRI image), the foreground point distance image, and the background point distance image. FIG. 10 shows the images input and output when the second segmentation model performs segmentation: the upper-left view is the transverse plane of the breast MRI image, the upper-right view is the transverse plane of the foreground point distance image, the lower-left view is the background point distance image, and the lower-right view is the annotated (ground truth) image, which is obtained by binarizing the pixels when annotating tumor pixel positions in the original image, marking each pixel in the tumor region as 1 and each pixel in other regions as 0. A sketch of assembling the three-channel input follows.
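Assuming the three volumes share one shape and a channels-first layout (both assumptions; the application states only that the input has three channels), stacking them into the model input might look like:

```python
# Hypothetical sketch: stack the breast MRI volume and the two distance images
# into the three-channel 3D input expected by the second segmentation model.
import numpy as np

def build_second_model_input(mri, fg_dist, bg_dist):
    """All arguments are 3D arrays of identical shape; returns a (3, D, H, W) array."""
    assert mri.shape == fg_dist.shape == bg_dist.shape
    return np.stack([mri, fg_dist, bg_dist], axis=0)
```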
In a specific embodiment, step 305 may further include inputting the breast MRI image to be annotated, the foreground point distance image, and the background point distance image into the second segmentation model, and outputting the tumor region. In this embodiment, the second segmentation model is used to annotate the tumor region in the breast MRI image based on the breast MRI image, the foreground point distance image, and the background point distance image.
Although the second segmented image obtained by the second segmentation model is more accurate because the foreground point distance image and the background point distance image are referenced, it may still contain some errors and cannot fully satisfy the doctor's segmentation requirements. Therefore, in another embodiment of this application, the doctor further corrects the second segmented image manually to obtain a more accurate third segmented image. The correction may be done by hand or with the aid of 3D annotation software such as ITK-SNAP. FIG. 11 shows a third segmented image obtained by manual correction: the upper-left view is the transverse plane of the third segmented image, the upper-right view is its sagittal plane, the lower-left view is a 3D rendering of the annotated image, and the lower-right view is its coronal plane.
In general, the manual correction process involves far less work than fully manual annotation, and the resulting third segmented image is highly accurate; it can be processed into annotated images for training the first segmentation model and the second segmentation model, thereby improving their accuracy. As the amount of third segmented image data grows, the trained first and second segmentation models become increasingly accurate and their segmentation results more precise, so that the final segmentation result can be obtained with only a small amount of correction instead of extensive manual modification, greatly reducing manual workload while guaranteeing segmentation accuracy. FIG. 12 shows a first segmented image output by the retrained first segmentation model. As can be seen from FIG. 12, the first segmentation model predicts only two tumor regions, one in each breast, and no longer predicts the heart region as a tumor region, which is more accurate than the segmentation result before retraining. The segmentation result in FIG. 12 can be used for imaging diagnosis after only minor manual correction.
FIG. 13 is an overall flowchart of constructing the first segmentation model and the second segmentation model and of the image processing method in the embodiments of this application.
The training stage using annotated data includes the following steps:
Step 1. Obtain a plurality of patient images and annotate them to obtain a plurality of annotated images; based on the patient images and annotated images, train a model with deep learning segmentation algorithm A to obtain the algorithm A prediction model, that is, the first segmentation model in the embodiments of this application.
Step 2. Automatically generate a plurality of foreground point distance images and a plurality of background point distance images from the patient images and annotated images.
Step 3. Based on the patient images, annotated images, foreground point distance images, and background point distance images, train a model with deep learning segmentation algorithm B to obtain the algorithm B prediction model, that is, the second segmentation model in the embodiments of this application.
The prediction stage using the models includes the following steps:
Step 1. For a new patient image, input it into the algorithm A prediction model and output a preliminary segmentation prediction result for the new patient image, that is, the first segmented image in the embodiments of this application.
Step 2. Based on the preliminary segmentation prediction result, manually select an ROI region from the new patient image; tumor predictions outside the ROI region are regarded as invalid and are deleted.
Step 3. Manually select a foreground point and a background point in the ROI region, and generate the foreground point distance image and the background point distance image.
Step 4. Input the new patient image, the foreground point distance image, and the background point distance image into the algorithm B prediction model and output the algorithm B segmentation result, that is, the second segmented image in the embodiments of this application.
Step 5. Manually correct the algorithm B segmentation result to obtain the final tumor segmentation result, that is, the third segmented image in the embodiments of this application. Annotated images (ground truth data) obtained by processing the third segmented image are used to train the algorithm A prediction model and the algorithm B prediction model.
The image segmentation method provided by the embodiments of this application can be applied to medical image processing scenarios, including medical image annotation, pathology image analysis, and medical tumor processing scenarios. For example:
Scenario 1: When making an MRI breast tumor imaging diagnosis, a doctor may annotate the tumor region using the method provided by the embodiments of this application to obtain information such as the size and shape of the tumor region, and write the patient's imaging diagnosis report based on this information.
Scenario 2: In the image processing field, when annotating massive amounts of tumor data, the method provided by the embodiments of this application can be used to reduce the workload of manual annotation and improve annotation efficiency.
With the method provided by the embodiments of this application, the foreground point distance image and the background point distance image are obtained by computing the image distance between each pixel in the original image and the foreground point and the background point, respectively. Because a distance image can be determined directly from the coordinate distance and gray-value distance between a pixel and the foreground or background point, there is no need to traverse and explore the path distances of all possible paths, which reduces the amount of computation in image processing, lowers resource consumption, and shortens processing time.
In addition, when the amount of training sample image data is small, the trained first segmentation model has low accuracy and predicts many erroneous target regions. The embodiments of this application determine a designated region manually and regard prediction results outside it as invalid, which not only improves image processing accuracy but also reduces the computation of subsequent processing.
Furthermore, the embodiments of this application adopt an iterative scheme that retrains the first segmentation model and the second segmentation model based on manually corrected segmentation results, greatly improving model accuracy, in particular making the segmentation results of the first segmentation model more accurate and reducing the workload of subsequent manual correction.
Referring to FIG. 14, an embodiment of this application provides an image processing apparatus, including a processing module 1401, a determining module 1402, and an obtaining module 1403. The modules included in the image processing apparatus may be implemented wholly or partly by software, hardware, or a combination thereof.
The processing module 1401 is configured to input an original image into a first segmentation model and output a first segmented image annotated with an initial target region, the first segmentation model being used to predict the initial target region from the original image.
The determining module 1402 is configured to determine a foreground point and a background point according to the initial target region of the first segmented image.
The obtaining module 1403 is configured to obtain a foreground point distance image and a background point distance image by computing the image distance between each pixel in the original image and the foreground point and the background point, respectively, the image distance being determined from the coordinate distance and gray-value distance between a pixel and the foreground or background point.
The processing module 1401 is further configured to input the original image, the foreground point distance image, and the background point distance image into a second segmentation model and output a second segmented image annotated with the target region, the second segmentation model being used to predict the target region from the original image based on the foreground point distance image and the background point distance image.
In another embodiment of this application, the obtaining module 1403 is configured to obtain the image distance between each pixel in the original image and the foreground point; obtain the foreground point distance image according to those image distances; obtain the image distance between each pixel in the original image and the background point; and obtain the background point distance image according to those image distances.
In another embodiment of this application, the obtaining module 1403 is configured to obtain the coordinates of each pixel and of the foreground point in a three-dimensional coordinate system; obtain the coordinate distance between each pixel and the foreground point according to their coordinates; obtain the gray values of each pixel and of the foreground point; obtain the gray distance between each pixel and the foreground point according to their gray values; and obtain the image distance between each pixel in the original image and the foreground point according to the coordinate distance and gray distance between them.
In another embodiment of this application, the obtaining module 1403 is configured to obtain the foreground point distance image by using the image distance between each pixel and the foreground point as that pixel's pixel value.
In another embodiment of this application, the obtaining module 1403 is configured to obtain the coordinates of each pixel and of the background point in the three-dimensional coordinate system; obtain the coordinate distance between each pixel and the background point according to their coordinates; obtain the gray values of each pixel and of the background point; obtain the gray distance between each pixel and the background point according to their gray values; and obtain the image distance between each pixel in the original image and the background point according to the coordinate distance and gray distance between them.
In another embodiment of this application, the obtaining module 1403 is configured to obtain the background point distance image by using the image distance between each pixel and the background point as that pixel's pixel value.
In another embodiment of this application, the determining module 1402 is configured to determine a designated region on the original image, and obtain the foreground point and the background point from the designated region by comparing the initial target region of the first segmented image with the original image.
In another embodiment of this application, the obtaining module 1403 is configured to obtain a plurality of training sample images and a plurality of annotated images annotated with target regions, the training sample images corresponding one-to-one with the annotated images. The apparatus further includes a training module configured to train an initial first segmentation model according to the plurality of training sample images and annotated images to obtain the first segmentation model.
In another embodiment of this application, the obtaining module 1403 is configured to obtain a plurality of training sample images and a plurality of annotated images annotated with target regions, the training sample images corresponding one-to-one with the annotated images, and is further configured to obtain a training sample foreground point distance image and a training sample background point distance image according to each training sample image and the corresponding annotated image. The apparatus further includes a training module configured to train an initial second segmentation model according to the plurality of training sample images, annotated images, training sample foreground point distance images, and training sample background point distance images to obtain the second segmentation model.
In another embodiment of this application, the obtaining module 1403 is configured to obtain a third segmented image, the third segmented image being an image obtained by manually correcting the second segmented image. The apparatus further includes a training module configured to train the first segmentation model and the second segmentation model according to the third segmented image.
In another embodiment of this application, the apparatus is applied to medical image processing scenarios, including at least medical image annotation, pathology image analysis, and medical tumor processing scenarios.
In another embodiment of this application, the image processing apparatus is used to process breast magnetic resonance imaging (MRI) images.
The processing module 1401 is further configured to input a breast MRI image to be annotated into the first segmentation model and output an initial tumor region, the first segmentation model being used to annotate the initial tumor region in the breast MRI image.
The determining module 1402 is further configured to determine a region of interest (ROI region) from the breast MRI image to be annotated according to the initial tumor region, the ROI region being the valid region for annotating the breast MRI image, with annotation results outside the ROI region regarded as invalid.
The obtaining module 1403 is further configured to select a foreground point and a background point in the ROI region, and obtain a foreground point distance image and a background point distance image based on the breast MRI image to be annotated, the foreground point, and the background point.
The processing module 1401 is further configured to input the breast MRI image to be annotated, the foreground point distance image, and the background point distance image into the second segmentation model and output the tumor region, the second segmentation model being used to annotate the tumor region in the breast MRI image based on the breast MRI image, the foreground point distance image, and the background point distance image.
In summary, with the apparatus provided by the embodiments of this application, the foreground point distance image and the background point distance image are obtained by computing the image distance between each pixel in the original image and the foreground point and the background point, respectively. Because a distance image can be determined directly from the coordinate distance and gray-value distance between a pixel and the foreground or background point, there is no need to traverse and explore the path distances of all possible paths, which reduces the amount of computation in image processing, lowers resource consumption, and shortens processing time.
FIG. 15 shows a server for image processing according to an exemplary embodiment. Referring to FIG. 15, the server 1500 includes a processing component 1522, which further includes one or more processors, and memory resources represented by a memory 1532 for storing instructions executable by the processing component 1522, such as application programs. The application programs stored in the memory 1532 may include one or more modules, each corresponding to a set of instructions. The processing component 1522 is configured to execute the instructions to perform the functions performed by the server in the image processing method described above.
The server 1500 may further include a power component 1526 configured to manage the power of the server 1500, a wired or wireless network interface 1550 configured to connect the server 1500 to a network, and an input/output (I/O) interface 1558. The server 1500 may operate on an operating system stored in the memory 1532, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
With the server provided by the embodiments of this application, the foreground point distance image and the background point distance image are obtained by computing the image distance between each pixel in the original image and the foreground point and the background point, respectively. Because a distance image can be determined directly from the coordinate distance and gray-value distance between a pixel and the foreground or background point, there is no need to traverse and explore the path distances of all possible paths, which reduces the amount of computation in image processing, lowers resource consumption, and shortens processing time.
In an embodiment, a computer device is provided, including a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the image processing method described above. The steps of the image processing method here may be the steps of the image processing method in the foregoing embodiments.
In an embodiment, a computer-readable storage medium is provided, storing computer-readable instructions which, when executed by a processor, cause the processor to perform the steps of the image processing method described above. The steps of the image processing method here may be the steps of the image processing method in the foregoing embodiments.
It is to be noted that when the image processing apparatus provided in the foregoing embodiments processes images, the division into the above functional modules is merely used as an example for description. In practical applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the image processing apparatus may be divided into different functional modules to complete all or some of the functions described above. In addition, the image processing apparatus provided in the foregoing embodiments belongs to the same conception as the image processing method embodiments; see the method embodiments for the specific implementation process.
A person of ordinary skill in the art may understand that all or some of the procedures in the methods of the foregoing embodiments may be implemented by computer-readable instructions instructing relevant hardware. The program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the procedures of the foregoing method embodiments. Any reference to a memory, storage, database, or other medium used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the foregoing embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the foregoing embodiments are described; however, as long as the combinations of these technical features contain no contradiction, they shall be considered within the scope of this specification.
The foregoing embodiments express only several implementations of this application, and their descriptions are relatively specific and detailed, but they shall not be construed as limiting the patent scope of the invention. It is to be noted that a person of ordinary skill in the art may further make several variations and improvements without departing from the concept of this application, all of which fall within the protection scope of this application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (24)

  1. An image processing method, performed by a server, the method comprising:
    inputting an original image into a first segmentation model and outputting a first segmented image annotated with an initial target region, the first segmentation model being used to predict the initial target region from the original image;
    determining a foreground point and a background point according to the initial target region of the first segmented image;
    obtaining a foreground point distance image and a background point distance image by computing an image distance between each pixel in the original image and the foreground point and the background point, respectively, the image distance being determined from a coordinate distance and a gray-value distance between a pixel and the foreground point or the background point; and
    inputting the original image, the foreground point distance image, and the background point distance image into a second segmentation model and outputting a second segmented image annotated with a target region, the second segmentation model being used to predict the target region from the original image based on the foreground point distance image and the background point distance image.
  2. The method according to claim 1, wherein the obtaining a foreground point distance image and a background point distance image by computing an image distance between each pixel in the original image and the foreground point and the background point, respectively, comprises:
    obtaining the image distance between each pixel in the original image and the foreground point;
    obtaining the foreground point distance image according to the image distance between each pixel and the foreground point;
    obtaining the image distance between each pixel in the original image and the background point; and
    obtaining the background point distance image according to the image distance between each pixel and the background point.
  3. The method according to claim 2, wherein the obtaining the image distance between each pixel in the original image and the foreground point comprises:
    obtaining coordinates of each pixel and of the foreground point in a three-dimensional coordinate system;
    obtaining the coordinate distance between each pixel and the foreground point according to the coordinates of each pixel and the coordinates of the foreground point;
    obtaining gray values of each pixel and of the foreground point;
    obtaining the gray distance between each pixel and the foreground point according to the gray value of each pixel and the gray value of the foreground point; and
    obtaining the image distance between each pixel in the original image and the foreground point according to the coordinate distance and the gray distance between each pixel and the foreground point.
  4. The method according to claim 2, wherein the obtaining the foreground point distance image according to the image distance between each pixel and the foreground point comprises:
    obtaining the foreground point distance image by using the image distance between each pixel and the foreground point as the pixel value of each pixel.
  5. The method according to claim 2, wherein the obtaining the image distance between each pixel in the original image and the background point comprises:
    obtaining coordinates of each pixel and of the background point in a three-dimensional coordinate system;
    obtaining the coordinate distance between each pixel and the background point according to the coordinates of each pixel and the coordinates of the background point;
    obtaining gray values of each pixel and of the background point;
    obtaining the gray distance between each pixel and the background point according to the gray value of each pixel and the gray value of the background point; and
    obtaining the image distance between each pixel in the original image and the background point according to the coordinate distance and the gray distance between each pixel and the background point.
  6. The method according to claim 2, wherein the obtaining the background point distance image according to the image distance between each pixel and the background point comprises:
    obtaining the background point distance image by using the image distance between each pixel and the background point as the pixel value of each pixel.
  7. The method according to claim 1, wherein the determining a foreground point and a background point according to the initial target region of the first segmented image comprises:
    determining a designated region on the original image; and
    obtaining the foreground point and the background point from the designated region by comparing the initial target region of the first segmented image with the original image.
  8. The method according to claim 1, further comprising:
    obtaining a plurality of training sample images and a plurality of annotated images annotated with target regions, the training sample images corresponding one-to-one with the annotated images; and
    training an initial first segmentation model according to the plurality of training sample images and the plurality of annotated images to obtain the first segmentation model.
  9. The method according to claim 1, further comprising:
    obtaining a plurality of training sample images and a plurality of annotated images annotated with target regions, the training sample images corresponding one-to-one with the annotated images;
    obtaining a training sample foreground point distance image and a training sample background point distance image according to each training sample image and the corresponding annotated image; and
    training an initial second segmentation model according to the plurality of training sample images, the plurality of annotated images, the plurality of training sample foreground point distance images, and the plurality of training sample background point distance images to obtain the second segmentation model.
  10. The method according to claim 1, further comprising:
    obtaining a third segmented image, the third segmented image being an image obtained by manually correcting the second segmented image; and
    training the first segmentation model and the second segmentation model according to the third segmented image.
  11. The method according to claim 1, wherein the method is used to process breast magnetic resonance imaging (MRI) images, and the inputting an original image into a first segmentation model and outputting a first segmented image annotated with an initial target region comprises:
    inputting a breast MRI image to be annotated into the first segmentation model and outputting an initial tumor region, the first segmentation model being used to annotate the initial tumor region in the breast MRI image;
    the determining a foreground point and a background point according to the initial target region of the first segmented image, and obtaining a foreground point distance image and a background point distance image by computing an image distance between each pixel in the original image and the foreground point and the background point, respectively, comprises:
    determining a region of interest (ROI region) from the breast MRI image to be annotated according to the initial tumor region, the ROI region being a valid region for annotating the breast MRI image to be annotated, annotation results outside the ROI region being regarded as invalid; and
    selecting a foreground point and a background point in the ROI region, and obtaining the foreground point distance image and the background point distance image based on the breast MRI image to be annotated, the foreground point, and the background point, the image distance being determined from the coordinate distance and gray-value distance between a pixel and the foreground point or the background point; and
    the inputting the original image, the foreground point distance image, and the background point distance image into a second segmentation model and outputting a second segmented image annotated with a target region comprises:
    inputting the breast MRI image to be annotated, the foreground point distance image, and the background point distance image into the second segmentation model and outputting a tumor region, the second segmentation model being used to annotate the tumor region in the breast MRI image based on the breast MRI image, the foreground point distance image, and the background point distance image.
  12. The method according to claim 11, wherein the obtaining the foreground point distance image and the background point distance image based on the breast MRI image to be annotated, the foreground point, and the background point comprises:
    obtaining the image distance between each pixel in the breast MRI image to be annotated and the foreground point;
    obtaining the foreground point distance image according to the image distance between each pixel and the foreground point;
    obtaining the image distance between each pixel in the breast MRI image to be annotated and the background point; and
    obtaining the background point distance image according to the image distance between each pixel and the background point.
  13. The method according to claim 12, wherein the obtaining the image distance between each pixel in the breast MRI image to be annotated and the foreground point comprises:
    obtaining coordinates of each pixel and of the foreground point in a three-dimensional coordinate system;
    obtaining the coordinate distance between each pixel and the foreground point according to the coordinates of each pixel and the coordinates of the foreground point;
    obtaining gray values of each pixel and of the foreground point;
    obtaining the gray distance between each pixel and the foreground point according to the gray value of each pixel and the gray value of the foreground point; and
    obtaining the image distance between each pixel in the breast MRI image to be annotated and the foreground point according to the coordinate distance and the gray distance between each pixel and the foreground point.
  14. The method according to claim 12, wherein the obtaining the foreground point distance image according to the image distance between each pixel and the foreground point comprises:
    obtaining the foreground point distance image by using the image distance between each pixel and the foreground point as the pixel value of each pixel.
  15. The method according to claim 12, wherein the obtaining the image distance between each pixel in the breast MRI image to be annotated and the background point comprises:
    obtaining coordinates of each pixel and of the background point in a three-dimensional coordinate system;
    obtaining the coordinate distance between each pixel and the background point according to the coordinates of each pixel and the coordinates of the background point;
    obtaining gray values of each pixel and of the background point;
    obtaining the gray distance between each pixel and the background point according to the gray value of each pixel and the gray value of the background point; and
    obtaining the image distance between each pixel in the breast MRI image to be annotated and the background point according to the coordinate distance and the gray distance between each pixel and the background point.
  16. The method according to claim 12, wherein the obtaining the background point distance image according to the image distance between each pixel and the background point comprises:
    obtaining the background point distance image by using the image distance between each pixel and the background point as the pixel value of each pixel.
  17. The method according to claim 11, further comprising:
    obtaining a plurality of patient images and a plurality of annotated images annotated with tumor regions, the annotated images being images obtained by annotating the patient images, each patient image corresponding one-to-one with an annotated image; and
    training an initial first segmentation model according to the plurality of patient images and the plurality of annotated images to obtain the first segmentation model.
  18. The method according to claim 11, further comprising:
    obtaining a plurality of patient images and a plurality of annotated images annotated with tumor regions, the annotated images being images obtained by annotating the patient images, each patient image corresponding one-to-one with an annotated image;
    obtaining a training sample foreground point distance image and a training sample background point distance image according to each patient image and the corresponding annotated image; and
    training an initial second segmentation model according to the plurality of patient images, the plurality of annotated images, the plurality of training sample foreground point distance images, and the plurality of training sample background point distance images to obtain the second segmentation model.
  19. The method according to claim 11, further comprising:
    obtaining a corrected tumor image, the corrected tumor image being an image obtained by manually correcting the tumor region; and
    training the first segmentation model and the second segmentation model according to the corrected tumor image.
  20. The method according to any one of claims 1 to 19, wherein the method is applied to medical image processing scenarios, the medical image processing scenarios comprising at least a medical image annotation scenario, a pathology image analysis scenario, and a medical tumor processing scenario.
  21. An image processing apparatus, comprising:
    a processing module, configured to input an original image into a first segmentation model and output a first segmented image annotated with an initial target region, the first segmentation model being used to predict the initial target region from the original image;
    a determining module, configured to determine a foreground point and a background point according to the initial target region of the first segmented image;
    an obtaining module, configured to obtain a foreground point distance image and a background point distance image by computing an image distance between each pixel in the original image and the foreground point and the background point, respectively, the image distance being determined from a coordinate distance and a gray-value distance between a pixel and the foreground point or the background point; and
    the processing module being configured to input the original image, the foreground point distance image, and the background point distance image into a second segmentation model and output a second segmented image annotated with a target region, the second segmentation model being used to predict the target region from the original image based on the foreground point distance image and the background point distance image.
  22. The apparatus according to claim 21, wherein the apparatus is used to process breast magnetic resonance imaging (MRI) images;
    the processing module is further configured to input a breast MRI image to be annotated into the first segmentation model and output an initial tumor region, the first segmentation model being used to annotate the initial tumor region in the breast MRI image;
    the determining module is further configured to determine a region of interest (ROI region) from the breast MRI image to be annotated according to the initial tumor region, the ROI region being a valid region for annotating the breast MRI image to be annotated, annotation results outside the ROI region being regarded as invalid;
    the obtaining module is further configured to select a foreground point and a background point in the ROI region, and obtain a foreground point distance image and a background point distance image based on the breast MRI image to be annotated, the foreground point, and the background point; and
    the processing module is further configured to input the breast MRI image to be annotated, the foreground point distance image, and the background point distance image into the second segmentation model and output a tumor region, the second segmentation model being used to annotate the tumor region in the breast MRI image based on the breast MRI image, the foreground point distance image, and the background point distance image.
  23. A server, comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the method according to any one of claims 1 to 20.
  24. A non-volatile storage medium storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the method according to any one of claims 1 to 20.
PCT/CN2020/077772 2019-03-08 2020-03-04 Image processing method, apparatus, server and storage medium WO2020182036A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/221,595 US11715203B2 (en) 2019-03-08 2021-04-02 Image processing method and apparatus, server, and storage medium

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201910746265.9 2019-03-08
CN201910746265.9A CN110458830B (zh) 2019-03-08 2019-03-08 Image processing method, apparatus, server and storage medium
CN201910176668.4A CN109934812B (zh) 2019-03-08 2019-03-08 Image processing method, apparatus, server and storage medium
CN201910176668.4 2019-03-08

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/221,595 Continuation US11715203B2 (en) 2019-03-08 2021-04-02 Image processing method and apparatus, server, and storage medium

Publications (1)

Publication Number Publication Date
WO2020182036A1 true WO2020182036A1 (zh) 2020-09-17

Family

ID=66986586

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/077772 WO2020182036A1 (zh) 2019-03-08 2020-03-04 图像处理方法、装置、服务器及存储介质

Country Status (3)

Country Link
US (1) US11715203B2 (zh)
CN (2) CN110458830B (zh)
WO (1) WO2020182036A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114396911A (zh) * 2021-12-21 2022-04-26 中汽创智科技有限公司 Obstacle ranging method, apparatus, device and storage medium

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458830B (zh) * 2019-03-08 2021-02-09 腾讯科技(深圳)有限公司 Image processing method, apparatus, server and storage medium
CN111008655A (zh) * 2019-11-28 2020-04-14 上海识装信息科技有限公司 Method, apparatus and electronic device for assisting in authenticating the brand of a physical commodity
CN112508969B (zh) * 2020-02-18 2021-12-07 广州柏视医疗科技有限公司 System for repairing breaks in tubular-structure segmentation maps of three-dimensional images based on a deep learning network
CN111768425B (zh) * 2020-07-23 2021-08-10 腾讯科技(深圳)有限公司 Image processing method, apparatus and device
CN112085696B (zh) * 2020-07-24 2024-02-23 中国科学院深圳先进技术研究院 Training method for a medical image segmentation network model, segmentation method, and related device
CN112003999A (zh) * 2020-09-15 2020-11-27 东北大学 Three-dimensional virtual reality synthesis algorithm based on Unity 3D
CN112766338B (zh) * 2021-01-11 2023-06-16 明峰医疗系统股份有限公司 Method, system and computer-readable storage medium for computing a distance image
CN114052704B (zh) * 2021-11-25 2023-04-18 电子科技大学 Parkinson's disease recognition system based on functional network graph energy
WO2023187623A1 (en) * 2022-03-28 2023-10-05 Shenzhen Escope Tech Co., Ltd. A pre-processing method to generate a model for fluid-structure interaction simulation based on image data
CN115170568B (zh) * 2022-09-06 2022-12-02 北京肿瘤医院(北京大学肿瘤医院) Automatic rectal cancer image segmentation method and system, and chemoradiotherapy response prediction system
CN117934855B (zh) * 2024-03-22 2024-07-30 北京壹点灵动科技有限公司 Medical image segmentation method and apparatus, storage medium and electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090069666A1 (en) * 2007-09-11 2009-03-12 Siemens Medical Solutions Usa, Inc. Correction of Intensity Inhomogeneity in Breast MRI
CN105719294A (zh) * 2016-01-21 2016-06-29 中南大学 Automatic segmentation method for mitotic nuclei in breast cancer pathology images
CN107464250A (zh) * 2017-07-03 2017-12-12 深圳市第二人民医院 Automatic breast tumor segmentation method based on three-dimensional MRI images
CN109934812A (zh) * 2019-03-08 2019-06-25 腾讯科技(深圳)有限公司 Image processing method, apparatus, server and storage medium

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7426290B1 (en) * 2003-07-02 2008-09-16 The United States Of America As Represented By The Secretary Of The Navy Nonparametric method for detection and identification of regions of concern in multidimensional intensity images
CN101527047B (zh) * 2008-03-05 2013-02-13 深圳迈瑞生物医疗电子股份有限公司 Method and apparatus for detecting tissue boundaries using ultrasound images
JP5024962B2 (ja) * 2008-07-11 2012-09-12 日本電信電話株式会社 Multi-view distance information encoding method, decoding method, encoding apparatus, decoding apparatus, encoding program, decoding program, and computer-readable recording medium
JP4963492B2 (ja) * 2008-08-08 2012-06-27 トヨタ自動車株式会社 Image segmentation method, program and apparatus
US8625897B2 (en) * 2010-05-28 2014-01-07 Microsoft Corporation Foreground and background image segmentation
WO2012016242A2 (en) * 2010-07-30 2012-02-02 Aureon Biosciences, Inc. Systems and methods for segmentation and processing of tissue images and feature extraction from same for treating, diagnosing, or predicting medical conditions
CN103632361B (zh) * 2012-08-20 2017-01-18 阿里巴巴集团控股有限公司 Image segmentation method and system
CN103544505B (zh) * 2013-07-31 2016-12-28 天津大学 Ship recognition system and method for UAV aerial images
CN105654458A (zh) * 2014-11-14 2016-06-08 华为技术有限公司 Image processing method and apparatus
CN105005980B (zh) * 2015-07-21 2019-02-01 深圳Tcl数字技术有限公司 Image processing method and apparatus
CN106446896B (zh) * 2015-08-04 2020-02-18 阿里巴巴集团控股有限公司 Character segmentation method, apparatus and electronic device
US10869644B2 (en) * 2016-07-30 2020-12-22 Shanghai United Imaging Healthcare Co., Ltd. Method and system for extracting lower limb vasculature
CN106504264B (zh) * 2016-10-27 2019-09-20 锐捷网络股份有限公司 Video foreground image extraction method and apparatus
CN106651885B (zh) * 2016-12-31 2019-09-24 中国农业大学 Image segmentation method and apparatus
CN106874906B (zh) * 2017-01-17 2023-02-28 腾讯科技(上海)有限公司 Picture binarization method, apparatus and terminal
CN106875444B (zh) * 2017-01-19 2019-11-19 浙江大华技术股份有限公司 Target object positioning method and apparatus
CN108694719B (zh) * 2017-04-05 2020-11-03 北京京东尚科信息技术有限公司 Image output method and apparatus
JP6955303B2 (ja) * 2017-04-12 2021-10-27 富士フイルム株式会社 Medical image processing apparatus, method and program
CN107292890A (zh) * 2017-06-19 2017-10-24 北京理工大学 Medical image segmentation method and apparatus
CN108596935A (zh) * 2018-04-24 2018-09-28 安徽锐捷信息科技有限公司 Magnetic resonance image segmentation method and apparatus
CN109360210B (zh) * 2018-10-16 2019-10-25 腾讯科技(深圳)有限公司 Image segmentation method, apparatus, computer device and storage medium
US12079950B2 (en) * 2020-02-14 2024-09-03 Tencent Technology (Shenzhen) Company Limited Image processing method and apparatus, smart microscope, readable storage medium and device
CN111311578B (zh) * 2020-02-17 2024-05-03 腾讯科技(深圳)有限公司 Artificial intelligence-based object classification method and apparatus, and medical imaging device
CN111899252B (zh) * 2020-08-06 2023-10-27 腾讯科技(深圳)有限公司 Artificial intelligence-based pathology image processing method and apparatus
CN112330624A (zh) * 2020-11-02 2021-02-05 腾讯科技(深圳)有限公司 Medical image processing method and apparatus
CN112330688A (zh) * 2020-11-02 2021-02-05 腾讯科技(深圳)有限公司 Artificial intelligence-based image processing method, apparatus and computer device
CN113781387A (zh) * 2021-05-26 2021-12-10 腾讯科技(深圳)有限公司 Model training method, image processing method, apparatus, device and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090069666A1 (en) * 2007-09-11 2009-03-12 Siemens Medical Solutions Usa, Inc. Correction of Intensity Inhomogeneity in Breast MRI
CN105719294A (zh) * 2016-01-21 2016-06-29 中南大学 Automatic segmentation method for mitotic nuclei in breast cancer pathology images
CN107464250A (zh) * 2017-07-03 2017-12-12 深圳市第二人民医院 Automatic breast tumor segmentation method based on three-dimensional MRI images
CN109934812A (zh) * 2019-03-08 2019-06-25 腾讯科技(深圳)有限公司 Image processing method, apparatus, server and storage medium
CN110458830A (zh) * 2019-03-08 2019-11-15 腾讯科技(深圳)有限公司 Image processing method, apparatus, server and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG, LVCHUAN ET AL.: "Tumor Ultrasound Image Segmentation Algorithm Based on Sparse Representation of Superpixel Clustering", CHINESE JOURNAL OF MEDICAL PHYSICS, vol. 32, no. 6, 25 November 2015 (2015-11-25), ISSN: 1005-202X, DOI: 20200529104818Y *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114396911A (zh) * 2021-12-21 2022-04-26 中汽创智科技有限公司 Obstacle ranging method, apparatus, device and storage medium
CN114396911B (zh) * 2021-12-21 2023-10-31 中汽创智科技有限公司 Obstacle ranging method, apparatus, device and storage medium

Also Published As

Publication number Publication date
CN110458830A (zh) 2019-11-15
US20210225003A1 (en) 2021-07-22
CN110458830B (zh) 2021-02-09
US11715203B2 (en) 2023-08-01
CN109934812A (zh) 2019-06-25
CN109934812B (zh) 2022-12-09

Similar Documents

Publication Publication Date Title
WO2020182036A1 (zh) Image processing method, apparatus, server and storage medium
CN109872333B (zh) Medical image segmentation method, apparatus, computer device and storage medium
CN111047609B (zh) Pneumonia lesion segmentation method and apparatus
CN111862044B (zh) Ultrasound image processing method, apparatus, computer device and storage medium
CA3078095A1 (en) Automated classification and taxonomy of 3d teeth data using deep learning methods
CN110246580B (zh) Cephalometric image analysis method and system based on neural networks and random forests
CN113034389B (zh) Image processing method, apparatus, computer device and storage medium
CN109285142B (zh) Head and neck tumor detection method, apparatus and computer-readable storage medium
US11798161B2 (en) Method and apparatus for determining mid-sagittal plane in magnetic resonance images
JP6469731B2 (ja) Optimization of parameters for segmenting an image
US20210065388A1 (en) Registering a two-dimensional image with a three-dimensional image
CN111488872B (zh) Image detection method, apparatus, computer device and storage medium
CN110211200B (zh) Dental arch line generation method and system based on neural network technology
JP2013220319A (ja) Image processing apparatus, method and program
CN111275707A (zh) Pneumonia lesion segmentation method and apparatus
US12020428B2 (en) System and methods for medical image quality assessment using deep neural networks
CN111568451A (zh) Exposure dose adjustment method and system
CN114332132A (zh) Image segmentation method, apparatus and computer device
CN109658425B (zh) Lung lobe segmentation method, apparatus, computer device and storage medium
CN115880358A (zh) Localization model construction method, image landmark localization method and electronic device
JPWO2014030262A1 (ja) Shape data generation program, shape data generation method and shape data generation apparatus
CN112767314A (zh) Medical image processing method, apparatus, device and storage medium
CN113222886B (zh) Jugular bulb fossa and sigmoid sinus groove localization method and intelligent temporal bone image processing system
US20230394791A1 (en) Image processing method, image processing system, and non-transitory computer readable medium
TWI579798B (zh) Efficient and adjustable-weight image segmentation method and program product thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20770994

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20770994

Country of ref document: EP

Kind code of ref document: A1