WO2021189901A1 - Image segmentation method, apparatus, electronic device, and computer-readable storage medium - Google Patents

Image segmentation method, apparatus, electronic device, and computer-readable storage medium

Info

Publication number
WO2021189901A1
WO2021189901A1 (PCT/CN2020/131978; CN2020131978W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
image set
training
segmentation model
feature
Prior art date
Application number
PCT/CN2020/131978
Other languages
English (en)
French (fr)
Inventor
郭冰雪
初晓
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021189901A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Definitions

  • This application relates to the field of image processing technology, and in particular to an image segmentation method, device, electronic equipment, and computer-readable storage medium.
  • Medical image segmentation technology is one of the important topics in the field of medical image processing and analysis, and it is also a hot issue that has attracted much attention from researchers in recent years.
  • the purpose of medical image segmentation is to segment different regions with special meanings in the image, and to make the segmentation result as close to the anatomical structure as possible.
  • The segmentation of medical images plays an important role in the screening of many diseases. For example, during cervical examination, the number of cervical squamous epithelial cell nuclei is needed to assess cervical disease, so the squamous epithelial cell nucleus region of the cervix must be segmented. However, the squamous epithelial cell nucleus region and the squamous cytoplasm region are usually difficult to segment accurately, which makes counting difficult.
  • The inventor realized that the currently adopted method segments the squamous epithelial cell nucleus region and the squamous cytoplasm region by thresholding, which performs very poorly on medical images with complex foreground and background.
  • An image segmentation method, including: acquiring a medical image set, splitting a training image set and a test image set from the medical image set, labeling the training image set and the test image set, and generating a label image set; constructing an image segmentation model based on the Unet network, and using the image segmentation model to perform up-sampling and down-sampling on the training image set to obtain a feature image set; performing binarization on the feature image set to obtain a standard feature set, and calculating the error value between the standard feature set and the label atlas corresponding to the training image set; adjusting the internal parameters of the image segmentation model according to the error value until the error value is less than a preset threshold, to obtain an initial image segmentation model; using the test image set to verify and adjust the initial image segmentation model to obtain a standard image segmentation model; and using the standard image segmentation model to segment the image to be segmented to obtain an image segmentation result.
  • An image segmentation apparatus, including:
  • a data processing module for obtaining a medical image set, segmenting a training image set and a test image set from the medical image set, labeling the training image set and the test image set, and generating a label image set;
  • a model training module for constructing an image segmentation model based on the Unet network, using the image segmentation model to perform up-sampling and down-sampling on the training image set to obtain a feature image set, performing binarization on the feature image set to obtain a standard feature set, and calculating the error value between the standard feature set and the label atlas corresponding to the training image set; adjusting the internal parameters of the image segmentation model according to the error value until the error value is less than a preset threshold, to obtain an initial image segmentation model; and using the test image set to verify and adjust the initial image segmentation model to obtain a standard image segmentation model;
  • the segmentation module is used to perform segmentation processing on the image to be segmented by using the standard image segmentation model to obtain an image segmentation result.
  • An electronic device, including: a memory storing at least one instruction; and a processor that executes the instructions stored in the memory to implement the steps of the image segmentation method described above, in which the standard image segmentation model is used to segment the image to be segmented to obtain an image segmentation result.
  • A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the image segmentation method described above, in which the standard image segmentation model is used to segment the image to be segmented to obtain an image segmentation result.
  • This application can improve the efficiency of the image segmentation method and solve the problem of inaccurate image segmentation.
  • FIG. 1 is a schematic flowchart of an image segmentation method provided by an embodiment of this application
  • FIG. 2 is a schematic flowchart of one of the steps in the image segmentation method provided by an embodiment of the application;
  • FIG. 3 is a schematic flowchart of one of the steps in the image segmentation method provided by an embodiment of the application;
  • FIG. 4 is a schematic flowchart of one of the steps in the image segmentation method provided by an embodiment of the application;
  • FIG. 5 is a schematic diagram of the modules of an image segmentation apparatus provided by an embodiment of the application;
  • FIG. 6 is a schematic diagram of the internal structure of an electronic device for implementing an image segmentation method provided by an embodiment of the application;
  • the embodiment of the present application provides an image segmentation method.
  • the execution subject of the image segmentation method includes, but is not limited to, at least one of the electronic devices that can be configured to execute the method provided in the embodiments of the present application, such as a server and a terminal.
  • the image segmentation method may be executed by software or hardware installed on a terminal device or a server device, and the software may be a blockchain platform.
  • the server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, etc.
  • the image segmentation method includes:
  • the medical image set includes a scanned image of a smear of squamous epithelial cells of the cervix.
  • The acquiring a medical image set includes: acquiring region-scanned medical images and performing a stitching operation on them to obtain a stitched image; and splitting the stitched image to obtain the medical image set.
  • Because traditional medical images are large and the scanners used to acquire them have very high optical resolution, the raw image data is large. To improve processing speed, the embodiment of the present application acquires region-scanned medical images, performs a stitching operation on them to remove duplicate data between the pictures, and further splits the stitched image to obtain small-sized medical images.
  • Splitting the stitched image to obtain a medical image set includes: mapping the stitched image onto a preset two-dimensional coordinate system; and acquiring the coordinate starting point of the stitched image and, according to a preset splitting step, splitting the stitched image from left to right and top to bottom to obtain the medical image set.
  • the coordinate starting point of the stitched image may be the pixel coordinates of the upper left corner of the stitched image
  • the segmentation step may be a preset image length and width
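  • As a rough illustration of the tiling described above, the following is a minimal sketch assuming the stitched image is a NumPy array and the preset splitting step equals the tile size; the function and variable names are illustrative and do not come from the patent:

```python
import numpy as np

def split_stitched_image(stitched: np.ndarray, tile_h: int, tile_w: int) -> list[np.ndarray]:
    """Cut a stitched image (H, W, C) into tiles, scanning left-to-right, top-to-bottom.

    The origin (0, 0) plays the role of the coordinate starting point (top-left pixel),
    and (tile_h, tile_w) plays the role of the preset splitting step.
    """
    h, w = stitched.shape[:2]
    tiles = []
    for top in range(0, h - tile_h + 1, tile_h):          # top to bottom
        for left in range(0, w - tile_w + 1, tile_w):     # left to right
            tiles.append(stitched[top:top + tile_h, left:left + tile_w].copy())
    return tiles

# Example: a 1024x2048 stitched scan cut into 512x512 patches -> 8 tiles
medical_image_set = split_stitched_image(np.zeros((1024, 2048, 3), dtype=np.uint8), 512, 512)
```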
  • A training image set and a test image set are split from the medical image set according to a preset ratio; the training image set is used for subsequent model training, and the test image set is used for subsequent model verification to prevent the model from overfitting during training.
  • the preset ratio may be 7:3.
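  • A minimal sketch of the 7:3 split under the same assumptions (the helper below and its names are illustrative, not from the patent); it shuffles the medical image set and cuts it at the preset ratio:

```python
import random

def split_train_test(images: list, ratio: float = 0.7, seed: int = 0) -> tuple[list, list]:
    """Shuffle the medical image set and split it into training / test subsets (default 7:3)."""
    rng = random.Random(seed)
    shuffled = images[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * ratio)
    return shuffled[:cut], shuffled[cut:]

train_set, test_set = split_train_test(medical_image_set)  # medical_image_set from the tiling sketch above
```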
  • The labeling of the training image set and the test image set to generate a label image set includes: drawing lines along the edges of the regions of interest in the training image set and the test image set with an existing labeling tool to obtain a segmented image set; and performing binarization on the segmented image set to obtain the label image set.
  • the training image set and the test image set are labeled using the existing labeling technology, the edge of the region of interest in the image is drawn, the region of interest is segmented, and the segmented image is obtained.
  • The binarization converts the gray value of the pixels inside the regions of interest in the segmented image set into a preset first gray value, and converts the gray value of the pixels outside the regions of interest into a preset second gray value.
  • For example, the gray value of pixels inside the region of interest is converted to 255 and the gray value of pixels outside the region of interest is converted to 0, so that the region of interest appears white on a black background.
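  • To make the 255/0 convention concrete, the following minimal sketch (assuming the region of interest is given as a boolean NumPy mask; names are illustrative) turns such a mask into a label image that is white inside the region of interest and black outside:

```python
import numpy as np

FIRST_GRAY = 255   # preset first gray value: pixels inside the region of interest
SECOND_GRAY = 0    # preset second gray value: pixels outside the region of interest

def binarize_label(roi_mask: np.ndarray) -> np.ndarray:
    """roi_mask: boolean array (H, W), True inside the drawn region of interest."""
    label = np.full(roi_mask.shape, SECOND_GRAY, dtype=np.uint8)
    label[roi_mask] = FIRST_GRAY
    return label
```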
  • Using the image segmentation model to perform up-sampling and down-sampling on the training image set to obtain a feature image set includes: down-sampling the training image set with the image segmentation model to obtain a down-sampled image set; up-sampling the down-sampled image set with the image segmentation model to obtain an up-sampled image set; and performing feature fusion on the down-sampled image set and the up-sampled image set to obtain the feature image set.
  • Using the image segmentation model to down-sample the training image set to obtain a down-sampled image set includes: using the convolution layers in the image segmentation model to convolve the training image set to obtain a convolution image set; and using the pooling layers in the image segmentation model to pool the convolution image set to obtain the down-sampled image set.
  • The more down-sampling steps are applied, the smaller the scale (i.e., the lower the resolution) of the corresponding images in the down-sampled image set, and the stronger and more distinct their semantic features become.
  • Using the convolution layer in the image segmentation model to convolve the training image set to obtain a convolution image set includes: dividing the training images in the training image set from top to bottom and left to right according to the preset convolution kernel size to obtain multiple training sub-images; multiplying the pixel values in the preset convolution kernel with the pixel values in each training sub-image to obtain pixel product values; summing the pixel product values to obtain a target pixel value; and repeating until the training images in the training image set have all been convolved, to obtain the convolution image set.
  • Convolution is a linear operation. Convolving the training image set suppresses noise and enhances features, so that the pre-built image segmentation model can extract richer feature information and compensate for information lost during down-sampling, such as the loss of internal data structure and spatial hierarchy information.
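  • The divide-multiply-sum procedure above is ordinary single-channel convolution; the naive sketch below is illustrative only, assuming a stride of one and no padding, and is not the patent's implementation:

```python
import numpy as np

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive valid convolution of a single-channel image (H, W) with a kernel (kh, kw)."""
    kh, kw = kernel.shape
    out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w), dtype=np.float32)
    for i in range(out_h):                           # top to bottom
        for j in range(out_w):                       # left to right
            sub = image[i:i + kh, j:j + kw]          # training sub-image under the kernel
            out[i, j] = float(np.sum(sub * kernel))  # pixel products summed -> target pixel value
    return out
```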
  • Using the pooling layer in the image segmentation model to pool the convolution image set to obtain a down-sampled image set includes: dividing each convolution image in the convolution image set into N*N blocks from left to right and top to bottom; and using the pooling layer in the image segmentation model to pool the blocks of the convolution image to obtain a down-sampled image.
  • Pooling performs feature selection and information filtering on the convolution image set; by reducing the feature dimensionality it avoids overfitting to some extent and keeps the features invariant to rotation, translation, and scaling.
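  • A minimal sketch of pooling over non-overlapping N*N blocks follows; max pooling is assumed here because the patent does not state which pooling operation is used:

```python
import numpy as np

def pool_blocks(conv_image: np.ndarray, n: int = 2) -> np.ndarray:
    """Divide a (H, W) convolution image into n*n blocks and keep the max of each block."""
    h, w = conv_image.shape
    h, w = h - h % n, w - w % n                      # drop any ragged border
    blocks = conv_image[:h, :w].reshape(h // n, n, w // n, n)
    return blocks.max(axis=(1, 3))                   # one value per n*n block
```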
  • Up-sampling the down-sampled image set with the image segmentation model to obtain an up-sampled image set includes: padding the down-sampled image set with pixels to obtain a padded image set; dividing the padded image set from top to bottom and left to right according to the preset convolution kernel size to obtain multiple padded sub-image sets; multiplying the pixel values in the preset convolution kernel with the pixel values in the padded sub-image sets to obtain pixel product values; summing the pixel product values to obtain padded pixel values; and repeating until the padded image set has been fully up-sampled, to obtain the up-sampled image set.
  • The edge pixels of the down-sampled image set may not lie at the center of the preset convolution kernel, so they would influence the image segmentation model less than pixels at the center, which is unfavorable for feature extraction. The down-sampled image set is therefore padded with pixels to obtain a padded image set, which simplifies subsequent processing.
  • the image segmentation model performs up-sampling on the down-sampled image set, and restores the image feature information lost in the encoding process.
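  • The padding rationale above can be sketched as follows; the zero padding, the kernel size, and the reuse of the convolve2d helper from the earlier sketch are all assumptions for illustration:

```python
import numpy as np

def pad_and_convolve(down_image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Pad the down-sampled image so edge pixels can sit under the kernel center, then convolve."""
    kh, kw = kernel.shape
    padded = np.pad(down_image, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="constant")
    return convolve2d(padded, kernel)   # convolve2d from the convolution sketch above
```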
  • Performing feature fusion on the down-sampled image set and the up-sampled image set to obtain a feature image set includes: performing a dimension transformation on the down-sampled image set to obtain a transformed image set; and weighting the up-sampled image set with the transformed image set to obtain the feature image set.
  • Specifically, a transposed convolution operation may be used to perform the dimension transformation on the down-sampled image set to obtain the transformed image set.
  • The transposed convolution operation is equivalent to the backward pass of a normal convolution; it can not only enlarge the up-sampled image set spatially but also transform its dimensions according to the number of channels of the up-sampled image set.
  • The up-sampled image set and the transformed image set are then weighted to obtain the feature image set, that is, the transformed image set is used as a weight and multiplied with the up-sampled image set. This weighting prevents the down-sampled image set, which carries more semantic information, from overwriting the detailed information of the up-sampled image set, and helps the fused feature image set retain semantic information similar to that of the up-sampled image set.
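  • A hedged PyTorch sketch of the fusion described above follows: a transposed convolution reshapes the down-sampled (skip) features to the spatial size and channel count of the up-sampled features, and the result is multiplied with them as a weight. The layer sizes and class name are assumptions, not taken from the patent:

```python
import torch
import torch.nn as nn

class FuseBlock(nn.Module):
    """Fuse a down-sampled (skip) feature map with an up-sampled feature map by weighting."""

    def __init__(self, down_channels: int, up_channels: int):
        super().__init__()
        # Transposed convolution: doubles the spatial size of the down-sampled features
        # and maps them to the channel count of the up-sampled features.
        self.transform = nn.ConvTranspose2d(down_channels, up_channels, kernel_size=2, stride=2)

    def forward(self, down_feat: torch.Tensor, up_feat: torch.Tensor) -> torch.Tensor:
        transformed = self.transform(down_feat)   # "transformed image set"
        return up_feat * transformed              # transformed features used as multiplicative weights

fuse = FuseBlock(down_channels=128, up_channels=64)
out = fuse(torch.randn(1, 128, 16, 16), torch.randn(1, 64, 32, 32))   # -> (1, 64, 32, 32)
```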
  • Performing binarization on the feature image set to obtain a standard feature set includes: extracting the region of interest of each feature image in the feature image set; converting the gray value of the pixels inside the region of interest into a preset first gray value and the gray value of the pixels outside the region of interest into a preset second gray value to obtain a binarized image; and collecting the binarized feature images to obtain the standard feature set.
  • Preferably, the first gray value may be 255 and the second gray value may be 0.
  • For example, in the embodiment of the present application the region of interest is the squamous epithelial cell nucleus: the feature image is binarized so that the gray value of pixels lying on the squamous epithelial cell nucleus is converted to 255 and the gray value of pixels outside the nucleus is converted to 0, yielding the binarized image.
  • The embodiment of the present application calculates the error value between the standard feature set and the label atlas with a preset loss function, in which Y denotes the label atlas and α denotes the error factor, a preset constant.
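  • The patent gives the loss only as an image, so the exact formula is not reproduced here. As a stand-in, the sketch below uses a smoothed Dice-style loss in which a constant alpha plays the role of the preset error factor; this is purely an assumed shape for such a loss, not the patent's formula:

```python
import torch

def dice_style_loss(pred: torch.Tensor, label: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Dice-style error between a predicted mask and its label atlas, smoothed by alpha.

    pred, label: tensors in [0, 1] with the same shape. alpha stands in for the
    preset error factor; the patent's actual formula is only shown as an image.
    """
    intersection = (pred * label).sum()
    return 1.0 - (2.0 * intersection + alpha) / (pred.sum() + label.sum() + alpha)
```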
  • The internal parameters of the image segmentation model are adjusted according to the error value, the adjusted image segmentation model is trained again, and the error value is recalculated and compared with the preset threshold, until the error value is less than the preset threshold and the initial image segmentation model is obtained.
  • the internal parameter may be the weight, gradient, etc. of the model.
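  • A minimal training-loop sketch of "adjust the internal parameters until the error value falls below the preset threshold" follows; the model, optimizer, data loader, and threshold value are placeholders rather than values from the patent:

```python
import torch

def train_until_threshold(model, loss_fn, optimizer, train_loader, threshold: float = 0.05,
                          max_steps: int = 10_000):
    """Keep adjusting the model's internal parameters until the error value is below threshold."""
    model.train()
    for step, (images, labels) in enumerate(train_loader):
        if step >= max_steps:
            break
        error = loss_fn(model(images), labels)      # error value vs. the label atlas
        if error.item() < threshold:                # preset threshold reached
            return model                            # initial image segmentation model
        optimizer.zero_grad()
        error.backward()                            # adjust internal parameters (weights/gradients)
        optimizer.step()
    return model
```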
  • The test image set is then used to verify and adjust the initial image segmentation model to obtain a standard image segmentation model.
  • To verify and adjust the initial image segmentation model with the test image set, the test image set is input into the initial image segmentation model to obtain the segmented images it outputs, and these segmented images are compared with the label images corresponding to the test image set. When the similarity is greater than a preset standard, the initial image segmentation model is taken as the standard image segmentation model; when the similarity is less than or equal to the preset standard, the parameters of the initial image segmentation model are adjusted.
  • A model trained on the training image set alone may perform deceptively well on that set, a phenomenon called overfitting. Overfitting tends to degrade the model's generalization performance, so the model cannot be applied well to new data. Verifying and adjusting the initial image segmentation model with the test image set is meant to tune the model: the gap between the metrics on the test image set and the training image set reveals the model's generalization performance, which is used to adjust the model so that it fits new data better.
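  • One way to make "similarity greater than a preset standard" concrete is an intersection-over-union score between the predicted binary masks and the test labels; the metric choice and the 0.8 standard below are assumptions, not values from the patent:

```python
import numpy as np

def iou_similarity(pred_mask: np.ndarray, label_mask: np.ndarray) -> float:
    """IoU between two binary masks (values 0 or 255), used as the similarity score."""
    p, l = pred_mask > 0, label_mask > 0
    union = np.logical_or(p, l).sum()
    return float(np.logical_and(p, l).sum() / union) if union else 1.0

def passes_validation(similarities: list[float], preset_standard: float = 0.8) -> bool:
    """Accept the initial model as the standard model when the average similarity exceeds the standard."""
    return float(np.mean(similarities)) > preset_standard
```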
  • the standard image segmentation model is used to perform segmentation processing on the image to be segmented to obtain an image segmentation result.
  • The image segmentation result obtained with the standard image segmentation model described in the embodiments of the present application segments the contained cell nuclei and presents them as a binary image, which is convenient for counting and observation.
  • The embodiments of the application split the medical image set into a training image set and a test image set and label them to generate the label image set. The training image set is used to train the model, ensuring the accuracy of model training, and the test image set is used to verify the model afterwards and prevent overfitting. The image segmentation model constructed on the Unet network performs up-sampling and down-sampling on the training image set, thereby combining low-resolution and high-resolution image features, and the image features are then binarized to obtain the region of interest, which improves the segmentation of the region of interest.
  • Referring to FIG. 5, it is a schematic diagram of the modules of the image segmentation apparatus of the present application.
  • the image segmentation apparatus 100 described in this application can be installed in an electronic device.
  • the image segmentation device 100 may include a data processing module 101, a model training module 102, and a segmentation module 103.
  • the module described in this application can also be called a unit, which refers to a series of computer program segments that can be executed by the processor of an electronic device and can complete fixed functions, and are stored in the memory of the electronic device.
  • each module/unit is as follows:
  • the data processing module 101 is used to obtain a medical image set, segment the training image set and the test image set from the medical image set, label the training image set and the test image set, and generate a label image set.
  • the medical image set includes a scanned image of a smear of squamous epithelial cells of the cervix.
  • Specifically, the data processing module 101 obtains the medical image set by: acquiring region-scanned medical images and performing a stitching operation on them to obtain a stitched image; and splitting the stitched image to obtain the medical image set.
  • Because traditional medical images are large and scanned at very high optical resolution, the data processing module 101 acquires region-scanned medical images, performs a stitching operation on them to remove duplicate data between the pictures, and further splits the stitched image to obtain small-sized medical images.
  • The data processing module 101 splits the stitched image to obtain a medical image set by: mapping the stitched image onto a preset two-dimensional coordinate system; and obtaining the coordinate starting point of the stitched image and, according to the preset splitting step, splitting the stitched image from left to right and top to bottom to obtain the medical image set.
  • the coordinate starting point of the stitched image may be the pixel coordinates of the upper left corner of the stitched image
  • the segmentation step may be a preset image length and width
  • The data processing module 101 splits a training image set and a test image set from the medical image set according to a preset ratio; the training image set is used for subsequent model training, and the test image set is used for subsequent model verification to prevent the model from overfitting during training.
  • the preset ratio may be 7:3.
  • The data processing module 101 labels the training image set and the test image set to generate a label image set by: drawing lines along the edges of the regions of interest in the training image set and the test image set with an existing labeling tool to obtain a segmented image set; and performing binarization on the segmented image set to obtain the label image set.
  • the training image set and the test image set are labeled using the existing labeling technology, the edge of the region of interest in the image is drawn, the region of interest is segmented, and the segmented image is obtained.
  • The binarization converts the gray value of the pixels inside the regions of interest in the segmented image set into a preset first gray value, and converts the gray value of the pixels outside the regions of interest into a preset second gray value. For example, the gray value of pixels inside the region of interest is converted to 255 and the gray value of pixels outside is converted to 0, so that the region of interest appears white on a black background.
  • the model training module 102 is configured to construct an image segmentation model based on the Unet network, and use the image segmentation model to perform up-sampling and down-sampling processing on the training image set to obtain a feature image set.
  • The model training module 102 uses the image segmentation model to perform up-sampling and down-sampling on the training image set to obtain a feature image set by: down-sampling the training image set to obtain a down-sampled image set; up-sampling the down-sampled image set to obtain an up-sampled image set; and performing feature fusion on the down-sampled image set and the up-sampled image set to obtain the feature image set.
  • Using the image segmentation model to down-sample the training image set to obtain a down-sampled image set includes: using the convolution layers in the image segmentation model to convolve the training image set to obtain a convolution image set; and using the pooling layers in the image segmentation model to pool the convolution image set to obtain the down-sampled image set.
  • The more down-sampling steps are applied, the smaller the scale (i.e., the lower the resolution) of the corresponding images in the down-sampled image set, and the stronger and more distinct their semantic features become.
  • Using the convolution layer in the image segmentation model to convolve the training image set to obtain a convolution image set includes: dividing the training images in the training image set from top to bottom and left to right according to the preset convolution kernel size to obtain multiple training sub-images; multiplying the pixel values in the preset convolution kernel with the pixel values in each training sub-image to obtain pixel product values; summing the pixel product values to obtain a target pixel value; and repeating until the training images in the training image set have all been convolved, to obtain the convolution image set.
  • Convolution is a linear operation. Convolving the training image set suppresses noise and enhances features, so that the pre-built image segmentation model can extract richer feature information and compensate for information lost during down-sampling, such as the loss of internal data structure and spatial hierarchy information.
  • Using the pooling layer in the image segmentation model to pool the convolution image set to obtain a down-sampled image set includes: dividing each convolution image in the convolution image set into N*N blocks from left to right and top to bottom; and using the pooling layer in the image segmentation model to pool the blocks of the convolution image to obtain a down-sampled image.
  • Pooling performs feature selection and information filtering on the convolution image set; by reducing the feature dimensionality it avoids overfitting to some extent and keeps the features invariant to rotation, translation, and scaling.
  • Up-sampling the down-sampled image set with the image segmentation model to obtain an up-sampled image set includes: padding the down-sampled image set with pixels to obtain a padded image set; dividing the padded image set from top to bottom and left to right according to the preset convolution kernel size to obtain multiple padded sub-image sets; multiplying the pixel values in the preset convolution kernel with the pixel values in the padded sub-image sets to obtain pixel product values; summing the pixel product values to obtain padded pixel values; and repeating until the padded image set has been fully up-sampled, to obtain the up-sampled image set.
  • The edge pixels of the down-sampled image set may not lie at the center of the preset convolution kernel, so they would influence the image segmentation model less than pixels at the center, which is unfavorable for feature extraction. The down-sampled image set is therefore padded with pixels to obtain a padded image set, which simplifies subsequent processing.
  • The image segmentation model up-samples the down-sampled image set and restores the image feature information lost during encoding.
  • Performing feature fusion on the down-sampled image set and the up-sampled image set to obtain a feature image set includes: performing a dimension transformation on the down-sampled image set to obtain a transformed image set; and weighting the up-sampled image set with the transformed image set to obtain the feature image set. Specifically, a transposed convolution operation may be used for the dimension transformation.
  • The transposed convolution operation is equivalent to the backward pass of a normal convolution; it can not only enlarge the up-sampled image set spatially but also transform its dimensions according to the number of channels of the up-sampled image set.
  • The up-sampled image set and the transformed image set are then weighted to obtain the feature image set, that is, the transformed image set is used as a weight and multiplied with the up-sampled image set. This weighting prevents the down-sampled image set, which carries more semantic information, from overwriting the detailed information of the up-sampled image set, and helps the fused feature image set retain semantic information similar to that of the up-sampled image set.
  • the model training module 102 is further configured to perform binarization processing on the feature image set to obtain a standard feature set, and calculate the error value between the standard feature set and the label atlas corresponding to the training image set .
  • The model training module 102 binarizes the feature image set to obtain a standard feature set by: extracting the region of interest of each feature image in the feature image set; converting the gray value of the pixels inside the region of interest into a preset first gray value and the gray value of the pixels outside the region of interest into a preset second gray value to obtain a binarized image; and collecting the binarized feature images to obtain the standard feature set.
  • Preferably, the first gray value may be 255 and the second gray value may be 0.
  • For example, the region of interest is the squamous epithelial cell nucleus: the gray value of pixels lying on the squamous epithelial cell nucleus in the feature image is converted to 255 and the gray value of pixels outside the nucleus is converted to 0, yielding the binarized image.
  • The embodiment of the present application calculates the error value between the standard feature set and the label atlas with a preset loss function, in which Y denotes the label atlas and α denotes the error factor, a preset constant.
  • the model training module 102 is further configured to adjust the internal parameters of the image segmentation model according to the error value until the error value is less than a preset threshold value to obtain an initial image segmentation model.
  • Specifically, the model training module 102 adjusts the internal parameters of the image segmentation model according to the error value, trains with the adjusted image segmentation model, and recalculates the error value and compares it with the preset threshold, until the error value is less than the preset threshold and the initial image segmentation model is obtained.
  • the internal parameter may be the weight, gradient, etc. of the model.
  • the model training module 102 is further configured to use the test image set to verify and adjust the initial image segmentation model to obtain a standard image segmentation model.
  • The model training module 102 verifies and adjusts the initial image segmentation model with the test image set by inputting the test image set into the initial image segmentation model to obtain the segmented images it outputs and comparing them with the label images corresponding to the test image set. When the similarity is greater than a preset standard, the initial image segmentation model is taken as the standard image segmentation model; when the similarity is less than or equal to the preset standard, the parameters of the initial image segmentation model are adjusted.
  • A model trained on the training image set alone may perform deceptively well on that set, a phenomenon called overfitting. Overfitting tends to degrade the model's generalization performance, so the model cannot be applied well to new data. Verifying and adjusting the initial image segmentation model with the test image set is meant to tune the model: the gap between the metrics on the test image set and the training image set reveals the model's generalization performance, which is used to adjust the model so that it fits new data better.
  • the segmentation module 103 is configured to use the standard image segmentation model to perform segmentation processing on the image to be segmented to obtain an image segmentation result.
  • the segmentation module 103 uses the standard image segmentation model to perform segmentation processing on the image to be segmented to obtain an image segmentation result.
  • The image segmentation result obtained with the standard image segmentation model described in the embodiments of the present application segments the contained cell nuclei and presents them as a binary image, which is convenient for counting and observation.
  • The embodiments of the application split the medical image set into a training image set and a test image set and label them to generate the label image set. The training image set is used to train the model, ensuring the accuracy of model training, and the test image set is used to test the model afterwards, which makes it easy to adjust the model. The pre-built image segmentation model up-samples and down-samples the training image set to obtain a feature image set; the feature image set is binarized to obtain the standard feature set, whose error value with respect to the label atlas corresponding to the training image set is calculated; the internal parameters of the image segmentation model are adjusted according to the error value, and the test image set is used for verification and adjustment to obtain the standard image segmentation model, which finally segments the image to be segmented to obtain the image segmentation result. The image segmentation method, apparatus, and computer-readable storage medium proposed in this application can therefore improve the efficiency of image segmentation and solve the problem of inaccurate segmentation.
  • Referring to FIG. 6, it is a schematic structural diagram of an electronic device implementing the image segmentation method of the present application.
  • the electronic device 1 may include a processor 10, a memory 11, and a bus, and may also include a computer program stored in the memory 11 and running on the processor 10, such as an image segmentation program 12.
  • the memory 11 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, mobile hard disk, multimedia card, card-type memory (such as SD or DX memory, etc.), magnetic memory, magnetic disk, CD etc.
  • the memory 11 may be an internal storage unit of the electronic device 1 in some embodiments, for example, a mobile hard disk of the electronic device 1.
  • The memory 11 may also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the electronic device 1.
  • the memory 11 may also include both an internal storage unit of the electronic device 1 and an external storage device.
  • the memory 11 can be used not only to store application software and various data installed in the electronic device 1, such as the code of the image segmentation program 12, etc., but also to temporarily store data that has been output or will be output.
  • The processor 10 may be composed of integrated circuits in some embodiments, for example a single packaged integrated circuit, or multiple packaged integrated circuits with the same or different functions, including combinations of one or more central processing units (CPU), microprocessors, digital processing chips, graphics processors, and various control chips.
  • the processor 10 is the control unit of the electronic device, which uses various interfaces and lines to connect the various components of the entire electronic device, and runs or executes programs or modules stored in the memory 11 (such as executing Image segmentation programs, etc.), and call data stored in the memory 11 to execute various functions of the electronic device 1 and process data.
  • the bus may be a peripheral component interconnect standard (PCI) bus or an extended industry standard architecture (EISA) bus, etc.
  • the bus can be divided into address bus, data bus, control bus and so on.
  • the bus is configured to implement connection and communication between the memory 11 and at least one processor 10 and the like.
  • FIG. 6 only shows an electronic device with some of its components. Those skilled in the art will understand that the structure shown in FIG. 6 does not limit the electronic device 1, which may include fewer or more components than shown, combine certain components, or arrange the components differently.
  • For example, although not shown, the electronic device 1 may also include a power source (such as a battery) for supplying power to the various components. Preferably, the power source may be logically connected to the at least one processor 10 through a power management device, which implements functions such as charge management, discharge management, and power consumption management. The power source may also include one or more DC or AC power supplies, recharging devices, power failure detection circuits, power converters or inverters, power status indicators, and other components.
  • the electronic device 1 may also include various sensors, Bluetooth modules, Wi-Fi modules, etc., which will not be repeated here.
  • the electronic device 1 may also include a network interface.
  • The network interface may include a wired interface and/or a wireless interface (such as a Wi-Fi interface or a Bluetooth interface), and is usually used to establish a communication connection between the electronic device 1 and other electronic devices.
  • the electronic device 1 may also include a user interface.
  • the user interface may be a display (Display) and an input unit (such as a keyboard (Keyboard)).
  • the user interface may also be a standard wired interface or a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light-emitting diode) touch device, etc.
  • the display can also be appropriately called a display screen or a display unit, which is used to display the information processed in the electronic device 1 and to display a visualized user interface.
  • The image segmentation program 12 stored in the memory 11 of the electronic device 1 is a combination of multiple instructions which, when run by the processor 10, implement the steps of the image segmentation method described above, in which the standard image segmentation model is used to segment the image to be segmented to obtain an image segmentation result.
  • If the integrated module/unit of the electronic device 1 is implemented as a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. The computer-readable storage medium may be volatile or non-volatile and may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a mobile hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).
  • The computer-usable storage medium may mainly include a storage program area and a storage data area. The storage program area may store an operating system and the computer program required by at least one function; the storage data area may store data created from the use of blockchain nodes. When the computer program is executed by a processor, it implements the steps of the image segmentation method described above, in which the standard image segmentation model is used to segment the image to be segmented to obtain an image segmentation result.
  • all the above-mentioned data can also be stored in a node of a blockchain.
  • For example, the images to be segmented, the medical image sets, or the feature image sets can all be stored in blockchain nodes.
  • The blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database, a chain of data blocks linked by cryptographic methods; each data block contains a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.
  • modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional modules in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional modules.

Abstract

An image segmentation method, comprising: splitting a medical image set into a training image set and a test image set and labeling them to generate a label image set; using an image segmentation model to perform up-sampling, down-sampling, and binarization on the training image set to obtain a standard feature image set; adjusting the internal parameters of the image segmentation model according to the error value between the standard feature image set and the label image set to obtain an initial image segmentation model; verifying and adjusting the initial image segmentation model with the test image set to obtain a standard image segmentation model; and segmenting the image to be segmented with the standard image segmentation model to obtain an image segmentation result. The method also relates to blockchain technology: the label image set can be stored in a blockchain. The method can improve the accuracy of image segmentation.

Description

图像分割方法、装置、电子设备及计算机可读存储介质
本申请要求于2020年10月15日提交中国专利局、申请号为CN202011103254.8,发明名称为“图像分割方法、装置、电子设备及计算机可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及图像处理技术领域,尤其涉及一种图像分割方法、装置、电子设备及计算机可读存储介质。
背景技术
医学图像分割技术是医学图像处理与分析领域的重要课题之一,也是近年来备受研究人员关注的热点问题。医学图像分割的目的是把图像中具有特殊含义的不同区域分割开来,并使分割结果尽可能的接近解剖结构。
医学图像的分割对于很多疾病的筛查都具有很重要的作用,例如在进行宫颈检查时,需要根据宫颈的鳞状上皮细胞核的数量来判断女性宫颈病状,因此,需要分割出宫颈的鳞状上皮细胞核区域,然而鳞状上皮细胞核区域和鳞状细胞质区域通常难以精准分割,导致计数困难。
发明人意识到目前采用的方法是通过阈值对鳞状上皮细胞核区域和鳞状细胞质区域进行分割这种分割方法,对于前景和背景复杂的医学图片效果十分不理想。
发明内容
一种图像分割方法,包括:
获取医学图像集,从所述医学图像集中分割出训练图像集及测试图像集,对所述训练图像集及测试图像集进行标签标记,生成标签图像集;
基于Unet网络构建图像分割模型,利用所述图像分割模型对所述训练图像集执行上采样及下采样处理,得到特征图像集;
对所述特征图像集执行二值化处理,得到标准特征集,并计算所述标准特征集和所述训练图像集对应的标签图集之间的误差值;
根据所述误差值调整所述图像分割模型的内部参数,直到所述误差值小于预设阈值,得到初始图像分割模型;
利用所述测试图像集对所述初始图像分割模型进行验证调整,得到标准图像分割模型;
利用所述标准图像分割模型对待分割图像进行分割处理,得到图像分割结果。
一种图像分割方法装置,所述装置包括:
数据处理模块,用于获取医学图像集,从所述医学图像集中分割出训练图像集及测试图像集,对所述训练图像集及测试图像集进行标签标记,生成标签图像集;
模型训练模块,用于基于Unet网络构建图像分割模型,利用所述图像分割模型对所述训练图像集执行上采样及下采样处理,得到特征图像集,所述特征图像集执行二值化处理,得到标准特征集,并计算所述标准特征集和所述训练图像集对应的标签图集之间的误差值;根据所述误差值调整所述图像分割模型的内部参数,直到所述误差值小于预设阈值,得到初始图像分割模型;利用所述测试图像集对所述初始图像分割模型进行验证调整,得到标准图像分割模型;
分割模块,用于利用所述标准图像分割模型对待分割图像进行分割处理,得到图像分割结果。
一种电子设备,所述电子设备包括:
存储器,存储至少一个指令;及
处理器,执行所述存储器中存储的指令以实现如下步骤:
获取医学图像集,从所述医学图像集中分割出训练图像集及测试图像集,对所述训练图像集及测试图像集进行标签标记,生成标签图像集;
基于Unet网络构建图像分割模型,利用所述图像分割模型对所述训练图像集执行上采样及下采样处理,得到特征图像集;
对所述特征图像集执行二值化处理,得到标准特征集,并计算所述标准特征集和所述训练图像集对应的标签图集之间的误差值;
根据所述误差值调整所述图像分割模型的内部参数,直到所述误差值小于预设阈值,得到初始图像分割模型;
利用所述测试图像集对所述初始图像分割模型进行验证调整,得到标准图像分割模型;
利用所述标准图像分割模型对待分割图像进行分割处理,得到图像分割结果。
一种计算机可读存储介质,存储有计算机程序,所述计算机程序被处理器执行时实现如下步骤:
获取医学图像集,从所述医学图像集中分割出训练图像集及测试图像集,对所述训练图像集及测试图像集进行标签标记,生成标签图像集;
基于Unet网络构建图像分割模型,利用所述图像分割模型对所述训练图像集执行上采样及下采样处理,得到特征图像集;
对所述特征图像集执行二值化处理,得到标准特征集,并计算所述标准特征集和所述训练图像集对应的标签图集之间的误差值;
根据所述误差值调整所述图像分割模型的内部参数,直到所述误差值小于预设阈值,得到初始图像分割模型;
利用所述测试图像集对所述初始图像分割模型进行验证调整,得到标准图像分割模型;
利用所述标准图像分割模型对待分割图像进行分割处理,得到图像分割结果。
本申请可以提高图像分割方法的效率,解决图像分割不精确的问题。
附图说明
图1为本申请一实施例提供的图像分割方法的流程示意图;
图2为本申请一实施例提供的图像分割方法中其中一个步骤的流程示意图;
图3为本申请一实施例提供的图像分割方法中其中一个步骤的流程示意图;
图4为本申请一实施例提供的图像分割方法中其中一个步骤的流程示意图;
图5为本申请一实施例提供的图像分割方法的模块示意图;
图6为本申请一实施例提供的实现图像分割方法的电子设备的内部结构示意图;
本申请目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。
具体实施方式
应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请实施例提供一种图像分割方法。所述图像分割方法的执行主体包括但不限于服务端、终端等能够被配置为执行本申请实施例提供的该方法的电子设备中的至少一种。换言之,所述图像分割方法可以由安装在终端设备或服务端设备的软件或硬件来执行,所述软件可以是区块链平台。所述服务端包括但不限于:单台服务器、服务器集群、云端服务器或云端服务器集群等。
参照图1所示,为本申请实施例提供的图像分割方法的流程示意图。本实施例中,所述图像分割方法包括:
S1、获取医学图像集,从所述医学图像集中分割出训练图像集及测试图像集,对所述训练图像集及测试图像集进行标签标记,生成标签图像集。
在本申请实施例中,所述医学图像集包含宫颈的鳞状上皮细胞涂片的扫描图片。
具体地,参阅图2所示,所述获取医学图像集,包括:
S101、获取区域扫描的医学图像,对所述区域扫描的医学图像执行拼接操作,得到拼接图像;
S102、对所述拼接图像进行切分处理,得到医学图像集。
由于传统的医学图像的尺寸较大且用于扫描所述医学图像的扫描仪光学分辨率很高,导致传统的医学图像数据较大,为了提高计算机的处理速度,本申请实施例获取区域扫描的医学图像,并对所述区域扫描的医学图像执行拼接操作以去除图片之间的重复数据,并进一步地,对所述拼接图像进行切分,得到小尺寸的医学图像。
具体地,所述对所述拼接图像进行切分处理,得到医学图像集,包括:将所述拼接图像映射到预设的二维坐标系上;获取所述拼接图像的坐标起始点,并根据预设的切分步长,按照从左往右,由上至下的顺序对所述拼接图像进行切分,得到医学图像集。
其中,本申请实施例中,所述拼接图像的坐标起始点可以是拼接图像的左上角像素坐标,以及所述切分步长可以是预设的图像长宽。
进一步地,本申请实施例按照预设的比例从所述医学图像集中分割出训练图像集及测试图像集,所述训练图像集可用于后续的模型训练,以及所述测试图像集可用于后续的模型验证,以防止模型在训练过程中产生的过拟合。
优选地,所述预设的比例可以为7:3。
进一步地,所述对所述训练图像集及测试图像集进行标签标记,生成标签图像集,包括:
利用现有的标记技术对所述训练图像集及测试图像集中的感兴趣区域进行边缘画线,得到分割图像集;
对所述分割图像集进行二值化处理,得到标签图像集。
详细地,利用现有的标记技术对所述训练图像集及测试图像集进行标记,对图像中的感兴趣区域边缘画线,分割出感兴趣区域,得到分割图像。
具体地,所述对所述分割图像集进行二值化处理,令所述分割图像集中的感兴趣区域上的像素点的灰度值转化为预设的第一灰度值,将所述分割图像中所述感兴趣区域之外的像素点的灰度值转化为预设的第二灰度值。例如,将所述感兴趣区域上的像素点的灰度值转化为255,以及将所述分割图像中所述感兴趣区域之外的像素点的灰度值转化为0,使得感兴趣区域为白色,背景为黑色。
S2、基于Unet网络构建图像分割模型,利用所述图像分割模型对所述训练图像集执行上采样及下采样处理,得到特征图像集。
本申请实施例中,参阅图3所示,所述利用所述图像分割模型对所述训练图像集执行上采样及下采样处理,得到特征图像集,包括:
S21、利用所述图像分割模型对所述训练图像集进行下采样,得到下采样图像集;
S22、利用所述图像分割模型对所述下采样图像集进行上采样,得到上采样图像集;
S23、对所述下采样图像集和所述上采样图像集进行特征融合,得到特征图像集。
具体地,所述利用所述图像分割模型对所述训练图像集进行下采样,得到下采样图像集,包括:
利用所述图像分割模型中的卷积层对所述训练图像集进行卷积处理,得到卷积图像集;
利用所述图像分割模型中的池化层对所述卷积图像集进行池化处理,得到下采样图像集。
其中,向下采样的次数越多,对应生成的所述下采样图像集中图像的尺度越小,即分 辨率越低,所述下采样图像集中的图像的语义特征越强,特征越明显。
进一步地,所述利用所述图像分割模型中的卷积层对所述训练图像集进行卷积处理,得到卷积图像集,包括:
根据预设的卷积核的大小,按照从上往下,从左往右的顺序划分所述训练图像集中的训练图像,得到多个训练子图像;
将所述预设的卷积核中的像素值与所述训练子图像中的像素值进行相乘,得到像素乘积值;
对所述像素乘积值进行求和,得到目标像素值;
直到所述训练图像集中的训练图像完成卷积操作,得到所述卷积图像集。
其中,卷积处理是一种线性运算,对所述训练图像集进行所述卷积处理可以消除噪声、增强特征,使得所述预构建的图像分割模型能够提取到更丰富的特征信息,弥补下采样过程中内部数据结构丢失,空间层级信息丢失等信息损失。
具体地,所述利用所述图像分割模型中的池化层对所述卷积图像集进行池化处理,得到下采样图像集,包括:
将所述卷积图像集中的卷积图像按照从左至右,从上至下的顺序划分出N*N的区块;
利用所述图像分割模型中的池化层对所述卷积图像中的若干区块进行池化处理,得到下采样图像。
详细地,所述池化处理能对所述卷积图像集进行特征选择和信息过滤,通过降低特征的维度在一定程度上避免过拟合,保持旋转、平移、伸缩不变形。
进一步地,所述利用所述图像分割模型对所述下采样图像集进行上采样,得到上采样图像集,包括:
对所述下采样图像集进行填充像素处理,得到填充图像集;
根据预设的卷积核的大小,按照从上往下,从左往右的顺序划分所述填充图像集,得到多个填充子图像集;
将所述预设的卷积核中的像素值与所述填充子图像集中的像素值进行相乘,得到像素乘积值;
对所述像素乘积值进行求和,得到填充像素值;
直到所述填充图像集完成上采样处理,得到所述上采样图像集。
详细地,所述下采样图像集的边缘像素可能不会位于预设的卷积核的中心,这样会使得所述下采样图像集的边缘像素对所述图像分割模型的影响小于位于中心点的像素的影响,不利于抽取特征,故需要对所述下采样图像集进行填充像素处理,得到填充图像集,方便后续处理。
所述图像分割模型对所述下采样图像集进行上采样,恢复编码过程中损失的图像特征信息。
进一步地,本申请实施例中,所述对所述下采样图像集和所述上采样图像集进行特征融合,得到特征图像集,包括:
对所述下采样图像集进行维度变换处理,得到变换图像集;
将所述上采样图像集与所述变换图像集进行加权处理,得到特征图像集。
具体地,可利用转置卷积操作对所述下采样图像集进行维度变换处理,得到变换图像集,所述转置卷积操作相当于正常卷积操作的反向传播,转置卷积操作不仅可对所述上采样图像集进行空间上的放大,还可根据所述上采样图像集的通道数对上采样图像集进行维度变换。
进一步地,将所述上采样图像集与所述变换图像集进行加权处理,得到特征图像集,即将所述变换图像集作为权重与所述上采样图像集进行相乘,这种加权方式可避免包含有 较多语义信息的下采样图像集覆盖上采样图像集的细节信息,有助于使融合后的特征图像集包含着与所述上采样图像集相似的语义信息。
S3、对所述特征图像集执行二值化处理,得到标准特征集,并计算所述标准特征集和所述训练图像集对应的标签图集之间的误差值。
在本申请实施例中,参阅图4所示,所述对所述特征图像集执行二值化处理,得到标准特征集,包括:
S31、提取所述特征图像集中特征图像的感兴趣区域;
S32、将所述感兴趣区域上的像素点的灰度值转化为预设的第一灰度值,将所述特征图像中所述感兴趣区域之外的像素点的灰度值转化为预设的第二灰度值,得到二值化图像;
S33、当所述特征图像集中的特征图像经过二值化处理,得到标准特征集。
较佳地,所述第一灰度值可以为255,所述第二灰度值可以为0。
例如,在本申请实施例中,所述感兴趣区域是鳞状上皮细胞核,对所述特征图像进行二值化处理,将所述特征图像中位于鳞状上皮细胞核上的像素点的灰度值转化为255,位于鳞状上皮细胞核之外的像素点的灰度值转化为0,得到二值化图像。
进一步地，本申请实施例利用预设的损失函数对所述标准特征集与标签图集进行误差值计算，得到误差值，其中 Y 为所述标签图集，α 表示误差因子，为预设常数（公式见原始图像 PCTCN2020131978-appb-000001）。
S4、判断所述误差值是否小于预设阈值。当所述误差值大于或者等于预设阈值时,执行S5、调整所述图像分割模型的内部参数,并返回上述的S2,直到所述判断所述误差值小于所述预设阈值时,执行S6,得到初始图像分割模型
本申请实施例中,根据所述误差值调整所述图像分割模型的内部参数,利用调整后的图像分割模型进行训练,计算所述误差值并于预设阈值进行比较,直到所述误差值小于预设阈值,得到初始图像分割模型。
优选地,所述内部参数可以是模型的权重、梯度等。
S7、利用所述测试图像集对所述初始图像分割模型进行验证调整,得到标准图像分割模型。
本申请实施例中,所述利用所述测试图像集对所述初始图像分割模型进行验证调整,将所述测试图像集输入至所述初始图像分割模型中,得到测试图像集输出的分割图像,将所述测试图像集输出的分割图像与测试图像集对应的标签图像进行对比,当相似度大于预设标准时,所述初始图像分割模型即为标准图像分割模型,当相似度小于或等于预设标准时,对所述初始图像分割模型进行参数调整。
详细地,将所述训练图像集输入至所述初始图像分割模型中进行训练,通常可能表现过好,此现象称之为过拟合,所述过拟合很可能导致模型的泛化性能差,不能很好的应用到新的数据上。利用所述测试图像集对所述初始图像分割模型进行验证调整就是为了调节模型,可以从指标上对比测试图像集和训练图像集的差距,了解模型的泛化性能并以此来调节模型使其能够更好的拟合新的数据。
S8、利用所述标准图像分割模型对待分割图像进行分割处理,得到图像分割结果。
本申请实施例中,利用所述标准图像分割模型对待分割图像进行分割处理,得到图像分割结果。
利用本申请实施例中所述标准图像分割模型得到的图像分割结果可以将所包含的细胞核分割出来,同时以二值图的形式呈现,方便计数和观察。
本申请实施例对医学图像集进行分割,得到训练图像集及测试图像集并进行标签标记,生成标签图像集,得到的训练图像集用于训练模型,保证模型训练的精确性,测试图像集用于后续对模型进行验证,防止模型过拟合,基于Unet网络构建图像分割模型对所述训练图像集进行上采样、下采样处理,从而结合了低分辨率和高分辨率的图像特征,进一步对图像特征执行二值化处理得到感兴趣区域,从而提高了感兴趣区域的分割效果。
如图5所示,是本申请图像分割装置的模块示意图。
本申请所述图像分割装置100可以安装于电子设备中。根据实现的功能,所述图像分割装置100可以包括数据处理模块101、模型训练模块102和分割模块103。本申请所述模块也可以称之为单元,是指一种能够被电子设备处理器所执行,并且能够完成固定功能的一系列计算机程序段,其存储在电子设备的存储器中。
在本实施例中,关于各模块/单元的功能如下:
所述数据处理模块101,用于获取医学图像集,从所述医学图像集中分割出训练图像集及测试图像集,对所述训练图像集及测试图像集进行标签标记,生成标签图像集。
在本申请实施例中,所述医学图像集包含宫颈的鳞状上皮细胞涂片的扫描图片。
具体地,所述数据处理模块101采用下述操作获取所述医学图像集:
获取区域扫描的医学图像,对所述区域扫描的医学图像执行拼接操作,得到拼接图像;
对所述拼接图像进行切分处理,得到医学图像集。
由于传统的医学图像的尺寸较大且用于扫描所述医学图像的扫描仪光学分辨率很高,导致传统的医学图像数据较大,为了提高计算机的处理速度,本申请实施例所述数据处理模块101获取区域扫描的医学图像,并对所述区域扫描的医学图像执行拼接操作以去除图片之间的重复数据,并进一步地,对所述拼接图像进行切分,得到小尺寸的医学图像。
具体地,所述数据处理模块101对所述拼接图像进行切分处理,得到医学图像集,包括:将所述拼接图像映射到预设的二维坐标系上;获取所述拼接图像的坐标起始点,并根据预设的切分步长,按照从左往右,由上至下的顺序对所述拼接图像进行切分,得到医学图像集。
其中,本申请实施例中,所述拼接图像的坐标起始点可以是拼接图像的左上角像素坐标,以及所述切分步长可以是预设的图像长宽。
进一步地,本申请实施例所述数据处理模块101按照预设的比例从所述医学图像集中分割出训练图像集及测试图像集,所述训练图像集可用于后续的模型训练,以及所述测试图像集可用于后续的模型验证,以防止模型在训练过程中产生的过拟合。
优选地,所述预设的比例可以为7:3。
进一步地,所述数据处理模块101对所述训练图像集及测试图像集进行标签标记,生成标签图像集,包括:利用现有的标记技术对所述训练图像集及测试图像集中的感兴趣区域进行边缘画线,得到分割图像集;
对所述分割图像集进行二值化处理,得到标签图像集。
详细地,利用现有的标记技术对所述训练图像集及测试图像集进行标记,对图像中的感兴趣区域边缘画线,分割出感兴趣区域,得到分割图像。
具体地,所述对所述分割图像集进行二值化处理,令所述分割图像集中的感兴趣区域上的像素点的灰度值转化为预设的第一灰度值,将所述分割图像中所述感兴趣区域之外的像素点的灰度值转化为预设的第二灰度值。例如,将所述感兴趣区域上的像素点的灰度值转化为255,以及将所述分割图像中所述感兴趣区域之外的像素点的灰度值转化为0,使得感兴趣区域为白色,背景为黑色。
所述模型训练模块102,用于基于Unet网络构建图像分割模型,利用所述图像分割模型对所述训练图像集执行上采样及下采样处理,得到特征图像集。
本申请实施例中,所述模型训练模块102利用所述图像分割模型对所述训练图像集执行上采样及下采样处理,得到特征图像集,包括:
利用所述图像分割模型对所述训练图像集进行下采样,得到下采样图像集;
利用所述图像分割模型对所述下采样图像集进行上采样,得到上采样图像集;
对所述下采样图像集和所述上采样图像集进行特征融合,得到特征图像集。
具体地,所述利用所述图像分割模型对所述训练图像集进行下采样,得到下采样图像集,包括:
利用所述图像分割模型中的卷积层对所述训练图像集进行卷积处理,得到卷积图像集;
利用所述图像分割模型中的池化层对所述卷积图像集进行池化处理,得到下采样图像集。
其中,向下采样的次数越多,对应生成的所述下采样图像集中图像的尺度越小,即分辨率越低,所述下采样图像集中的图像的语义特征越强,特征越明显。
进一步地,所述利用所述图像分割模型中的卷积层对所述训练图像集进行卷积处理,得到卷积图像集,包括:
根据预设的卷积核的大小,按照从上往下,从左往右的顺序划分所述训练图像集中的训练图像,得到多个训练子图像;
将所述预设的卷积核中的像素值与所述训练子图像中的像素值进行相乘,得到像素乘积值;
对所述像素乘积值进行求和,得到目标像素值;
直到所述训练图像集中的训练图像完成卷积操作,得到所述卷积图像集。
其中,卷积处理是一种线性运算,对所述训练图像集进行所述卷积处理可以消除噪声、增强特征,使得所述预构建的图像分割模型能够提取到更丰富的特征信息,弥补下采样过程中内部数据结构丢失,空间层级信息丢失等信息损失。
具体地,所述利用所述图像分割模型中的池化层对所述卷积图像集进行池化处理,得到下采样图像集,包括:
将所述卷积图像集中的卷积图像按照从左至右,从上至下的顺序划分出N*N的区块;
利用所述图像分割模型中的池化层对所述卷积图像中的若干区块进行池化处理,得到下采样图像。
详细地,所述池化处理能对所述卷积图像集进行特征选择和信息过滤,通过降低特征的维度在一定程度上避免过拟合,保持旋转、平移、伸缩不变形。
进一步地,所述利用所述图像分割模型对所述下采样图像集进行上采样,得到上采样图像集,包括:
对所述下采样图像集进行填充像素处理,得到填充图像集;
根据预设的卷积核的大小,按照从上往下,从左往右的顺序划分所述填充图像集,得到多个填充子图像集;
将所述预设的卷积核中的像素值与所述填充子图像集中的像素值进行相乘,得到像素乘积值;
对所述像素乘积值进行求和,得到填充像素值;
直到所述填充图像集完成上采样处理,得到所述上采样图像集。
详细地,所述下采样图像集的边缘像素可能不会位于预设的卷积核的中心,这样会使得所述下采样图像集的边缘像素对所述图像分割模型的影响小于位于中心点的像素的影响,不利于抽取特征,故需要对所述下采样图像集进行填充像素处理,得到填充图像集,方便后续处理。
所述图像分割模型对所述下采样图像集进行上采样,恢复编码过程中损失的图像特征信息。
进一步地,本申请实施例中,所述对所述下采样图像集和所述上采样图像集进行特征融合,得到特征图像集,包括:
对所述下采样图像集进行维度变换处理,得到变换图像集;
将所述上采样图像集与所述变换图像集进行加权处理,得到特征图像集。
具体地,可利用转置卷积操作对所述下采样图像集进行维度变换处理,得到变换图像集,所述转置卷积操作相当于正常卷积操作的反向传播,转置卷积操作不仅可对所述上采样图像集进行空间上的放大,还可根据所述上采样图像集的通道数对上采样图像集进行维度变换。
进一步地,将所述上采样图像集与所述变换图像集进行加权处理,得到特征图像集,即将所述变换图像集作为权重与所述上采样图像集进行相乘,这种加权方式可避免包含有较多语义信息的下采样图像集覆盖上采样图像集的细节信息,有助于使融合后的特征图像集包含着与所述上采样图像集相似的语义信息。
所述模型训练模块102,还用于对所述特征图像集执行二值化处理,得到标准特征集,并计算所述标准特征集和所述训练图像集对应的标签图集之间的误差值。
在本申请实施例中,所述模型训练模块102对所述特征图像集执行二值化处理,得到标准特征集,包括:
提取所述特征图像集中特征图像的感兴趣区域;
将所述感兴趣区域上的像素点的灰度值转化为预设的第一灰度值,将所述特征图像中所述感兴趣区域之外的像素点的灰度值转化为预设的第二灰度值,得到二值化图像;
当所述特征图像集中的特征图像经过二值化处理,得到标准特征集。
较佳地,所述第一灰度值可以为255,所述第二灰度值可以为0。
例如,在本申请实施例中,所述感兴趣区域是鳞状上皮细胞核,对所述特征图像进行二值化处理,将所述特征图像中位于鳞状上皮细胞核上的像素点的灰度值转化为255,位于鳞状上皮细胞核之外的像素点的灰度值转化为0,得到二值化图像。
进一步地，本申请实施例利用预设的损失函数对所述标准特征集与标签图集进行误差值计算，得到误差值，其中 Y 为所述标签图集，α 表示误差因子，为预设常数（公式见原始图像 PCTCN2020131978-appb-000004）。
所述模型训练模块102还用于根据所述误差值调整所述图像分割模型的内部参数,直到所述误差值小于预设阈值,得到初始图像分割模型。
本申请实施例中,所述模型训练模块102根据所述误差值调整所述图像分割模型的内部参数,利用调整后的图像分割模型进行训练,计算所述误差值并于预设阈值进行比较,直到所述误差值小于预设阈值,得到初始图像分割模型。
优选地,所述内部参数可以是模型的权重、梯度等。
所述模型训练模块102,还用于利用所述测试图像集对所述初始图像分割模型进行验证调整,得到标准图像分割模型。
本申请实施例中,所述模型训练模块102利用所述测试图像集对所述初始图像分割模型进行验证调整,将所述测试图像集输入至所述初始图像分割模型中,得到测试图像集输出的分割图像,将所述测试图像集输出的分割图像与测试图像集对应的标签图像进行对比,当相似度大于预设标准时,所述初始图像分割模型即为标准图像分割模型,当相似度小于或等于预设标准时,对所述初始图像分割模型进行参数调整。
详细地,将所述训练图像集输入至所述初始图像分割模型中进行训练,通常可能表现过好,此现象称之为过拟合,所述过拟合很可能导致模型的泛化性能差,不能很好的应用到新的数据上。利用所述测试图像集对所述初始图像分割模型进行验证调整就是为了调节模型,可以从指标上对比测试图像集和训练图像集的差距,了解模型的泛化性能并以此来调节模型使其能够更好的拟合新的数据。
所述分割模块103用于利用所述标准图像分割模型对待分割图像进行分割处理,得到图像分割结果。
本申请实施例中,所述分割模块103利用所述标准图像分割模型对待分割图像进行分割处理,得到图像分割结果。
利用本申请实施例中所述标准图像分割模型得到的图像分割结果可以将所包含的细胞核分割出来,同时以二值图的形式呈现,方便计数和观察。
本申请实施例对医学图像集进行分割,得到训练图像集及测试图像集并进行标签标记,生成标签图像集,得到的训练图像集用于训练模型,保证模型训练的精确性,测试图像集用于后续对模型进行测试处理,便于对模型进行调整,利用预构建的图像分割模型对所述训练图像集进行上采样及下采样处理,得到特征图像集,对所述特征图像集进行二值化处理,得到标准特征集并计算其与所述训练图像集对应的标签图集之前的误差值,根据所述误差值调整所述图像分割模型的内部参数并利用测试图像集进行验证调整,得到标准图像分割模型并对待分割图像进行分割处理,得到图像分割结果。因此本申请提出的图像分割方法、装置及计算机可读存储介质,可以提高图像分割方法的效率,解决图像分割不精确的问题。
如图6所示,是本申请实现图像分割方法的电子设备的结构示意图。
所述电子设备1可以包括处理器10、存储器11和总线,还可以包括存储在所述存储器11中并可在所述处理器10上运行的计算机程序,如图像分割程序12。
其中,所述存储器11至少包括一种类型的可读存储介质,所述可读存储介质包括闪存、移动硬盘、多媒体卡、卡型存储器(例如:SD或DX存储器等)、磁性存储器、磁盘、光盘等。所述存储器11在一些实施例中可以是电子设备1的内部存储单元,例如该电子设备1的移动硬盘。所述存储器11在另一些实施例中也可以是电子设备1的外部存储设备,例如电子设备1上配备的插接式移动硬盘、智能存储卡(Smart Media Card,SMC)、安全数字(Secure Digital,SD)卡、闪存卡(Flash Card)等。进一步地,所述存储器11还可以既包括电子设备1的内部存储单元也包括外部存储设备。所述存储器11不仅可以用于存储安装于电子设备1的应用软件及各类数据,例如图像分割程序12的代码等,还可以用于暂时地存储已经输出或者将要输出的数据。
所述处理器10在一些实施例中可以由集成电路组成,例如可以由单个封装的集成电路所组成,也可以是由多个相同功能或不同功能封装的集成电路所组成,包括一个或者多个中央处理器(Central Processing unit,CPU)、微处理器、数字处理芯片、图形处理器及各种控制芯片的组合等。所述处理器10是所述电子设备的控制核心(Control Unit),利用各种接口和线路连接整个电子设备的各个部件,通过运行或执行存储在所述存储器11内的程序或者模块(例如执行图像分割程序等),以及调用存储在所述存储器11内的数据,以执行电子设备1的各种功能和处理数据。
所述总线可以是外设部件互连标准(peripheral component interconnect,简称PCI)总线或扩展工业标准结构(extended industry standard architecture,简称EISA)总线等。该总线可以分为地址总线、数据总线、控制总线等。所述总线被设置为实现所述存储器11以及至少一个处理器10等之间的连接通信。
图6仅示出了具有部件的电子设备,本领域技术人员可以理解的是,图6示出的结构并不构成对所述电子设备1的限定,可以包括比图示更少或者更多的部件,或者组合某些 部件,或者不同的部件布置。
例如,尽管未示出,所述电子设备1还可以包括给各个部件供电的电源(比如电池),优选地,电源可以通过电源管理装置与所述至少一个处理器10逻辑相连,从而通过电源管理装置实现充电管理、放电管理、以及功耗管理等功能。电源还可以包括一个或一个以上的直流或交流电源、再充电装置、电源故障检测电路、电源转换器或者逆变器、电源状态指示器等任意组件。所述电子设备1还可以包括多种传感器、蓝牙模块、Wi-Fi模块等,在此不再赘述。
进一步地,所述电子设备1还可以包括网络接口,可选地,所述网络接口可以包括有线接口和/或无线接口(如WI-FI接口、蓝牙接口等),通常用于在该电子设备1与其他电子设备之间建立通信连接。
可选地,该电子设备1还可以包括用户接口,用户接口可以是显示器(Display)、输入单元(比如键盘(Keyboard)),可选地,用户接口还可以是标准的有线接口、无线接口。可选地,在一些实施例中,显示器可以是LED显示器、液晶显示器、触控式液晶显示器以及OLED(Organic Light-Emitting Diode,有机发光二极管)触摸器等。其中,显示器也可以适当的称为显示屏或显示单元,用于显示在电子设备1中处理的信息以及用于显示可视化的用户界面。
应该了解,所述实施例仅为说明之用,在专利申请范围上并不受此结构的限制。
所述电子设备1中的所述存储器11存储的图像分割程序12是多个指令的组合,在所述处理器10中运行时,可以实现:
获取医学图像集,从所述医学图像集中分割出训练图像集及测试图像集,对所述训练图像集及测试图像集进行标签标记,生成标签图像集;
基于Unet网络构建图像分割模型,利用所述图像分割模型对所述训练图像集执行上采样及下采样处理,得到特征图像集;
对所述特征图像集执行二值化处理,得到标准特征集,并计算所述标准特征集和所述训练图像集对应的标签图集之间的误差值;
根据所述误差值调整所述图像分割模型的内部参数,直到所述误差值小于预设阈值,得到初始图像分割模型;
利用所述测试图像集对所述初始图像分割模型进行验证调整,得到标准图像分割模型;
利用所述标准图像分割模型对待分割图像进行分割处理,得到图像分割结果。
进一步地,所述电子设备1集成的模块/单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。所述计算机可读存储介质可以是易失性的,也可以是非易失性的,所述计算机可读介质可以包括:能够携带所述计算机程序代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、只读存储器(ROM,Read-Only Memory)。
进一步地,所述计算机可用存储介质可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的计算机程序等;存储数据区可存储根据区块链节点的使用所创建的数据等,所述计算机程序被处理器执行时,可以实现:
获取医学图像集,从所述医学图像集中分割出训练图像集及测试图像集,对所述训练图像集及测试图像集进行标签标记,生成标签图像集;
基于Unet网络构建图像分割模型,利用所述图像分割模型对所述训练图像集执行上采样及下采样处理,得到特征图像集;
对所述特征图像集执行二值化处理,得到标准特征集,并计算所述标准特征集和所述训练图像集对应的标签图集之间的误差值;
根据所述误差值调整所述图像分割模型的内部参数,直到所述误差值小于预设阈值,得到初始图像分割模型;
利用所述测试图像集对所述初始图像分割模型进行验证调整,得到标准图像分割模型;
利用所述标准图像分割模型对待分割图像进行分割处理,得到图像分割结果。
本申请之计算机可读存储介质的具体实施方式与上述图像分割方法的具体实施方式大致相同,在此不再赘述。
在另一个实施例中,本申请所提供的图像分割方法,为进一步保证上述所有出现的数据的私密和安全性,上述所有数据还可以存储于一区块链的节点中。例如待分割图像、医学图像集或特征图像集等等,这些数据均可存储在区块链节点中。
需要说明的是,本申请所指区块链是分布式数据存储、点对点传输、共识机制、加密算法等计算机技术的新型应用模式。区块链(Blockchain),本质上是一个去中心化的数据库,是一串使用密码学方法相关联产生的数据块,每一个数据块中包含了一批次网络交易的信息,用于验证其信息的有效性(防伪)和生成下一个区块。区块链可以包括区块链底层平台、平台产品服务层以及应用服务层等。
在本申请所提供的几个实施例中,应该理解到,所揭露的设备,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能模块的形式实现。
对于本领域技术人员而言,显然本申请不限于上述示范性实施例的细节,而且在不背离本申请的精神或基本特征的情况下,能够以其他的具体形式实现本申请。
因此,无论从哪一点来看,均应将实施例看作是示范性的,而且是非限制性的,本申请的范围由所附权利要求而不是上述说明限定,因此旨在将落在权利要求的等同要件的含义和范围内的所有变化涵括在本申请内。不应将权利要求中的任何附关联图表记视为限制所涉及的权利要求。
此外,显然“包括”一词不排除其他单元或步骤,单数不排除复数。系统权利要求中陈述的多个单元或装置也可以由一个单元或装置通过软件或者硬件来实现。第二等词语用来表示名称,而并不表示任何特定的顺序。
最后应说明的是,以上实施例仅用以说明本申请的技术方案而非限制,尽管参照较佳实施例对本申请进行了详细说明,本领域的普通技术人员应当理解,可以对本申请的技术方案进行修改或等同替换,而不脱离本申请技术方案的精神和范围。

Claims (20)

  1. 一种图像分割方法,其中,所述方法包括:
    获取医学图像集,从所述医学图像集中分割出训练图像集及测试图像集,对所述训练图像集及测试图像集进行标签标记,生成标签图像集;
    基于Unet网络构建图像分割模型,利用所述图像分割模型对所述训练图像集执行上采样及下采样处理,得到特征图像集;
    对所述特征图像集执行二值化处理,得到标准特征集,并计算所述标准特征集和所述训练图像集对应的标签图集之间的误差值;
    根据所述误差值调整所述图像分割模型的内部参数,直到所述误差值小于预设阈值,得到初始图像分割模型;
    利用所述测试图像集对所述初始图像分割模型进行验证调整,得到标准图像分割模型;
    利用所述标准图像分割模型对待分割图像进行分割处理,得到图像分割结果。
  2. 如权利要求1所述的图像分割方法,其中,所述获取医学图像集,包括:
    获取区域扫描的医学图像,对所述区域扫描的医学图像执行拼接操作,得到拼接图像;
    对所述拼接图像进行切分处理,得到医学图像集。
  3. 如权利要求2所述的图像分割方法,其中,所述对所述拼接图像进行切分处理,得到医学图像集,包括:
    将所述拼接图像映射到预设的二维坐标系上;
    获取所述拼接图像的坐标起始点,并根据预设的切分步长,按照从左往右,由上至下的顺序对所述拼接图像进行切分,得到医学图像集。
  4. 如权利要求1所述的图像分割方法,其中,所述利用所述图像分割模型对所述训练图像集执行上采样及下采样处理,得到特征图像集,包括:
    利用所述图像分割模型对所述训练图像集进行下采样,得到下采样图像集;
    利用所述图像分割模型对所述下采样图像集进行上采样,得到上采样图像集;
    对所述下采样图像集和所述上采样图像集进行特征融合,得到特征图像集。
  5. 如权利要求4所述的图像分割方法,其中,所述对所述训练图像集进行下采样,得到下采样图像集,包括:
    利用所述图像分割模型中的卷积层对所述训练图像集进行卷积处理,得到卷积图像集;
    利用所述图像分割模型中的池化层对所述卷积图像集进行池化处理,得到下采样图像集。
  6. 如权利要求5所述的图像分割方法,其中,所述利用所述图像分割模型中的卷积层对所述训练图像集进行卷积处理,得到卷积图像集,包括:
    根据预设的卷积核的大小,按照从上往下,从左往右的顺序划分所述训练图像集中的训练图像,得到多个训练子图像;
    将所述预设的卷积核中的像素值与所述训练子图像中的像素值进行相乘,得到像素乘积值;
    对所述像素乘积值进行求和,得到目标像素值;
    直到所述训练图像集中的训练图像完成卷积操作,得到所述卷积图像集。
  7. 如权利要求1至6中任意一项所述的图像分割方法,其中,所述对所述特征图像集执行二值化处理,得到标准特征集,包括:
    提取所述特征图像集中特征图像的感兴趣区域;
    将所述感兴趣区域上的像素点的灰度值转化为预设的第一灰度值,将所述特征图像中所述感兴趣区域之外的像素点的灰度值转化为预设的第二灰度值,得到标准特征集。
  8. 一种图像分割方法装置,其中,所述装置包括:
    数据处理模块,用于获取医学图像集,从所述医学图像集中分割出训练图像集及测试图像集,对所述训练图像集及测试图像集进行标签标记,生成标签图像集;
    模型训练模块,用于基于Unet网络构建图像分割模型,利用所述图像分割模型对所述训练图像集执行上采样及下采样处理,得到特征图像集,所述特征图像集执行二值化处理,得到标准特征集,并计算所述标准特征集和所述训练图像集对应的标签图集之间的误差值;根据所述误差值调整所述图像分割模型的内部参数,直到所述误差值小于预设阈值,得到初始图像分割模型;利用所述测试图像集对所述初始图像分割模型进行验证调整,得到标准图像分割模型;
    分割模块,利用所述标准图像分割模型对待分割图像进行分割处理,得到图像分割结果。
  9. An electronic device, wherein the electronic device comprises:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the following steps:
    acquiring a medical image set, partitioning the medical image set into a training image set and a test image set, and labeling the training image set and the test image set to generate a label image set;
    constructing an image segmentation model based on a Unet network, and performing up-sampling and down-sampling on the training image set with the image segmentation model to obtain a feature image set;
    performing binarization on the feature image set to obtain a standard feature set, and calculating an error value between the standard feature set and the label image set corresponding to the training image set;
    adjusting internal parameters of the image segmentation model according to the error value until the error value is smaller than a preset threshold, to obtain an initial image segmentation model;
    verifying and adjusting the initial image segmentation model with the test image set to obtain a standard image segmentation model; and
    segmenting an image to be segmented with the standard image segmentation model to obtain an image segmentation result.
  10. The electronic device according to claim 9, wherein the acquiring a medical image set comprises:
    acquiring region-scanned medical images, and performing a stitching operation on the region-scanned medical images to obtain a stitched image; and
    slicing the stitched image to obtain the medical image set.
  11. The electronic device according to claim 10, wherein the slicing the stitched image to obtain the medical image set comprises:
    mapping the stitched image onto a preset two-dimensional coordinate system; and
    obtaining a coordinate starting point of the stitched image, and slicing the stitched image from left to right and from top to bottom according to a preset slicing stride, to obtain the medical image set.
  12. The electronic device according to claim 9, wherein the performing up-sampling and down-sampling on the training image set with the image segmentation model to obtain a feature image set comprises:
    down-sampling the training image set with the image segmentation model to obtain a down-sampled image set;
    up-sampling the down-sampled image set with the image segmentation model to obtain an up-sampled image set; and
    performing feature fusion on the down-sampled image set and the up-sampled image set to obtain the feature image set.
  13. The electronic device according to claim 12, wherein the down-sampling the training image set to obtain a down-sampled image set comprises:
    performing convolution on the training image set with a convolution layer in the image segmentation model to obtain a convolved image set; and
    performing pooling on the convolved image set with a pooling layer in the image segmentation model to obtain the down-sampled image set.
  14. The electronic device according to claim 13, wherein the performing convolution on the training image set with the convolution layer in the image segmentation model to obtain a convolved image set comprises:
    dividing a training image in the training image set from top to bottom and from left to right according to the size of a preset convolution kernel, to obtain a plurality of training sub-images;
    multiplying pixel values in the preset convolution kernel by pixel values in the training sub-images to obtain pixel product values;
    summing the pixel product values to obtain a target pixel value; and
    repeating the above until the training images in the training image set have completed the convolution operation, to obtain the convolved image set.
  15. The electronic device according to any one of claims 9 to 14, wherein the performing binarization on the feature image set to obtain a standard feature set comprises:
    extracting a region of interest of a feature image in the feature image set; and
    converting gray values of pixels in the region of interest into a preset first gray value, and converting gray values of pixels outside the region of interest in the feature image into a preset second gray value, to obtain the standard feature set.
  16. A computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the following steps are implemented:
    acquiring a medical image set, partitioning the medical image set into a training image set and a test image set, and labeling the training image set and the test image set to generate a label image set;
    constructing an image segmentation model based on a Unet network, and performing up-sampling and down-sampling on the training image set with the image segmentation model to obtain a feature image set;
    performing binarization on the feature image set to obtain a standard feature set, and calculating an error value between the standard feature set and the label image set corresponding to the training image set;
    adjusting internal parameters of the image segmentation model according to the error value until the error value is smaller than a preset threshold, to obtain an initial image segmentation model;
    verifying and adjusting the initial image segmentation model with the test image set to obtain a standard image segmentation model; and
    segmenting an image to be segmented with the standard image segmentation model to obtain an image segmentation result.
  17. The computer-readable storage medium according to claim 16, wherein the acquiring a medical image set comprises:
    acquiring region-scanned medical images, and performing a stitching operation on the region-scanned medical images to obtain a stitched image; and
    slicing the stitched image to obtain the medical image set.
  18. The computer-readable storage medium according to claim 17, wherein the slicing the stitched image to obtain the medical image set comprises:
    mapping the stitched image onto a preset two-dimensional coordinate system; and
    obtaining a coordinate starting point of the stitched image, and slicing the stitched image from left to right and from top to bottom according to a preset slicing stride, to obtain the medical image set.
  19. The computer-readable storage medium according to claim 16, wherein the performing up-sampling and down-sampling on the training image set with the image segmentation model to obtain a feature image set comprises:
    down-sampling the training image set with the image segmentation model to obtain a down-sampled image set;
    up-sampling the down-sampled image set with the image segmentation model to obtain an up-sampled image set; and
    performing feature fusion on the down-sampled image set and the up-sampled image set to obtain the feature image set.
  20. The computer-readable storage medium according to claim 19, wherein the down-sampling the training image set to obtain a down-sampled image set comprises:
    performing convolution on the training image set with a convolution layer in the image segmentation model to obtain a convolved image set; and
    performing pooling on the convolved image set with a pooling layer in the image segmentation model to obtain the down-sampled image set.
PCT/CN2020/131978 2020-10-15 2020-11-26 Image segmentation method and apparatus, electronic device, and computer-readable storage medium WO2021189901A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011103254.8A CN112233125B (zh) 2020-10-15 2020-10-15 Image segmentation method and apparatus, electronic device, and computer-readable storage medium
CN202011103254.8 2020-10-15

Publications (1)

Publication Number Publication Date
WO2021189901A1 true WO2021189901A1 (zh) 2021-09-30

Family

ID=74113756

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/131978 WO2021189901A1 (zh) 2020-10-15 2020-11-26 Image segmentation method and apparatus, electronic device, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN112233125B (zh)
WO (1) WO2021189901A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113065607A (zh) * 2021-04-20 2021-07-02 平安国际智慧城市科技股份有限公司 Image detection method and apparatus, electronic device, and medium
CN112884770B (zh) * 2021-04-28 2021-07-02 腾讯科技(深圳)有限公司 Image segmentation processing method and apparatus, and computer device
CN114119640B (zh) * 2022-01-27 2022-04-22 广东皓行科技有限公司 Model training method, image segmentation method, and image segmentation system
CN115170807B (zh) * 2022-09-05 2022-12-02 浙江大华技术股份有限公司 Image segmentation and model training method, apparatus, device, and medium
CN117648632B (zh) * 2024-01-29 2024-05-03 杭州海康威视数字技术股份有限公司 Optical fiber vibration anomaly identification method, apparatus, device, and computer program product

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110234400B (zh) * 2016-09-06 2021-09-07 医科达有限公司 Neural network for generating synthetic medical images
CN110838124B (zh) * 2017-09-12 2021-06-18 深圳科亚医疗科技有限公司 Method, system, and medium for segmenting images of objects with a sparse distribution
CN109461495B (zh) * 2018-11-01 2023-04-14 腾讯科技(深圳)有限公司 Medical image recognition method, model training method, and server

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170083793A1 (en) * 2015-09-18 2017-03-23 Htc Corporation Method, electronic apparatus, and computer readable medium of constructing classifier for skin-infection detection
CN107622492A (zh) * 2017-06-30 2018-01-23 上海联影医疗科技有限公司 Lung fissure segmentation method and system
CN108986106A (zh) * 2017-12-15 2018-12-11 浙江中医药大学 Automatic retinal vessel segmentation method for clinical glaucoma diagnosis
CN109948707A (zh) * 2019-03-20 2019-06-28 腾讯科技(深圳)有限公司 Model training method, apparatus, terminal, and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115641443A (zh) * 2022-12-08 2023-01-24 北京鹰瞳科技发展股份有限公司 Method for training an image segmentation network model, method for processing images, and product
CN115641443B (zh) * 2022-12-08 2023-04-11 北京鹰瞳科技发展股份有限公司 Method for training an image segmentation network model, method for processing images, and product
CN117372433A (zh) * 2023-12-08 2024-01-09 菲沃泰纳米科技(深圳)有限公司 Thickness parameter control method and apparatus, device, and storage medium
CN117372433B (zh) * 2023-12-08 2024-03-08 菲沃泰纳米科技(深圳)有限公司 Thickness parameter control method and apparatus, device, and storage medium

Also Published As

Publication number Publication date
CN112233125B (zh) 2023-06-02
CN112233125A (zh) 2021-01-15

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20927777

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20927777

Country of ref document: EP

Kind code of ref document: A1