WO2022036972A1 - Image segmentation method and apparatus, electronic device and storage medium - Google Patents

Image segmentation method and apparatus, electronic device and storage medium

Info

Publication number
WO2022036972A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
processed
pixel
target object
segmentation result
Prior art date
Application number
PCT/CN2020/138131
Other languages
English (en)
Chinese (zh)
Inventor
韩泓泽
刘星龙
黄宁
孙辉
张少霆
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Priority to KR1020227001101A (KR20220012407A)
Priority to JP2021576593A (JP2022548453A)
Publication of WO2022036972A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/94Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20036Morphological image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30028Colon; Small intestine
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30092Stomach; Gastric

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular, to an image segmentation method and device, an electronic device and a storage medium.
  • Image segmentation refers to the technology and process of dividing an image into several specific regions with unique properties and proposing objects of interest. Image segmentation is a key step from image processing to image analysis. Image segmentation methods in the related art are mainly divided into the following categories: threshold-based segmentation methods, region-based segmentation methods, edge-based segmentation methods, and specific theory-based segmentation methods.
  • the present disclosure provides an image segmentation method and device, an electronic device and a storage medium.
  • an image segmentation method comprising:
  • according to the edge information of the target object in the image to be processed, the pixel values of the predicted pixels that do not belong to the target object in the enclosed area included in the edge of the target object are adjusted to obtain a first segmentation result corresponding to the image to be processed.
  • in this way, a preliminary segmented image corresponding to the image to be processed is obtained, and according to the edge information of the target object in the image to be processed, the pixel values of the predicted pixels in the preliminary segmented image that do not belong to the target object but lie in the enclosed area included in the edge of the target object are adjusted to obtain the first segmentation result corresponding to the to-be-processed image, so that a more accurate and robust segmentation result can be obtained.
  • the predicted pixel value of the pixel belonging to the target object is a first preset value
  • the predicted pixel value of the pixel not belonging to the target object is a second preset value
  • adjusting the pixel values of the predicted pixels that do not belong to the target object to obtain the first segmentation result corresponding to the to-be-processed image includes:
  • the pixel values of the filled preliminary segmented image are adjusted to obtain the first segmentation result corresponding to the to-be-processed image.
  • a filled preliminary segmented image is obtained by adjusting the pixel value of the closed area whose pixel value is the second preset value in the preliminary segmented image to the first preset value, so that the first segmentation result corresponding to the to-be-processed image can cover the inside of the organs of the target object, for example, the lung parenchyma, the inside of the digestive tract (e.g., the gastrointestinal tract), and the like. That is, by adopting the above-mentioned implementation manner, the missing holes in the target object (for example, in the human body) after image segmentation can be filled.
  • in this way, the first segmentation result corresponding to the to-be-processed image can be obtained, thereby reducing the probability that the background part in the image to be processed (i.e., the part that does not belong to the target object) is segmented as belonging to the target object.
  • adjusting the pixel value of the enclosed area whose pixel value is the second preset value in the preliminary segmented image to the first preset value to obtain the filled preliminary segmented image includes:
  • in this way, it can be ensured that the seed point of the flood filling operation belongs to the background part (that is, the part that does not belong to the target object), so that the first segmentation result corresponding to the image to be processed can cover the inside of the organs of the target object, thereby obtaining a more accurate segmentation result.
  • adjusting the pixel value of the filled preliminary segmented image according to the edge information of the target object in the to-be-processed image to obtain the first segmentation result corresponding to the to-be-processed image includes:
  • according to the edge information of the target object in the image to be processed, determine the maximum connected domain included in the edge of the target object in the filled preliminary segmented image;
  • the pixel values of the pixels outside the maximum connected region in the filled preliminary segmented image are adjusted to the second preset value to obtain the first segmentation result corresponding to the to-be-processed image.
  • false positive regions that are not connected to the target object can be eliminated, thereby greatly reducing the probability of erroneously classifying the background part as belonging to the target object, thereby improving the accuracy of image segmentation.
  • the target object is a human body
  • false positive regions that are not connected to the human body can be eliminated, thereby greatly reducing the probability that the background part (eg, bed board, etc.) is erroneously classified as belonging to the human body.
  • the method further includes:
  • the continuity of the image to be processed and the second segmentation result can be ensured, thereby helping to obtain a smoother and more accurate three-dimensional segmentation result.
  • the target object is a human body
  • the continuity of the image to be processed and the human body in the adjacent images can be ensured, thereby helping to obtain a smoother and more accurate three-dimensional human body segmentation result.
  • the segmentation result corresponding to each CT image in the CT image sequence can be obtained by using this implementation manner, thereby obtaining a smoother and more accurate three-dimensional human body segmentation result.
  • adjusting the first segmentation result according to the pixel values of the pixels at the same position in the image to be processed and the adjacent image, and the second segmentation result, to obtain the third segmentation result corresponding to the image to be processed includes:
  • the first segmentation result is adjusted to obtain the third segmentation result corresponding to the image to be processed.
  • the third segmentation result corresponding to the image to be processed is obtained, so that the first segmentation result corresponding to the image to be processed can be adjusted according to the segmentation results of the pixels in the adjacent images that are strongly correlated with the image to be processed, thereby helping to improve the accuracy of the final segmentation result corresponding to the image to be processed.
  • adjusting the first segmentation result according to the pixels that belong to the target object in the second segmentation result and whose pixel value difference from the pixel at the same position in the image to be processed is less than or equal to the third preset value, to obtain the third segmentation result corresponding to the image to be processed, includes:
  • a first pixel set is obtained according to the pixels whose pixel value difference at the same position in the image to be processed and the adjacent image is less than or equal to a third preset value; a second pixel set is obtained according to the pixels of the first pixel set that belong to the target object in the second segmentation result; and the pixels of the second pixel set in the first segmentation result are adjusted to belong to the target object, to obtain the third segmentation result corresponding to the image to be processed. In this way, the first segmentation result can be adjusted according to the pixels in the second segmentation result that belong to the target object and are strongly correlated with the image to be processed, thereby helping to improve the accuracy of the final segmentation result corresponding to the image to be processed.
  • the method further includes: training a neural network according to the training image and the labeling data of the training image, wherein the labeling data of the training image includes the true values of the pixels belonging to the target object in the training image;
  • the predicting the pixels belonging to the target object in the image to be processed to obtain a preliminary segmented image corresponding to the image to be processed includes: inputting the image to be processed into the neural network, predicting the information of the pixels belonging to the target object in the to-be-processed image through the neural network, and obtaining a preliminary segmented image corresponding to the to-be-processed image according to the information of the pixels belonging to the target object in the to-be-processed image.
  • the part of the image to be processed that belongs to the target object is predicted by the neural network.
  • the image to be processed is a CT image
  • the target object is a human body
  • this implementation does not attempt to remove the various bed boards in the CT image, that is, it no longer focuses on the non-human-body part, but on the segmentation of the human body in the CT image, which ensures the accuracy and robustness of the segmentation results under a large amount of special-shaped bed board data. That is, even if the image to be processed contains a special-shaped bed board, an accurate and robust segmentation result can be obtained by adopting this implementation manner.
  • the training image is a computed tomography (CT) image
  • the training of the neural network according to the training image and the labeled data of the training image includes: normalizing the pixel values of the training image according to a preset CT value range to obtain a normalized training image;
  • and training the neural network according to the normalized training image and the labeled data of the training image.
  • the pixel values of the training image are normalized according to a preset CT value range to obtain a normalized training image, and the neural network is trained according to the normalized training image and the labeled data of the training image, thereby helping to reduce the computational load of the neural network and improve its convergence speed.
  • an image segmentation method comprising:
  • according to the pixel values of the pixels at the same position in the image to be processed and the adjacent image, and the second segmentation result, the preliminary segmented image is adjusted to obtain a fourth segmentation result corresponding to the to-be-processed image.
  • adjusting the preliminary segmented image to obtain the fourth segmentation result corresponding to the image to be processed includes:
  • adjusting the preliminary segmented image according to the pixels that belong to the target object in the second segmentation result and whose pixel value difference from the pixel at the same position in the image to be processed is less than or equal to the third preset value, to obtain a fourth segmentation result corresponding to the to-be-processed image, includes:
  • the pixels of the second pixel set in the preliminary segmented image are adjusted to belong to the target object, and a fourth segmentation result corresponding to the to-be-processed image is obtained.
  • an image segmentation apparatus comprising:
  • a first segmentation part configured to predict pixels belonging to the target object in the to-be-processed image, and obtain a preliminary segmented image corresponding to the to-be-processed image
  • the first adjustment part is configured to adjust, according to the edge information of the target object in the to-be-processed image, in the preliminary segmented image, the pixel values of the predicted pixels that do not belong to the target object in the enclosed area included in the edge of the target object, to obtain a first segmentation result corresponding to the image to be processed.
  • the predicted pixel value of the pixel belonging to the target object is a first preset value
  • the predicted pixel value of the pixel not belonging to the target object is a second preset value
  • the first adjustment module is used for:
  • the pixel values of the filled preliminary segmented image are adjusted to obtain a first segmentation result corresponding to the to-be-processed image.
  • the first adjustment module is used for:
  • the first adjustment module is used for:
  • according to the edge information of the target object in the to-be-processed image, determine the maximum connected domain included in the edge of the target object in the filled preliminary segmented image;
  • the pixel values of the pixels outside the maximum connected region in the filled preliminary segmented image are adjusted to the second preset value to obtain the first segmentation result corresponding to the to-be-processed image.
  • the apparatus further includes:
  • a second acquisition module configured to acquire an image adjacent to the to-be-processed image and a second segmentation result corresponding to the adjacent image
  • a third adjustment module configured to adjust the first segmentation result according to the pixel value of the pixel at the same position in the image to be processed and the adjacent image, and the second segmentation result, to obtain the to-be-processed image The third segmentation result corresponding to the image.
  • the third adjustment module is used for:
  • the third adjustment module is used for:
  • the device further includes: a training module for training a neural network according to the training image and the labeling data of the training image, wherein the labeling data of the training image includes the true values of the pixels belonging to the target object in the training image;
  • the first segmentation module is used for: inputting the image to be processed into the neural network, predicting the information of the pixels belonging to the target object in the image to be processed through the neural network, and obtaining a preliminary segmented image corresponding to the to-be-processed image according to the information of the pixels belonging to the target object.
  • the training image is a computed tomography (CT) image
  • the training module is used for: normalizing the pixel values of the training image according to a preset CT value range to obtain a normalized training image, and training the neural network according to the normalized training image and the labeled data of the training image.
  • an image segmentation apparatus comprising:
  • the second segmentation part is configured to predict the pixels belonging to the target object in the to-be-processed image, and obtain a preliminary segmented image corresponding to the to-be-processed image;
  • a first acquiring part configured to acquire an image adjacent to the to-be-processed image and a second segmentation result corresponding to the adjacent image
  • the second adjustment part is configured to adjust the preliminary segmented image according to the pixel value of the pixel at the same position in the image to be processed and the adjacent image and the second segmentation result to obtain the image to be processed The corresponding fourth segmentation result.
  • the second adjustment module is used for:
  • the preliminary segmented image is adjusted to obtain a fourth segmentation result corresponding to the to-be-processed image.
  • the second adjustment module is used for:
  • the pixels of the second pixel set in the preliminary segmented image are adjusted to belong to the target object, and a fourth segmentation result corresponding to the to-be-processed image is obtained.
  • an electronic device comprising: one or more processors; a memory configured to store executable instructions; wherein the one or more processors are configured to invoke the memory storage executable instructions to perform the above image segmentation method.
  • a computer-readable storage medium having computer program instructions stored thereon, the computer program instructions implementing the above method when executed by a processor.
  • a preliminary segmented image corresponding to the to-be-processed image is obtained by predicting the pixels belonging to the target object in the to-be-processed image, and according to the edge information of the target object in the to-be-processed image, the pixel values of the predicted pixels in the preliminary segmented image that do not belong to the target object in the enclosed area included in the edge of the target object are adjusted to obtain the first segmentation result corresponding to the image to be processed, so that a more accurate and robust segmentation result can be obtained.
  • a computer program including computer-readable codes, where, when the computer-readable codes are executed in an electronic device, a processor in the electronic device executes the above-mentioned image segmentation method.
  • FIG. 1-1 is a first schematic diagram of an application scenario of an image segmentation method provided by an embodiment of the present disclosure;
  • FIG. 1-2 is a second schematic diagram of an application scenario of an image segmentation method provided by an embodiment of the present disclosure;
  • FIG. 2 is a flowchart of an image segmentation method provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a U-shaped convolutional neural network provided by an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of splicing edges of a preset width around a preliminary segmented image to obtain a preliminarily segmented image after splicing, according to an embodiment of the present disclosure
  • FIG. 5 is a flowchart of an image segmentation method provided by another embodiment of the present disclosure.
  • FIG. 6 is a block diagram of an image segmentation apparatus provided by an embodiment of the present disclosure.
  • FIG. 7 is another block diagram of an image segmentation apparatus provided by an embodiment of the present disclosure.
  • FIG. 8 is a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
  • FIG. 9 is a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
  • the bed plate of the CT instrument will become an artifact in the scanned CT image sequence.
  • This kind of artifact will cause great interference in the 3D visualization of the human body by the computer-aided software (ie, the 3D human body model obtained from the CT image sequence) and the subsequent processing.
  • bed boards of various shapes will block the human body during 3D visualization, and when the organs in the human body are segmented, some special-shaped bed boards outside the human body may be identified as false positives.
  • the bed board in the CT image is mainly removed through threshold and morphological operations, and the human body part in the CT image is retained.
  • the shape of the bed plate, the CT value of the bed plate in the CT image, and the uniformity of that CT value differ significantly from those of the human body, so the bed plate can be removed by thresholding and morphological operations.
  • for some special-shaped bed boards, however, related technologies cannot obtain accurate segmentation results. For example, a curved bed board that fits closely against the human body adjoins the body in the CT image, the boundary is not obvious, and its CT values are relatively close to those of the body, so it is difficult to separate it from the human body.
  • the CT value is a unit of measurement for the density of a local tissue or organ in the human body, also known as the Hounsfield unit (HU).
  • the embodiments of the present disclosure provide an image segmentation method and device, an electronic device, and a storage medium.
  • in the embodiments of the present disclosure, the pixel values of the predicted pixels that do not belong to the target object in the enclosed area included in the edge of the target object are adjusted to obtain the first segmentation result corresponding to the to-be-processed image, so that a more accurate and robust segmentation result can be obtained.
  • the executing subject of the image segmentation method may be an image segmentation device.
  • the image segmentation method may be performed by a terminal device or a server or other processing device.
  • the terminal device may be a user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
  • the image segmentation method may be implemented by a processor invoking computer-readable instructions stored in a memory.
  • the image segmentation device 10 may include a processing device 11 and an image acquisition device 12.
  • the processing device 11 can acquire the image to be segmented through the image acquisition device 12, and then perform segmentation processing on the image to be segmented to obtain the first segmentation result.
  • the image segmentation device may be implemented as a CT machine, and a CT image to be segmented is acquired by a CT scanner, and image segmentation processing is performed on the acquired CT image to be segmented.
  • the image segmentation apparatus 10 can receive the real-time collected images to be segmented transmitted by another device 13 through the network 14. In this way, the image segmentation apparatus 10 can perform segmentation processing on the received image to be segmented to obtain the first segmentation result.
  • the image segmentation device can be implemented as a smart phone, and the smart phone can receive the CT image to be segmented sent by the CT machine through the network, so that the smart phone can perform image segmentation processing on the received CT image to be segmented.
  • FIG. 2 shows a flowchart of the image segmentation method provided by an embodiment of the present disclosure.
  • the image segmentation method includes step S11 and step S12.
  • in step S11, pixels belonging to the target object in the image to be processed are predicted, and a preliminary segmented image corresponding to the image to be processed is obtained.
  • the image to be processed may represent an image that needs to be segmented.
  • the to-be-processed image may be a two-dimensional image or a three-dimensional image.
  • the image to be processed may be a medical image.
  • the image to be processed may be a CT image, an MRI (Magnetic Resonance Imaging, magnetic resonance imaging) image, and the like.
  • the to-be-processed image can also be any image that needs to be segmented other than medical images.
  • the target object may represent an object that needs to be segmented.
  • the target object may be a human body, an animal body, an organ of a human body, an organ of an animal body, or the like.
  • in this step, it is predicted whether each pixel in the image to be processed belongs to the target object.
  • the probability that each pixel in the image to be processed belongs to the target object can be predicted.
  • if the probability of a pixel belonging to the target object is greater than or equal to the preset threshold, it can be determined that the pixel belongs to the target object; if the probability is less than the preset threshold, it can be determined that the pixel does not belong to the target object.
  • the preset threshold may be 0.5.
  • a binarized preliminary segmented image corresponding to the to-be-processed image can be obtained.
  • the size of the preliminary segmented image may be the same as that of the image to be processed. For example, if the height of the image to be processed is H and the width is W, the height of the preliminary segmented image is also H and the width is W.
  • the predicted pixel value of the pixel belonging to the target object is a first preset value
  • the predicted pixel value of the pixel not belonging to the target object is a second preset value, and the first preset value is not equal to the second preset value.
  • that is, if it is predicted that a pixel belongs to the target object, the pixel value of that pixel in the preliminary segmented image is the first preset value; if it is predicted that the pixel does not belong to the target object, the pixel value of that pixel in the preliminary segmented image is the second preset value.
  • for example, the first preset value is 1 and the second preset value is 0, that is, the predicted pixel value of the pixel belonging to the target object in the preliminary segmented image is 1, and the predicted pixel value of the pixel not belonging to the target object is 0.
  • the embodiments of the present disclosure do not limit the values of the first preset value and the second preset value, as long as the first preset value and the second preset value are different.
  • the first preset value may be 0, and the second preset value may be 255.
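As a concrete illustration of the thresholding described above, here is a minimal NumPy sketch; the function name and the choice of NumPy are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def binarize_prediction(prob_map: np.ndarray,
                        threshold: float = 0.5,
                        first_value: int = 1,
                        second_value: int = 0) -> np.ndarray:
    """Turn a per-pixel probability map into a preliminary segmented image.

    Pixels whose predicted probability of belonging to the target object is
    >= `threshold` receive `first_value`; all others receive `second_value`.
    """
    seg = np.where(prob_map >= threshold, first_value, second_value)
    return seg.astype(np.uint8)

# Example: a 2x2 probability map with the default 0.5 threshold.
probs = np.array([[0.9, 0.2], [0.6, 0.4]])
print(binarize_prediction(probs))  # [[1 0]
                                   #  [1 0]]
```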
  • before predicting the pixels belonging to the target object in the image to be processed, the method further includes: training a neural network according to the training image and the labeled data of the training image, wherein the labeled data of the training image includes the true values of the pixels belonging to the target object in the training image; the predicting the pixels belonging to the target object in the image to be processed, and obtaining a preliminary segmented image corresponding to the image to be processed, includes: inputting the image to be processed into the neural network, predicting the information of the pixels belonging to the target object in the to-be-processed image through the neural network, and obtaining the preliminary segmented image corresponding to the to-be-processed image according to the information of the pixels belonging to the target object in the to-be-processed image.
  • the labeled data of the training image may include a mask corresponding to the training image, and the size of the mask corresponding to the training image may be the same as the training image.
  • if, in the training image, the true value of a pixel belongs to the target object, then in the mask corresponding to the training image, the pixel value of the pixel may be the first preset value, for example, 1; if, in the training image, the true value of the pixel does not belong to the target object, then in the mask corresponding to the training image, the pixel value of the pixel may be the second preset value, for example, 0.
  • the labeled data of the training image is not limited to be represented by a mask.
  • the labeled data of the training image may also be represented by a matrix, a table, or the like.
  • the training image may be input into the neural network, and the predicted segmentation result of the training image may be output via the neural network, wherein the predicted segmentation result of the training image may include The probability of each pixel belonging to the target object; according to the labeled data of the training image and the predicted segmentation result of the training image, the value of the loss function corresponding to the training image is obtained; according to the value of the loss function corresponding to the training image value to train the neural network.
  • the value of the Dice loss function can be obtained according to the predicted segmentation result of the training image obtained by the neural network and the labeling data of the training image.
  • for example, if the predicted segmentation result of the training image obtained by the neural network is P and the labeled data of the training image is M, the value of the Dice loss function may be computed as L_Dice = 1 − 2|P∩M| / (|P| + |M|). In other examples, a loss function such as a cross-entropy loss function may also be employed.
  • the value of the loss function can be back-propagated layer by layer to each parameter of the neural network through reverse derivation, and optimizers such as adaptive moment estimation (Adam) (for example, with a learning rate of 0.0003) or stochastic gradient descent (SGD) can be used to update the parameters of the neural network.
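For concreteness, a small PyTorch sketch of a Dice loss of the kind described, paired with the Adam optimizer at the 0.0003 learning rate mentioned above; the smoothing term `eps` and the exact reduction are assumptions the text does not fix.

```python
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor,
              eps: float = 1e-6) -> torch.Tensor:
    """Dice loss between predicted probabilities P and ground-truth mask M:
    L = 1 - 2*|P*M| / (|P| + |M|); `eps` guards against division by zero."""
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Sketch of one update step; `model`, `image`, and `mask` are placeholders.
# optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
# loss = dice_loss(torch.sigmoid(model(image)), mask)
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```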
  • the information of pixels in the image to be processed that are predicted by the neural network and belong to the target object may include the probability that each pixel in the image to be processed belongs to the target object.
  • the obtaining a preliminary segmented image corresponding to the to-be-processed image according to the information of the pixels belonging to the target object in the to-be-processed image may include: for any pixel, if the probability that the pixel belongs to the target object is greater than or equal to the preset threshold, the pixel value of the pixel in the preliminary segmented image corresponding to the image to be processed is the first preset value; if the probability that the pixel belongs to the target object is less than the preset threshold, the pixel value of the pixel in the preliminary segmented image corresponding to the image to be processed is the second preset value.
  • the information of the pixels belonging to the target object in the image to be processed predicted by the neural network may include position information of the pixels belonging to the target object in the image to be processed.
  • the obtaining a preliminary segmented image corresponding to the to-be-processed image according to the information of the pixels belonging to the target object in the to-be-processed image may include: for any pixel, if the position information of the pixels belonging to the target object in the image to be processed includes the position of the pixel, the pixel value of the pixel in the preliminary segmented image corresponding to the image to be processed is the first preset value; if the position information of the pixels belonging to the target object does not include the position of the pixel, the pixel value of the pixel in the preliminary segmented image corresponding to the image to be processed is the second preset value.
  • the part of the image to be processed that belongs to the target object is predicted by the neural network.
  • the image to be processed is a CT image
  • the target object is a human body
  • this implementation does not attempt to remove the various bed boards in the CT image, that is, it no longer focuses on the non-human-body part, but on the segmentation of the human body in the CT image, which ensures the accuracy and robustness of the segmentation results under a large amount of special-shaped bed board data. That is, even if the image to be processed contains a special-shaped bed board, an accurate and robust segmentation result can be obtained by adopting this implementation manner.
  • the neural network may be a deep learning-based neural network.
  • the neural network may be a U-shaped convolutional neural network.
  • FIG. 3 shows a schematic diagram of a U-shaped convolutional neural network in an embodiment of the present disclosure.
  • the data flow is from left to right, and the U-shaped convolutional neural network includes a compression process and a decompression process.
  • the image to be processed can be input into the U-shaped convolutional neural network, the human body part in the image to be processed can be fitted by the U-shaped convolutional neural network, and finally the preliminary segmented image is output.
  • the convolution-normalization-activation blocks can be replaced with residual blocks (Residual Block), Inception blocks (Inception Block), dense blocks (Dense Block), etc.
  • Pooling can be either max pooling or average pooling, or it can be replaced by a convolutional layer with a stride of 2.
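A deliberately tiny PyTorch sketch of such a U-shaped network follows; the depth, channel counts, and use of batch normalization are illustrative assumptions, not the patent's architecture.

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Convolution-normalization-activation block; per the text, this could
    be swapped for a residual, Inception, or dense block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level U-shaped network: a compression (encoder) path, a
    decompression (decoder) path, and a skip connection between them."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.pool = nn.MaxPool2d(2)          # could also be a stride-2 convolution
        self.enc2 = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)       # 32 = 16 (skip) + 16 (upsampled)
        self.head = nn.Conv2d(16, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.head(d1))  # per-pixel probability map

# A 512x512 single-channel CT slice in, a 512x512 probability map out:
# net = TinyUNet(); out = net(torch.randn(1, 1, 512, 512))
```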
  • the training image is a two-dimensional CT image
  • the neural network is a two-dimensional convolutional neural network
  • training images can be augmented.
  • the training image can be randomly scaled by a factor of 0.6 to 1.4, and then cropped from the center of the scaled image at a size of 512×512 to obtain training images of the same size at different scales.
  • training images can be divided into training and validation sets.
  • training images can be split into training and validation sets in a 4:1 ratio.
  • the neural network may be repeatedly trained using training images until the loss of the neural network on the validation set falls below 0.03.
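One possible reading of the augmentation step in code (random scaling by 0.6 to 1.4 followed by a 512×512 center crop); padding small scaled images back up to the crop size is an assumption the text does not spell out, as are the function name and use of OpenCV.

```python
import cv2
import numpy as np

def augment(image: np.ndarray, crop: int = 512) -> np.ndarray:
    """Randomly scale a 2D CT slice by a factor in [0.6, 1.4], then take a
    crop x crop patch from the center, padding first if the result is smaller."""
    factor = np.random.uniform(0.6, 1.4)
    h, w = image.shape
    scaled = cv2.resize(image, (int(w * factor), int(h * factor)))
    # Pad with the minimum value so a center crop is always possible.
    pad_h = max(0, crop - scaled.shape[0])
    pad_w = max(0, crop - scaled.shape[1])
    scaled = np.pad(scaled, ((pad_h // 2, pad_h - pad_h // 2),
                             (pad_w // 2, pad_w - pad_w // 2)),
                    constant_values=scaled.min())
    top = (scaled.shape[0] - crop) // 2
    left = (scaled.shape[1] - crop) // 2
    return scaled[top:top + crop, left:left + crop]
```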
  • when the related technology uses operations such as morphology to segment the image, it is necessary to introduce a large number of hyperparameters, such as the threshold selected during binarization, the number of opening/closing operations, and the structuring element selected during erosion/dilation; when the data changes, the threshold value needs to be changed to obtain normal segmentation results.
  • the target object in the training image is segmented through a neural network, which can be widely used in similar tasks without setting hyperparameters, so the robustness is high.
  • the training image is a computed tomography (CT) image
  • the training of the neural network according to the training image and the labeled data of the training image includes: normalizing the pixel values of the training image according to a preset CT value range to obtain a normalized training image; and training the neural network according to the normalized training image and the labeled data of the training image.
  • the preset CT value range may be determined according to the CT value range of the target object. For example, if the target object is the human body, the preset CT value range may be set to [-500, 1200] according to the CT value range of the human body organs.
  • performing normalization processing on the pixel values of the training image according to a preset CT value range to obtain a normalized training image includes: for any pixel in the training image, preprocessing the pixel value of the pixel according to the preset CT value range to obtain the preprocessed pixel value of the pixel, wherein the preprocessed pixel value of the pixel is within the preset CT value range.
  • the ratio of the first difference to the second difference is taken as the normalized pixel value of the pixel, wherein the first difference is equal to the difference between the preprocessed pixel value of the pixel and the lower boundary value of the preset CT value range, and the second difference is equal to the difference between the upper boundary value and the lower boundary value of the preset CT value range.
  • for example, if the preprocessed pixel value of the pixel is h, the lower boundary value of the preset CT value range is h_min, and the upper boundary value is h_max, the normalized pixel value of the pixel can be equal to (h − h_min) / (h_max − h_min). According to the normalized pixel value of each pixel of the training image, a normalized training image can be obtained. That is, in the normalized training image, the pixel value of any pixel is the normalized pixel value of that pixel.
  • performing preprocessing on the pixel value of the pixel according to the preset CT value range to obtain the preprocessed pixel value of the pixel may include: for any pixel in the training image, if the pixel value of the pixel is smaller than the lower boundary value of the preset CT value range, the lower boundary value can be used as the preprocessed pixel value of the pixel; if the pixel value of the pixel is greater than the upper boundary value of the preset CT value range, the upper boundary value can be used as the preprocessed pixel value of the pixel; if the pixel value of the pixel is within the preset CT value range, the pixel value of the pixel may be used as the preprocessed pixel value of the pixel.
  • the preset CT value range is [-500, 1200], the lower boundary value of the preset CT value range is -500, and the upper boundary value of the preset CT value range is 1200.
  • for example, if the pixel value of a pixel in the training image is -505, -500 can be used as the preprocessed pixel value of the pixel; if the pixel value of a pixel in the training image is 1250, 1200 can be used as the preprocessed pixel value of the pixel.
  • in this way, the pixel values of the training image are normalized according to the preset CT value range to obtain a normalized training image, and the neural network is trained according to the normalized training image and the labeled data of the training image, thereby helping to reduce the computational load of the neural network and improve its convergence speed.
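The clip-then-normalize preprocessing above can be sketched as follows, using the [-500, 1200] range from the example; the function name is illustrative.

```python
import numpy as np

def normalize_ct(image: np.ndarray,
                 h_min: float = -500.0, h_max: float = 1200.0) -> np.ndarray:
    """Clip pixel values to the preset CT value range [h_min, h_max]
    (the preprocessing step), then map them linearly to [0, 1]."""
    clipped = np.clip(image, h_min, h_max)      # e.g. -505 -> -500, 1250 -> 1200
    return (clipped - h_min) / (h_max - h_min)  # (h - h_min) / (h_max - h_min)
```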
  • in step S12, according to the edge information of the target object in the to-be-processed image, the pixel values of the predicted pixels in the preliminary segmented image that do not belong to the target object in the closed area included in the edge of the target object are adjusted to obtain the first segmentation result corresponding to the image to be processed.
  • an edge detection method may be used to determine the edge information of the target object in the image to be processed.
  • edge detection methods such as the Canny algorithm and the Sobel algorithm may be used to determine the edge information of the target object in the image to be processed.
  • the edge information of the target object in the image to be processed may include position information of pixels belonging to the edge of the target object in the image to be processed.
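A minimal sketch of extracting such edge information with the Canny algorithm via OpenCV; the thresholds (100, 200) are illustrative, as the text does not specify them, and `normalized_image` is an assumed input in [0, 1].

```python
import cv2
import numpy as np

def target_edges(normalized_image: np.ndarray) -> np.ndarray:
    """Detect edges of the target object with the Canny algorithm and
    return the (row, col) positions of the pixels belonging to the edge."""
    image_8bit = (normalized_image * 255).astype(np.uint8)  # Canny needs uint8
    edges = cv2.Canny(image_8bit, 100, 200)   # nonzero pixels mark detected edges
    return np.argwhere(edges > 0)             # positions of the edge pixels
```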
  • the first segmentation result may be used as the final segmentation result corresponding to the image to be processed.
  • a preliminary segmented image corresponding to the to-be-processed image is obtained by predicting the pixels belonging to the target object in the to-be-processed image, and according to the edge information of the target object in the to-be-processed image, in the preliminary segmented image
  • the pixel values of the predicted pixels that do not belong to the target object in the enclosed area included in the edge of the target object are adjusted to obtain the first segmentation result corresponding to the image to be processed.
  • the target object is a human body or an animal body
  • the pixels inside the organs of the target object can also be segmented as belonging to the target object, so that a more accurate and robust segmentation result can be obtained.
  • the CT image is segmented by using the image segmentation method provided by the embodiments of the present disclosure, so that the human body part in the CT image can be accurately segmented, and the interference outside the human body in the CT image (e.g., bed board, ventilator lines, head fixtures, etc.) can be accurately removed.
  • adjusting the pixel values of the predicted pixels that do not belong to the target object in the enclosed area included in the edge of the target object to obtain the first segmentation result corresponding to the image to be processed includes: adjusting the pixel value of the closed area whose pixel value is the second preset value in the preliminary segmented image to the first preset value to obtain a filled preliminary segmented image; and adjusting the pixel value of the filled preliminary segmented image according to the edge information of the target object in the to-be-processed image to obtain the first segmentation result corresponding to the to-be-processed image.
  • in this way, a filled preliminary segmented image is obtained by adjusting the pixel value of the closed area whose pixel value is the second preset value in the preliminary segmented image to the first preset value, so that the first segmentation result corresponding to the to-be-processed image can cover the inside of the organs of the target object, for example, the lung parenchyma, the inside of the digestive tract (e.g., the gastrointestinal tract), and the like.
  • the missing holes in the target object (for example, in the human body) after image segmentation can be filled.
  • in this way, the first segmentation result corresponding to the to-be-processed image can be obtained, thereby reducing the probability that the background part in the image to be processed (i.e., the part that does not belong to the target object) is segmented as belonging to the target object.
  • adjusting the pixel value of the enclosed area whose pixel value is the second preset value in the preliminary segmented image to the first preset value to obtain the filled preliminary segmented image includes: splicing edges of a preset width around the preliminary segmented image to obtain a spliced preliminary segmented image, wherein the pixel values of the pixels of the spliced edges of the preset width are the second preset value; and selecting a pixel of the image edge of the spliced preliminary segmented image as a seed point, and performing a flood filling operation on the spliced preliminary segmented image to obtain the filled preliminary segmented image.
  • the preset width may be greater than or equal to 1 pixel.
  • the preset width may be 1 pixel.
  • FIG. 4 shows a schematic diagram of splicing edges of a preset width around a preliminary segmented image to obtain a preliminarily segmented image after splicing.
  • the preset width is 1 pixel.
  • edges with preset widths may be spliced around the preliminary segmented image.
  • a side with a preset width can also be spliced on one side, two sides or three sides of the preliminary segmented image.
  • the pixels of the image edge of the spliced preliminary segmented image may refer to the pixels on the border of the spliced preliminary segmented image, for example, the uppermost pixels, bottommost pixels, leftmost pixels, and rightmost pixels of the spliced preliminary segmented image.
  • the pixel in the upper left corner of the stitched preliminary segmented image may be used as the seed point.
  • in this way, it can be guaranteed that the seed point of the flood filling operation belongs to the background part (i.e., the part that does not belong to the target object), so that the first segmentation result corresponding to the image to be processed can cover the inside of the organs of the target object, thereby obtaining a more accurate segmentation result.
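A minimal sketch of the splice-and-flood-fill hole filling described above, using OpenCV's floodFill with a 1-pixel border and a corner seed as in the example; the function name and defaults are illustrative.

```python
import cv2
import numpy as np

def fill_holes(seg: np.ndarray, first: int = 1, second: int = 0) -> np.ndarray:
    """Fill enclosed background regions inside the target object.

    Splice a 1-pixel border of the second preset value around the preliminary
    segmented image, flood-fill from a corner seed (guaranteed background),
    and mark every background pixel the flood did not reach as target."""
    padded = np.pad(seg, 1, constant_values=second).astype(np.uint8)
    mask = np.zeros((padded.shape[0] + 2, padded.shape[1] + 2), np.uint8)
    # Flood fill from the top-left corner: reachable background becomes `first`.
    cv2.floodFill(padded, mask, seedPoint=(0, 0), newVal=first)
    # Pixels still equal to `second` are holes enclosed by the target edge.
    filled = seg.copy()
    filled[padded[1:-1, 1:-1] == second] = first
    return filled
```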
  • adjusting the pixel value of the filled preliminary segmented image according to the edge information of the target object in the to-be-processed image to obtain the first segmentation result corresponding to the to-be-processed image includes: according to the edge information of the target object in the to-be-processed image, determining the maximum connected domain included in the edge of the target object in the filled preliminary segmented image; and adjusting the pixel values of the pixels outside the maximum connected domain in the filled preliminary segmented image to the second preset value to obtain the first segmentation result corresponding to the to-be-processed image.
  • false positive regions that are not connected to the target object can be eliminated, thereby greatly reducing the probability of erroneously classifying the background part as belonging to the target object, thereby improving the accuracy of image segmentation.
  • the target object is a human body
  • false positive regions that are not connected to the human body can be eliminated, thereby greatly reducing the probability that the background part (eg, bed board, etc.) is erroneously classified as belonging to the human body.
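A sketch of retaining the maximum connected domain with SciPy; for simplicity it selects the largest component by pixel count, which is one plausible reading of the step described above rather than the patent's exact procedure.

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(filled: np.ndarray, second: int = 0) -> np.ndarray:
    """Keep only the maximum connected domain of the filled preliminary
    segmented image; all other pixels are set to the second preset value,
    removing false-positive regions (e.g., bed board) not connected to the body."""
    foreground = filled != second
    labels, num = ndimage.label(foreground)    # label connected components
    if num == 0:
        return filled                          # no foreground at all
    sizes = ndimage.sum(foreground, labels, index=range(1, num + 1))
    largest = 1 + int(np.argmax(sizes))        # label of the biggest component
    result = filled.copy()
    result[labels != largest] = second
    return result
```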
  • the method further includes: acquiring an image adjacent to the image to be processed and a second segmentation result corresponding to the adjacent image; and adjusting the first segmentation result according to the pixel values of the pixels at the same position in the image to be processed and the adjacent image, and the second segmentation result, to obtain the third segmentation result corresponding to the image to be processed.
  • the image adjacent to the image to be processed may be an image belonging to the same image sequence as the image to be processed and adjacent to the image to be processed.
  • the image to be processed is a CT image
  • the adjacent images may be images belonging to the same CT image sequence as the image to be processed and adjacent to the image to be processed.
  • the second segmentation result may refer to the final segmentation result corresponding to the adjacent images.
  • the continuity of the image to be processed and the second segmentation result can be ensured, thereby helping to obtain a smoother and more accurate three-dimensional segmentation result.
  • the target object is a human body
  • the continuity of the image to be processed and the human body in the adjacent images can be ensured, thereby helping to obtain a smoother and more accurate three-dimensional human body segmentation result.
  • the segmentation result corresponding to each CT image in the CT image sequence can be obtained by using this implementation manner, thereby obtaining a smoother and more accurate three-dimensional human body segmentation result.
  • adjusting the first segmentation result according to the pixel values of the pixels at the same position in the to-be-processed image and the adjacent image, and the second segmentation result, to obtain the third segmentation result corresponding to the image to be processed includes: adjusting the first segmentation result according to the pixels in the adjacent image that belong to the target object in the second segmentation result and whose pixel value difference from the pixel at the same position in the image to be processed is less than or equal to a third preset value, to obtain the third segmentation result corresponding to the image to be processed.
  • the difference between the pixel values of the adjacent image and the image to be processed at the same position may refer to the difference between the normalized pixel values of the adjacent image and the image to be processed at the same position.
  • the third preset value may be 0.1.
  • the segmentation result corresponding to any pixel in the adjacent images may refer to whether the pixel belongs to the target object in the second segmentation result.
  • adjusting the first segmentation result according to the pixels that belong to the target object in the second segmentation result and whose pixel value difference from the pixel at the same position in the image to be processed is less than or equal to the third preset value, to obtain the third segmentation result corresponding to the image to be processed, includes: obtaining a first pixel set according to the pixels whose pixel value difference at the same position in the image to be processed and the adjacent image is less than or equal to the third preset value; obtaining a second pixel set according to the pixels of the first pixel set that belong to the target object in the second segmentation result; and adjusting the pixels of the second pixel set in the first segmentation result to belong to the target object to obtain the third segmentation result corresponding to the image to be processed.
  • the difference between the pixel values of any pixel in the first pixel set in the to-be-processed image and the adjacent image is less than or equal to a third preset value.
  • in this way, the first pixel set is obtained according to the pixels whose pixel value difference at the same position in the image to be processed and the adjacent image is less than or equal to the third preset value, the second pixel set is obtained according to the pixels of the first pixel set that belong to the target object in the second segmentation result, and the pixels of the second pixel set in the first segmentation result are adjusted to belong to the target object to obtain the third segmentation result corresponding to the image to be processed. The first segmentation result can thus be adjusted according to the pixels in the second segmentation result that belong to the target object and are strongly correlated with the image to be processed, thereby helping to improve the accuracy of the final segmentation result corresponding to the image to be processed.
  • the third segmentation result may be used as the final segmentation result corresponding to the image to be processed.
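To make the adjacent-slice adjustment concrete, here is a minimal NumPy sketch. It assumes both slices are already normalized as described earlier and uses the default third preset value of 0.1; all function and variable names are illustrative, not the patent's.

```python
import numpy as np

def propagate_from_neighbor(first_seg: np.ndarray,
                            image: np.ndarray,
                            neighbor: np.ndarray,
                            neighbor_seg: np.ndarray,
                            third_preset: float = 0.1,
                            first: int = 1) -> np.ndarray:
    """Adjust the first segmentation result using an adjacent slice.

    First pixel set: positions where the (normalized) pixel values of the
    image to be processed and the adjacent image differ by <= third_preset.
    Second pixel set: those positions that belong to the target object in the
    adjacent slice's (second) segmentation result. Those pixels are then set
    to belong to the target object, giving the third segmentation result."""
    first_set = np.abs(image - neighbor) <= third_preset
    second_set = first_set & (neighbor_seg == first)
    third_seg = first_seg.copy()
    third_seg[second_set] = first
    return third_seg
```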
  • FIG. 5 shows another flowchart of the image segmentation method provided by the embodiment of the present disclosure.
  • the executing subject of the image segmentation method may be an image segmentation device.
  • the image segmentation method may be performed by a terminal device or a server or other processing device.
  • the terminal device may be a user equipment, a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant, a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like.
  • the image segmentation method may be implemented by a processor invoking computer-readable instructions stored in a memory. As shown in FIG. 5, the image segmentation method includes steps S41 to S43.
  • in step S41, pixels belonging to the target object in the image to be processed are predicted, and a preliminary segmented image corresponding to the image to be processed is obtained.
  • in step S42, an image adjacent to the to-be-processed image and a second segmentation result corresponding to the adjacent image are acquired.
  • the image adjacent to the to-be-processed image may be an image that belongs to the same image sequence as the to-be-processed image and is adjacent to the to-be-processed image.
  • the image to be processed is a CT image
  • the adjacent images may be images belonging to the same CT image sequence as the image to be processed and adjacent to the image to be processed.
  • the second segmentation result may refer to the final segmentation result corresponding to the adjacent images.
  • in step S43, according to the pixel values of the pixels at the same position in the image to be processed and the adjacent image, and the second segmentation result, the preliminary segmented image is adjusted to obtain the fourth segmentation result corresponding to the image to be processed.
  • the continuity of the image to be processed and the second segmentation result can be ensured, thereby helping to obtain a smoother and more accurate three-dimensional segmentation result.
  • the target object is a human body
  • the continuity of the image to be processed and the human body in the adjacent images can be ensured, thereby helping to obtain a smoother and more accurate three-dimensional human body segmentation result.
  • a segmentation result corresponding to each CT image in the CT image sequence can be obtained by using the embodiments of the present disclosure, thereby obtaining a smoother and more accurate three-dimensional human body segmentation result.
  • adjusting the preliminary segmented image to obtain the fourth segmentation result corresponding to the image to be processed includes: adjusting the preliminary segmented image according to the pixels in the adjacent image that belong to the target object in the second segmentation result and whose pixel value difference from the pixel at the same position in the image to be processed is less than or equal to the third preset value, to obtain the fourth segmentation result corresponding to the image to be processed.
  • the difference between the pixel values of the adjacent image and the image to be processed at the same position may refer to the difference between the normalized pixel values of the adjacent image and the image to be processed at the same position.
  • the third preset value may be 0.1.
  • the original pixel values of the adjacent images and the image to be processed at the same position can also be compared.
  • according to the pixels that belong to the target object in the second segmentation result and whose pixel value difference from the pixel at the same position in the image to be processed is less than or equal to the third preset value, the preliminary segmented image is adjusted to obtain the fourth segmentation result corresponding to the image to be processed. In this way, the preliminary segmented image corresponding to the to-be-processed image is adjusted according to strongly correlated pixels, thereby helping to improve the accuracy of the final segmentation result corresponding to the to-be-processed image.
  • the segmentation result corresponding to any pixel in the adjacent images may refer to whether the pixel belongs to the target object in the second segmentation result.
  • adjusting the preliminary segmented image according to the pixels that belong to the target object in the second segmentation result and whose pixel value difference from the pixel at the same position in the image to be processed is less than or equal to a third preset value, to obtain the fourth segmentation result corresponding to the to-be-processed image, includes: obtaining a first pixel set according to the pixels whose pixel value difference at the same position in the to-be-processed image and the adjacent image is less than or equal to the third preset value; obtaining a second pixel set according to the pixels of the first pixel set that belong to the target object in the second segmentation result; and adjusting the pixels of the second pixel set in the preliminary segmented image to belong to the target object to obtain the fourth segmentation result corresponding to the to-be-processed image.
  • the difference between the pixel values of any pixel in the first pixel set in the to-be-processed image and the adjacent image is less than or equal to a third preset value.
  • a first pixel set is obtained according to a pixel whose difference between the pixel value of the image to be processed and the pixel value of the adjacent image at the same position is less than or equal to a third preset value.
  • A second pixel set is obtained from the pixels of the first pixel set that belong to the target object in the second segmentation result, and the pixels of the second pixel set in the first segmentation result are adjusted to belong to the target object, so as to obtain the third segmentation result corresponding to the image to be processed. In this way, the first segmentation result is adjusted according to the pixels that belong to the target object in the second segmentation result and are strongly correlated with the image to be processed, which helps to improve the accuracy of the final segmentation result corresponding to the image to be processed.
  • the fourth segmentation result may be used as the final segmentation result corresponding to the image to be processed.
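  • As a rough illustration of the first/second pixel set adjustment described above, the following is a minimal NumPy sketch. It assumes normalized single-channel slices, a target label of 1 (the first preset value), a background label of 0 (the second preset value), and the 0.1 threshold mentioned earlier; the function and variable names are illustrative, not from the disclosure.

```python
import numpy as np

def adjust_with_adjacent(preliminary, image, adjacent_image, adjacent_seg,
                         threshold=0.1):
    """Enforce inter-slice continuity on a preliminary segmentation.

    Pixels whose intensity differs from the same-position pixel of the
    adjacent slice by at most `threshold` (first pixel set), and which
    belong to the target object in the adjacent slice's segmentation
    result (second pixel set), are adjusted to belong to the target.
    """
    first_set = np.abs(image - adjacent_image) <= threshold
    second_set = first_set & (adjacent_seg == 1)
    adjusted = preliminary.copy()
    adjusted[second_set] = 1  # first preset value: target object
    return adjusted
```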
  • The method further includes: according to the edge information of the target object in the image to be processed, adjusting, in the fourth segmentation result, the pixel values of the pixels predicted as not belonging to the target object within the enclosed area bounded by the edge of the target object, to obtain a fifth segmentation result corresponding to the image to be processed.
  • Adjusting the pixel values of the pixels predicted as not belonging to the target object to obtain the fifth segmentation result corresponding to the image to be processed includes: adjusting the pixel values of the enclosed areas whose pixel value in the fourth segmentation result is the second preset value to the first preset value, to obtain a filled preliminary segmented image corresponding to the fourth segmentation result; and adjusting the pixel values of the filled preliminary segmented image according to the edge information of the target object in the image to be processed, to obtain the fifth segmentation result corresponding to the image to be processed.
  • Obtaining the filled preliminary segmented image includes: splicing edges of a preset width around the fourth segmentation result to obtain a spliced fourth segmentation result, where the pixel values of the spliced edge pixels are the second preset value; selecting a pixel on the image edge of the spliced fourth segmentation result as a seed point; and performing a flood-fill operation on the spliced fourth segmentation result to obtain the filled preliminary segmented image corresponding to the fourth segmentation result.
  • Adjusting the pixel values of the pixels predicted as not belonging to the target object to obtain the fifth segmentation result corresponding to the image to be processed includes: determining, according to the edge information of the target object in the image to be processed, the maximum connected domain bounded by the edge of the target object in the filled preliminary segmented image; and adjusting the pixel values of the pixels outside the maximum connected domain in the filled preliminary segmented image to the second preset value, to obtain the fifth segmentation result corresponding to the image to be processed.
  • the fifth segmentation result may be used as the final segmentation result corresponding to the image to be processed.
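  • The hole-filling step (splice a background border, flood-fill from an edge seed) and the maximum-connected-domain step both have common off-the-shelf equivalents. The sketch below is one possible realization using SciPy, whose `binary_fill_holes` performs the same border-seeded flood fill internally; this substitution is an assumption for illustration, not the disclosure's own implementation.

```python
import numpy as np
from scipy import ndimage

def fill_and_keep_largest(preliminary):
    """Fill enclosed background areas, then keep only the maximum
    connected domain of the target object."""
    mask = preliminary == 1  # first preset value: target object
    # binary_fill_holes floods the background from the image border, so
    # any background region not reachable from the border (an enclosed
    # area inside the object edge) is filled as foreground.
    filled = ndimage.binary_fill_holes(mask)
    # Label connected components and keep the largest; all other pixels
    # are set to the second preset value (0 here).
    labeled, num = ndimage.label(filled)
    if num == 0:
        return np.zeros_like(preliminary)
    sizes = np.bincount(labeled.ravel())
    sizes[0] = 0  # label 0 is background
    return (labeled == sizes.argmax()).astype(preliminary.dtype)
```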
  • the training image is a CT image of the human body.
  • the preset CT value range can be set to [-500, 1200] according to the CT values of all the tissues and organs of the human body. In this way, all the tissues and organs of the human body are covered.
  • any pixel in the training image is preprocessed to obtain the preprocessed pixel value.
  • If the pixel value of a pixel is below the lower boundary of the preset CT value range, the lower boundary value may be used as its preprocessed pixel value; if the pixel value is above the upper boundary, the upper boundary value may be used as its preprocessed pixel value; if the pixel value is within the preset CT value range, the pixel value itself may be used as the preprocessed pixel value.
  • For example, if the pixel value of a pixel in the training image is -505, -500 can be used as its preprocessed pixel value; if the pixel value of a pixel is 1250, 1200 can be used as its preprocessed pixel value; and if the pixel value of a pixel is 800, 800 can be used as its preprocessed pixel value.
  • Formula (1) can be used to normalize the pixel value of any pixel of the training image:

    x = (h − h_min) / (h_max − h_min)   (1)

  where x is the normalized pixel value, h is the preprocessed pixel value of the pixel, h_min is the lower boundary value of the preset CT value range, and h_max is the upper boundary value of the preset CT value range.
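  • In code, the clipping rule and formula (1) reduce to a clip followed by min-max scaling. A minimal sketch, assuming the [-500, 1200] range given above (the function name is illustrative):

```python
import numpy as np

HU_MIN, HU_MAX = -500.0, 1200.0  # preset CT value range

def preprocess_ct(image):
    """Clip CT values to the preset range, then apply formula (1)."""
    clipped = np.clip(image, HU_MIN, HU_MAX)  # -505 -> -500, 1250 -> 1200
    return (clipped - HU_MIN) / (HU_MAX - HU_MIN)
```

  • With these boundaries, for example, a pixel value of 800 maps to (800 + 500) / 1700 ≈ 0.765.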
  • the normalized training images can be augmented.
  • The normalized training image can be randomly scaled by a factor of 0.6 to 1.4, and a 512 × 512 region can then be cropped from the center of the scaled image, to obtain training images of the same size at different scales.
  • the normalized and augmented training images can be divided into a training set and a validation set.
  • the processed training images can be divided into training and validation sets in a 4:1 ratio.
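  • A sketch of the scaling, center-cropping, and 4:1 split described above. OpenCV is assumed for resizing, and zero-padding is assumed when the scaled image is smaller than the 512 × 512 output (the text does not specify that case); all names are illustrative.

```python
import numpy as np
import cv2  # assumed available for resizing

def scale_and_center_crop(image, rng, out=512):
    """Randomly scale by a factor in [0.6, 1.4], then take a centered
    out x out window, zero-padding where the scaled image is smaller."""
    f = rng.uniform(0.6, 1.4)
    h, w = image.shape
    scaled = cv2.resize(image, (max(1, int(w * f)), max(1, int(h * f))))
    sh, sw = scaled.shape
    canvas = np.zeros((out, out), dtype=scaled.dtype)
    # Centered overlap between the scaled image and the output canvas.
    top, left = max(0, (sh - out) // 2), max(0, (sw - out) // 2)
    ch, cw = min(sh, out), min(sw, out)
    oy, ox = (out - ch) // 2, (out - cw) // 2
    canvas[oy:oy + ch, ox:ox + cw] = scaled[top:top + ch, left:left + cw]
    return canvas

def split_4_to_1(samples, rng):
    """Shuffle and split the processed training images 4:1 into
    training and validation sets."""
    idx = rng.permutation(len(samples))
    cut = len(samples) * 4 // 5
    return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]
```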
  • The U-shaped convolutional neural network can be trained repeatedly on the training set until its loss on the validation set drops below 0.03, yielding a trained U-shaped convolutional neural network.
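  • A training-loop sketch with the 0.03 stopping criterion. The disclosure does not name a framework, optimizer, or loss function; PyTorch, Adam, and binary cross-entropy below are assumptions for illustration only.

```python
import torch

def train_until_converged(model, train_loader, val_loader, target=0.03):
    """Repeat training epochs until the mean validation loss drops
    below the target, then return the trained network."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed optimizer
    loss_fn = torch.nn.BCEWithLogitsLoss()               # assumed loss
    val_loss = float("inf")
    while val_loss >= target:
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            losses = [loss_fn(model(x), y).item() for x, y in val_loader]
        val_loss = sum(losses) / len(losses)
    return model
```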
  • The CT image to be processed is then acquired and input into the trained U-shaped convolutional neural network, which predicts the information of the pixels belonging to the target object in the CT image to be processed; according to this information, a preliminary segmented image corresponding to the CT image to be processed is obtained.
  • Edges with a width of 1 pixel can be spliced around the preliminary segmented image to obtain a spliced preliminary segmented image; the pixel in the upper-left corner of the spliced image is selected as a seed point, and a flood-fill operation is performed on the spliced image to obtain a filled preliminary segmented image.
  • The maximum connected domain bounded by the edge of the target object in the filled preliminary segmented image can then be determined, and the pixel values of the pixels outside the maximum connected domain are adjusted to the second preset value, to obtain the first segmentation result corresponding to the CT image to be processed.
  • an image adjacent to the CT image to be processed and a second segmentation result corresponding to the adjacent image may be obtained.
  • The first pixel set can be obtained from the pixels whose pixel-value difference between the CT image to be processed and the adjacent image at the same position is less than or equal to the third preset value; the second pixel set is obtained from the pixels of the first pixel set that belong to the target object in the second segmentation result; and the pixels of the second pixel set in the first segmentation result are adjusted to belong to the target object, to obtain the third segmentation result corresponding to the CT image to be processed, as shown in the composite sketch below.
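  • Putting the pieces together, a hedged end-to-end sketch of the inference flow just described, reusing the helper functions from the earlier sketches (`preprocess_ct`, `fill_and_keep_largest`, `adjust_with_adjacent`); thresholding the network output at 0.5 is an assumption, as the disclosure does not specify how predicted pixel information is binarized.

```python
import numpy as np
import torch

def segment_ct_slice(model, ct_slice, adjacent_slice=None, adjacent_seg=None):
    """Preprocess a CT slice, predict the target-object pixels, fill
    enclosed areas and keep the maximum connected domain, then enforce
    continuity with an adjacent slice's segmentation result if given."""
    x = preprocess_ct(ct_slice)
    with torch.no_grad():
        logits = model(torch.from_numpy(x).float()[None, None])
        prob = torch.sigmoid(logits)[0, 0].numpy()
    preliminary = (prob > 0.5).astype(np.uint8)  # assumed binarization
    result = fill_and_keep_largest(preliminary)
    if adjacent_slice is not None and adjacent_seg is not None:
        result = adjust_with_adjacent(result, x,
                                      preprocess_ct(adjacent_slice),
                                      adjacent_seg)
    return result
```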
  • the present disclosure also provides image segmentation devices, electronic equipment, computer-readable storage media, and programs, all of which can be used to implement any image segmentation method provided by the present disclosure.
  • FIG. 6 shows a block diagram of an image segmentation apparatus provided by an embodiment of the present disclosure.
  • The image segmentation device includes: a first segmentation part 51, configured to predict the pixels belonging to the target object in the image to be processed and obtain a preliminary segmented image corresponding to the image to be processed; and a first adjustment part 52, configured to adjust, according to the edge information of the target object in the image to be processed, the pixel values of the pixels predicted as not belonging to the target object within the enclosed area bounded by the edge of the target object in the preliminary segmented image, to obtain a first segmentation result corresponding to the image to be processed.
  • In the preliminary segmented image, the pixel value of a pixel predicted to belong to the target object is a first preset value, and the pixel value of a pixel predicted not to belong to the target object is a second preset value. The first adjustment part 52 is configured to adjust the pixel values of the enclosed areas whose pixel value in the preliminary segmented image is the second preset value to the first preset value, to obtain a filled preliminary segmented image; and to adjust the pixel values of the filled preliminary segmented image according to the edge information of the target object in the image to be processed, to obtain the first segmentation result corresponding to the image to be processed.
  • The first adjustment part 52 is configured to splice edges of a preset width around the preliminary segmented image to obtain a spliced preliminary segmented image, where the pixel values of the spliced edge pixels are the second preset value; to select a pixel on the image edge of the spliced preliminary segmented image as a seed point; and to perform a flood-fill operation on the spliced preliminary segmented image to obtain the filled preliminary segmented image.
  • The first adjustment part 52 is configured to determine, according to the edge information of the target object in the image to be processed, the maximum connected domain bounded by the edge of the target object in the filled preliminary segmented image, and to adjust the pixel values of the pixels outside the maximum connected domain in the filled preliminary segmented image to the second preset value, to obtain the first segmentation result corresponding to the image to be processed.
  • The apparatus further includes: a second acquisition part, configured to acquire an image adjacent to the image to be processed and a second segmentation result corresponding to the adjacent image; and a third adjustment part, configured to adjust the first segmentation result according to the pixel values of the pixels at the same positions in the image to be processed and the adjacent image, together with the second segmentation result, to obtain a third segmentation result corresponding to the image to be processed.
  • The third adjustment part is configured to adjust the first segmentation result according to the pixels of the adjacent image that belong to the target object in the second segmentation result and whose pixel-value difference from the same-position pixels of the image to be processed is less than or equal to the third preset value, to obtain the third segmentation result corresponding to the image to be processed.
  • The third adjustment part is configured to obtain a first pixel set from the pixels whose pixel-value difference between the image to be processed and the adjacent image at the same position is less than or equal to the third preset value; to obtain a second pixel set from the pixels of the first pixel set that belong to the target object in the second segmentation result; and to adjust the pixels of the second pixel set in the first segmentation result to belong to the target object, to obtain the third segmentation result corresponding to the image to be processed.
  • The apparatus further includes: a training part, configured to train a neural network according to a training image and the labeling data of the training image, where the labeling data includes the ground truth of the pixels belonging to the target object. The first segmentation part 51 is configured to input the image to be processed into the neural network, predict the information of the pixels belonging to the target object in the image to be processed through the neural network, and obtain a preliminary segmented image corresponding to the image to be processed according to that information.
  • The training image is a computed tomography (CT) image; the training part is configured to normalize the pixel values of the training image according to a preset CT value range to obtain a normalized training image, and to train the neural network according to the normalized training image and the labeling data of the training image.
  • In the embodiments of the present disclosure, a preliminary segmented image corresponding to the image to be processed is obtained by predicting the pixels belonging to the target object in the image to be processed, and, according to the edge information of the target object, the pixel values of the pixels predicted as not belonging to the target object within the enclosed area bounded by the edge of the target object are adjusted in the preliminary segmented image, to obtain the first segmentation result corresponding to the image to be processed, so that a more accurate and robust segmentation result can be obtained.
  • FIG. 7 shows another block diagram of an image segmentation apparatus provided by an embodiment of the present disclosure.
  • The image segmentation device includes: a second segmentation part 61, configured to predict the pixels belonging to the target object in the image to be processed and obtain a preliminary segmented image corresponding to the image to be processed; a first acquisition part 62, configured to acquire an image adjacent to the image to be processed and a second segmentation result corresponding to the adjacent image; and a second adjustment part 63, configured to adjust the preliminary segmented image according to the pixel values of the pixels at the same positions in the image to be processed and the adjacent image, together with the second segmentation result, to obtain a fourth segmentation result corresponding to the image to be processed.
  • The second adjustment part 63 is configured to adjust the preliminary segmented image according to the pixels of the adjacent image that belong to the target object in the second segmentation result and whose pixel-value difference from the same-position pixels of the image to be processed is less than or equal to the third preset value, to obtain the fourth segmentation result corresponding to the image to be processed.
  • The second adjustment part 63 is configured to obtain a first pixel set from the pixels whose pixel-value difference between the image to be processed and the adjacent image at the same position is less than or equal to the third preset value; to obtain a second pixel set from the pixels of the first pixel set that belong to the target object in the second segmentation result; and to adjust the pixels of the second pixel set in the preliminary segmented image to belong to the target object, to obtain the fourth segmentation result corresponding to the image to be processed.
  • The apparatus further includes: a fourth adjustment part, configured to adjust, according to the edge information of the target object in the image to be processed, the pixel values of the pixels predicted as not belonging to the target object within the enclosed area bounded by the edge of the target object in the fourth segmentation result, to obtain a fifth segmentation result corresponding to the image to be processed.
  • The fourth adjustment part is configured to adjust the pixel values of the enclosed areas whose pixel value in the fourth segmentation result is the second preset value to the first preset value, to obtain a filled preliminary segmented image corresponding to the fourth segmentation result; and to adjust the pixel values of the filled preliminary segmented image according to the edge information of the target object in the image to be processed, to obtain the fifth segmentation result corresponding to the image to be processed.
  • The fourth adjustment part is configured to splice edges of a preset width around the fourth segmentation result to obtain a spliced fourth segmentation result, where the pixel values of the spliced edge pixels are the second preset value; to select a pixel on the image edge of the spliced fourth segmentation result as a seed point; and to perform a flood-fill operation on the spliced fourth segmentation result to obtain the filled preliminary segmented image corresponding to the fourth segmentation result.
  • The fourth adjustment part is configured to determine, according to the edge information of the target object in the image to be processed, the maximum connected domain bounded by the edge of the target object in the filled preliminary segmented image, and to adjust the pixel values of the pixels outside the maximum connected domain to the second preset value, to obtain the fifth segmentation result corresponding to the image to be processed.
  • In this way, continuity between the segmentation result of the image to be processed and the second segmentation result can be ensured, which helps to obtain a smoother and more accurate three-dimensional segmentation result.
  • the target object is a human body
  • In this case, continuity of the human body between the image to be processed and the adjacent images can be ensured, which helps to obtain a smoother and more accurate three-dimensional human body segmentation result.
  • a segmentation result corresponding to each CT image in the CT image sequence can be obtained by using the embodiments of the present disclosure, thereby obtaining a smoother and more accurate three-dimensional human body segmentation result.
  • The functions or parts included in the apparatus may be configured to execute the methods described in the method embodiments above; for their specific implementation and technical effects, reference may be made to the description of those embodiments, which, for brevity, is not repeated here.
  • a "part" may be a part of a circuit, a part of a processor, a part of a program or software, etc., of course, a unit, a module or a non-modularity.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the foregoing method is implemented.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium, or may be a volatile computer-readable storage medium.
  • An embodiment of the present disclosure further provides a computer program including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing the image segmentation method described above.
  • Embodiments of the present disclosure further provide another computer program product for storing computer-readable instructions, which, when executed, cause the computer to perform the operations of the image segmentation method provided by any of the foregoing embodiments.
  • Embodiments of the present disclosure further provide an electronic device including one or more processors and a memory for storing executable instructions, where the one or more processors are configured to invoke the executable instructions stored in the memory to execute the method described above.
  • the electronic device may be provided as a terminal, server or other form of device.
  • FIG. 8 shows a block diagram of an electronic device 800 provided by an embodiment of the present disclosure.
  • The electronic device 800 may be a terminal such as a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, or personal digital assistant.
  • The electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
  • the processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • The processing component 802 can include one or more processors 820 to execute instructions to perform all or some of the steps of the methods described above. Additionally, the processing component 802 may include one or more parts that facilitate interaction between the processing component 802 and other components; for example, it may include a multimedia part to facilitate interaction between the multimedia component 808 and the processing component 802.
  • The memory 804 is configured to store various types of data to support operation at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power supply assembly 806 provides power to various components of electronic device 800 .
  • Power supply components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to electronic device 800 .
  • Multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of the touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front and rear cameras can be a fixed optical lens system or have focal length and optical zoom capability.
  • Audio component 810 is configured to output and/or input audio signals.
  • audio component 810 includes a microphone (MIC) that is configured to receive external audio signals when electronic device 800 is in operating modes, such as calling mode, recording mode, and voice recognition mode.
  • the received audio signal may be further stored in memory 804 or transmitted via communication component 816 .
  • audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. Buttons may include, but are not limited to: a home button, volume buttons, start button, and lock button.
  • Sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of electronic device 800 .
  • The sensor assembly 814 can detect the on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; it can also detect a change in the position of the electronic device 800 or one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and changes in the temperature of the electronic device 800.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 814 may also include a light sensor, such as a complementary metal oxide semiconductor (CMOS) or charge coupled device (CCD) image sensor, for use in imaging applications.
  • the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 816 is configured to facilitate wired or wireless communication between electronic device 800 and other devices.
  • The electronic device 800 can access a wireless network based on a communication standard, such as Wi-Fi, second-generation (2G), third-generation (3G), fourth-generation (4G)/Long Term Evolution (LTE), or fifth-generation (5G) mobile communication technology, or a combination thereof.
  • the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • The electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for performing the method described above.
  • In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as the memory 804 including computer program instructions executable by the processor 820 of the electronic device 800 to perform the method described above.
  • FIG. 9 shows a block diagram of an electronic device 1900 provided by an embodiment of the present disclosure.
  • the electronic device 1900 may be provided as a server.
  • electronic device 1900 includes processing component 1922, which further includes one or more processors, and a memory resource represented by memory 1932 for storing instructions executable by processing component 1922, such as applications.
  • An application program stored in memory 1932 may include one or more portions each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above-described methods.
  • the electronic device 1900 may also include a power supply assembly 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input output (I/O) interface 1958 .
  • The electronic device 1900 can operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system (Mac OS X™) from Apple, the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
  • In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 1932 including computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the method described above.
  • the present disclosure may be a system, method and/or computer program product.
  • the computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement various aspects of the present disclosure.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer-readable storage media includes: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove with instructions stored thereon, and any suitable combination of the above.
  • Computer-readable storage media, as used herein, are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through wires.
  • the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
  • Computer program instructions for carrying out operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • In the latter scenario, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
  • In some embodiments, custom electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), can be personalized by utilizing the state information of the computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions to implement various aspects of the present disclosure.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus, and/or other equipment to operate in a specific manner, so that the computer-readable medium storing the instructions comprises an article of manufacture that includes instructions implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or other equipment, so that a series of operational steps are performed on the computer, the other programmable data processing apparatus, or the other equipment to produce a computer-implemented process, such that the instructions executing on the computer, the other programmable data processing apparatus, or the other equipment implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Each block in the flowcharts or block diagrams may represent a part, a program segment, or a portion of instructions that comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
  • the computer program product can be specifically implemented by hardware, software or a combination thereof.
  • In an optional implementation, the computer program product is embodied as a computer storage medium; in another optional implementation, it is embodied as a software product, such as a software development kit (SDK).
  • In the embodiments of the present disclosure, a preliminary segmented image corresponding to the image to be processed is obtained by predicting the pixels belonging to the target object in the image to be processed, and, according to the edge information of the target object in the image to be processed, the pixel values of the pixels predicted as not belonging to the target object within the enclosed area bounded by the edge of the target object in the preliminary segmented image are adjusted, to obtain a first segmentation result corresponding to the image to be processed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

An image segmentation method and apparatus, an electronic device, and a storage medium are provided. The method comprises: performing prediction on pixels belonging to a target object in an image to be processed, to obtain a preliminary segmented image corresponding to the image to be processed (S11); and, according to edge information of the target object in the image to be processed, adjusting, in the preliminary segmented image, the values of pixels that do not belong to the target object within an enclosed area included in an edge of the target object, to obtain a first segmentation result corresponding to the image to be processed (S12).
PCT/CN2020/138131 2020-08-17 2020-12-21 Procédé et appareil de segmentation d'image, dispositif électronique et support de stockage WO2022036972A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020227001101A KR20220012407A (ko) 2020-08-17 2020-12-21 이미지 분할 방법 및 장치, 전자 기기 및 저장 매체
JP2021576593A JP2022548453A (ja) 2020-08-17 2020-12-21 画像分割方法及び装置、電子デバイス並びに記憶媒体

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010827077.1A CN111899268B (zh) 2020-08-17 2020-08-17 图像分割方法及装置、电子设备和存储介质
CN202010827077.1 2020-08-17

Publications (1)

Publication Number Publication Date
WO2022036972A1 true WO2022036972A1 (fr) 2022-02-24

Family

ID=73229690

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/138131 WO2022036972A1 (fr) 2020-08-17 2020-12-21 Procédé et appareil de segmentation d'image, dispositif électronique et support de stockage

Country Status (3)

Country Link
CN (1) CN111899268B (fr)
TW (1) TW202209254A (fr)
WO (1) WO2022036972A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114862869A (zh) * 2022-03-30 2022-08-05 北京理工大学 基于ct影像的肾脏组织分割方法及装置
CN114913187A (zh) * 2022-05-25 2022-08-16 北京百度网讯科技有限公司 图像分割方法、训练方法、装置、电子设备以及存储介质
CN117635645A (zh) * 2023-12-08 2024-03-01 兰州交通大学 一种复杂稠密网络下的并置多尺度融合边缘检测模型

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111899268B (zh) * 2020-08-17 2022-02-18 上海商汤智能科技有限公司 图像分割方法及装置、电子设备和存储介质
CN113063742A (zh) * 2021-03-24 2021-07-02 和数科技(浙江)有限公司 一种植被生物量测量方法、系统、存储介质及终端
CN113079383B (zh) * 2021-03-25 2023-06-20 北京市商汤科技开发有限公司 视频处理方法、装置、电子设备及存储介质
CN113655973B (zh) * 2021-07-16 2023-12-26 深圳价值在线信息科技股份有限公司 页面分割方法、装置、电子设备及存储介质
CN113284076A (zh) * 2021-07-22 2021-08-20 烟台市综合信息中心(烟台市市民卡管理中心) 基于FloodFill的高铁接触网载流环断裂异常检测方法
CN114445447A (zh) * 2021-12-27 2022-05-06 天翼云科技有限公司 一种图像分割方法、装置、设备及介质
CN114550129B (zh) * 2022-01-26 2023-07-18 江苏联合职业技术学院苏州工业园区分院 一种基于数据集的机器学习模型处理方法和系统
TWI803223B (zh) * 2022-03-04 2023-05-21 國立中正大學 於超頻譜影像之物件偵測方法
CN114693628A (zh) * 2022-03-24 2022-07-01 生仝智能科技(北京)有限公司 病理指标确定方法、装置、设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104143190A (zh) * 2014-07-24 2014-11-12 东软集团股份有限公司 Ct图像中组织的分割方法及系统
CN105894517A (zh) * 2016-04-22 2016-08-24 北京理工大学 基于特征学习的ct图像肝脏分割方法及系统
US20180308237A1 (en) * 2017-04-21 2018-10-25 Samsung Electronics Co., Ltd. Image segmentation method and electronic device therefor
CN109949309A (zh) * 2019-03-18 2019-06-28 安徽紫薇帝星数字科技有限公司 一种基于深度学习的肝脏ct图像分割方法
CN111899268A (zh) * 2020-08-17 2020-11-06 上海商汤智能科技有限公司 图像分割方法及装置、电子设备和存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9972092B2 (en) * 2016-03-31 2018-05-15 Adobe Systems Incorporated Utilizing deep learning for boundary-aware image segmentation
CN109993750B (zh) * 2017-12-29 2020-12-25 中国科学院深圳先进技术研究院 一种手腕骨的分割识别方法及系统、终端及可读存储介质
CN109886243B (zh) * 2019-03-01 2021-03-26 腾讯医疗健康(深圳)有限公司 图像处理方法、装置、存储介质、设备以及系统
CN110910396A (zh) * 2019-10-18 2020-03-24 北京量健智能科技有限公司 一种用于优化图像分割结果的方法和装置
CN110782468B (zh) * 2019-10-25 2023-04-07 北京达佳互联信息技术有限公司 图像分割模型的训练方法及装置及图像分割方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104143190A (zh) * 2014-07-24 2014-11-12 东软集团股份有限公司 Ct图像中组织的分割方法及系统
CN105894517A (zh) * 2016-04-22 2016-08-24 北京理工大学 基于特征学习的ct图像肝脏分割方法及系统
US20180308237A1 (en) * 2017-04-21 2018-10-25 Samsung Electronics Co., Ltd. Image segmentation method and electronic device therefor
CN109949309A (zh) * 2019-03-18 2019-06-28 安徽紫薇帝星数字科技有限公司 一种基于深度学习的肝脏ct图像分割方法
CN111899268A (zh) * 2020-08-17 2020-11-06 上海商汤智能科技有限公司 图像分割方法及装置、电子设备和存储介质

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114862869A (zh) * 2022-03-30 2022-08-05 北京理工大学 基于ct影像的肾脏组织分割方法及装置
CN114913187A (zh) * 2022-05-25 2022-08-16 北京百度网讯科技有限公司 图像分割方法、训练方法、装置、电子设备以及存储介质
CN117635645A (zh) * 2023-12-08 2024-03-01 兰州交通大学 一种复杂稠密网络下的并置多尺度融合边缘检测模型
CN117635645B (zh) * 2023-12-08 2024-06-04 兰州交通大学 一种复杂稠密网络下的并置多尺度融合边缘检测模型

Also Published As

Publication number Publication date
CN111899268B (zh) 2022-02-18
TW202209254A (zh) 2022-03-01
CN111899268A (zh) 2020-11-06

Similar Documents

Publication Publication Date Title
WO2022036972A1 (fr) Procédé et appareil de segmentation d'image, dispositif électronique et support de stockage
US12002212B2 (en) Image segmentation method and apparatus, computer device, and storage medium
WO2022151755A1 (fr) Procédé et appareil de détection de cible, et dispositif électronique, support de stockage, produit de programme informatique et programme informatique
TWI743931B (zh) 網路訓練、圖像處理方法、電子設備和儲存媒體
US10956714B2 (en) Method and apparatus for detecting living body, electronic device, and storage medium
TWI755175B (zh) 圖像分割方法、電子設備和儲存介質
US20210133468A1 (en) Action Recognition Method, Electronic Device, and Storage Medium
CN109658401B (zh) 图像处理方法及装置、电子设备和存储介质
CN113222038B (zh) 基于核磁图像的乳腺病灶分类和定位方法及装置
RU2577188C1 (ru) Способ, аппарат и устройство для сегментации изображения
WO2020134866A1 (fr) Procédé et appareil de détection de point-clé, dispositif électronique, et support de stockage
US20210097715A1 (en) Image generation method and device, electronic device and storage medium
KR20220013404A (ko) 이미지 처리 방법 및 장치, 전자 기기, 저장 매체 및 프로그램 제품
WO2017031901A1 (fr) Procédé et appareil de reconnaissance de visage humain, et terminal
WO2022156235A1 (fr) Procédé et appareil d'entraînement de réseau neuronal, procédé et appareil de traitement d'images et dispositif électronique et support de stockage
KR20220012407A (ko) 이미지 분할 방법 및 장치, 전자 기기 및 저장 매체
CN105678242B (zh) 手持证件模式下的对焦方法和装置
WO2023050691A1 (fr) Procédé et appareil de traitement d'image, dispositif électronique, support de stockage et programme
TWI765386B (zh) 神經網路訓練及圖像分割方法、電子設備和電腦儲存介質
WO2021259390A2 (fr) Procédé et appareil de détection de plaques calcifiées sur des artères coronaires
WO2022022350A1 (fr) Procédé et appareil de traitement d'images, dispositif électronique, support d'enregistrement, et produit programme d'ordinateur
CN110009599A (zh) 肝占位检测方法、装置、设备及存储介质
CN114820584A (zh) 肺部病灶定位装置
CN115170464A (zh) 肺图像的处理方法、装置、电子设备和存储介质
CN111640114B (zh) 图像处理方法及装置

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2021576593

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20227001101

Country of ref document: KR

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20950173

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11.07.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20950173

Country of ref document: EP

Kind code of ref document: A1