CN110211134B - Image segmentation method and device, electronic equipment and storage medium


Info

Publication number: CN110211134B
Authority: CN (China)
Prior art keywords: image, image segmentation, result, interactive operation, segmentation result
Legal status: Active (assumed; not a legal conclusion)
Application number: CN201910464349.3A
Other languages: Chinese (zh)
Other versions: CN110211134A
Inventors: 宋涛 (Song Tao), 朱洁茹 (Zhu Jieru)
Current Assignee: Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee: Shanghai Sensetime Intelligent Technology Co Ltd
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd
Priority to: CN201910464349.3A
Publication of application: CN110211134A
Grant and publication: CN110211134B

Classifications

    • G06T7/11 Region-based segmentation (under G06T7/00 Image analysis; G06T7/10 Segmentation; edge detection)
    • G06T7/70 Determining position or orientation of objects or cameras (under G06T7/00 Image analysis)
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration (under G06T7/00 Image analysis)
    • G16H30/40 ICT specially adapted for the handling or processing of medical images, e.g. editing (under G16H30/00 ICT specially adapted for the handling or processing of medical images)
    • G06T2207/30004 Biomedical image processing (under G06T2207/30 Subject or context of image processing; indexing scheme for image analysis or enhancement)

Abstract

The present disclosure relates to an image segmentation method and apparatus, an electronic device, and a storage medium. The method includes: performing image segmentation on an image according to the region position of an object to be segmented in the image, to obtain an image segmentation result; acquiring a first interactive operation when a mis-segmented region exists in the image segmentation result; and responding to the first interactive operation to obtain an annotation result for the mis-segmented region, and correcting the image segmentation result according to the annotation result. The method and apparatus improve the accuracy of the image segmentation result, so that a lesion can be located in time.

Description

Image segmentation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to an image segmentation method and apparatus, an electronic device, and a storage medium.
Background
Deep learning has developed rapidly and achieved remarkable results in the field of image segmentation. However, deep-learning-based image segmentation relies on manual labeling to obtain annotation results, the accuracy of which is not sufficiently stable, so the accuracy of the resulting image segmentation is low. The related art offers no effective solution to this problem.
Disclosure of Invention
The present disclosure provides an image segmentation technical solution.
According to an aspect of the present disclosure, there is provided an image segmentation method, the method including:
performing image segmentation on an image according to the region position of an object to be segmented in the image, to obtain an image segmentation result;
acquiring a first interactive operation when a mis-segmented region exists in the image segmentation result;
and responding to the first interactive operation to obtain an annotation result for the mis-segmented region, and correcting the image segmentation result according to the annotation result.
By adopting the method and apparatus, the image is segmented according to the region position of the object to be segmented to obtain an image segmentation result, a first interactive operation can be acquired, and an annotation result for the mis-segmented region is obtained in response to that operation, so that annotation is automated through interaction. The segmentation result is then corrected automatically according to the automatically obtained annotation result, which improves the accuracy of the image segmentation result so that a lesion can be located in time.
In a possible implementation manner, when the image is a 2D image, performing image segmentation on the image according to the region position of the object to be segmented, to obtain an image segmentation result, includes:
determining extraction parameters corresponding to the region position according to a second interactive operation;
and obtaining a rectangular frame according to the extraction parameters, and performing image segmentation on the 2D image according to the rectangular frame to obtain a 2D image segmentation result.
With this implementation, the extraction parameters are obtained from the second interactive operation and the 2D image is segmented according to the rectangular frame derived from those parameters, so the 2D image segmentation result is obtained through an intuitive, visual interaction.
In a possible implementation manner, performing image segmentation on the 2D image according to the rectangular frame to obtain a 2D image segmentation result includes:
inputting the 2D image and the rectangular region obtained from the rectangular frame into a 2D image segmentation network, which outputs the 2D image segmentation result.
With this implementation, the 2D image segmentation result is produced by a 2D image segmentation network, which is more accurate than manual processing and improves the accuracy of the image segmentation result.
In a possible implementation manner, acquiring the first interactive operation when a mis-segmented region exists in the image segmentation result includes:
acquiring the first interactive operation when the mis-segmented region is one in which the target object belonging to the foreground has been mis-segmented into the background;
and responding to the first interactive operation to obtain the annotation result for the mis-segmented region includes:
parsing the first interactive operation according to preset operation description information to obtain a first annotation result corresponding to the operation, the first annotation result being identified by first label information, where the first label information indicates that the target object belonging to the foreground has been mis-segmented into the background.
With this implementation, the first interactive operation is acquired based on the identification of the mis-segmented region, annotation is performed automatically from the first interactive operation to obtain the first annotation result corresponding to it, and the segmentation result is corrected automatically according to this first annotation result, which improves the accuracy of the image segmentation result.
In a possible implementation manner, the first interactive operation includes any one of: an input operation with the left mouse button, a designated first touch operation, and a designated first sliding operation.
With this implementation, the first interactive operation is an intuitive operation, which makes the automatic annotation more convenient and easier to use.
In a possible implementation manner, correcting the image segmentation result according to the annotation result includes:
inputting the 2D image, the 2D image segmentation result, and the first annotation result into a 2D image correction network, which outputs a corrected 2D image segmentation result.
With this implementation, the corrected 2D image segmentation result is produced by the 2D image correction network, which is more accurate than manual processing and improves the accuracy of the image segmentation result.
In a possible implementation manner, acquiring the first interactive operation when a mis-segmented region exists in the image segmentation result includes:
acquiring the first interactive operation when the mis-segmented region is one in which a part belonging to the background has been mis-segmented into the target object serving as the foreground;
and responding to the first interactive operation to obtain the annotation result for the mis-segmented region includes:
parsing the first interactive operation according to preset operation description information to obtain a second annotation result corresponding to the operation, the second annotation result being identified by second label information, where the second label information indicates that a part belonging to the background has been mis-segmented into the target object serving as the foreground.
With this implementation, the first interactive operation is acquired based on the identification of the mis-segmented region, annotation is performed automatically from the first interactive operation to obtain the second annotation result corresponding to it, and the segmentation result is corrected automatically according to this second annotation result, which improves the accuracy of the image segmentation result.
In a possible implementation manner, the first interactive operation includes any one of: an input operation with the right mouse button, a designated second touch operation, and a designated second sliding operation.
With this implementation, the first interactive operation is an intuitive operation, which makes the automatic annotation more convenient and easier to use.
In a possible implementation manner, correcting the image segmentation result according to the annotation result includes:
inputting the 2D image, the 2D image segmentation result, and the second annotation result into a 2D image correction network, which outputs a corrected 2D image segmentation result.
With this implementation, the corrected 2D image segmentation result is produced by the 2D image correction network, which is more accurate than manual processing and improves the accuracy of the image segmentation result.
In a possible implementation manner, during the correction of the image segmentation result, the image data processed by the 2D image correction network is pixel data calibrated in a two-dimensional coordinate system.
With this implementation, the image data concerned is pixel data calibrated in a two-dimensional coordinate system; for a 2D image, the 2D image segmentation result can thus be obtained through the image correction network, and its accuracy can be improved.
According to an aspect of the present disclosure, there is provided an image segmentation apparatus, the apparatus including:
a segmentation module, configured to perform image segmentation on an image according to the region position of an object to be segmented in the image, to obtain an image segmentation result;
an operation acquisition module, configured to acquire a first interactive operation when a mis-segmented region exists in the image segmentation result;
and an operation response module, configured to respond to the first interactive operation, obtain an annotation result for the mis-segmented region, and correct the image segmentation result according to the annotation result.
In a possible implementation manner, when the image is a 2D image, the segmentation module is further configured to:
determine extraction parameters corresponding to the region position according to the second interactive operation;
and obtain a rectangular frame according to the extraction parameters, and perform image segmentation on the 2D image according to the rectangular frame to obtain a 2D image segmentation result.
In a possible implementation manner, the segmentation module is further configured to:
input the 2D image and the rectangular region obtained from the rectangular frame into a 2D image segmentation network, which outputs a 2D image segmentation result.
In a possible implementation manner, the operation acquisition module is further configured to:
acquire the first interactive operation when the mis-segmented region is one in which the target object belonging to the foreground has been mis-segmented into the background;
and the operation response module is further configured to:
parse the first interactive operation according to preset operation description information to obtain a first annotation result corresponding to the operation, the first annotation result being identified by first label information, where the first label information indicates that the target object belonging to the foreground has been mis-segmented into the background.
In a possible implementation manner, the operation response module is further configured to:
input the 2D image, the 2D image segmentation result, and the first annotation result into a 2D image correction network, which outputs a corrected 2D image segmentation result.
In a possible implementation manner, the operation acquisition module is further configured to:
acquire the first interactive operation when the mis-segmented region is one in which a part belonging to the background has been mis-segmented into the target object serving as the foreground;
and the operation response module is further configured to:
parse the first interactive operation according to preset operation description information to obtain a second annotation result corresponding to the operation, the second annotation result being identified by second label information, where the second label information indicates that a part belonging to the background has been mis-segmented into the target object serving as the foreground.
In a possible implementation manner, the operation response module is further configured to:
input the 2D image, the 2D image segmentation result, and the second annotation result into a 2D image correction network, which outputs a corrected 2D image segmentation result.
In a possible implementation manner, during the correction of the image segmentation result, the image data processed by the 2D image correction network is pixel data calibrated in a two-dimensional coordinate system.
According to an aspect of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the above-described image segmentation method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described image segmentation method.
In the embodiments of the present disclosure, an image (such as a 2D image) is segmented according to the region position of the object to be segmented in the image, to obtain an image segmentation result (such as a 2D image segmentation result); a first interactive operation is acquired when a mis-segmented region exists in the image segmentation result; and, in response to the first interactive operation, an annotation result for the mis-segmented region is obtained and the image segmentation result is corrected according to the annotation result. In this way, annotation is automated through the interactive operation, and the segmentation result can be corrected automatically according to the automatically obtained annotation result, which improves the accuracy of the image segmentation result so that a lesion can be located in time.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flow chart of an image segmentation method according to an embodiment of the present disclosure.
Fig. 2 shows a flow chart of an image segmentation method according to an embodiment of the present disclosure.
Fig. 3 shows a flow chart of image segmentation based on automatic interaction for an image according to the present disclosure.
Fig. 4 illustrates a block diagram of an image segmentation apparatus according to an embodiment of the present disclosure.
Fig. 5 shows a block diagram of an electronic device according to an embodiment of the disclosure.
Fig. 6 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
One application direction of image processing is to optimize the processing of medical images so as to obtain a clearer view of lesions, enabling a doctor to accurately understand a patient's overall condition. Medical image segmentation aims to segment the parts of a medical image that carry a particular meaning (such as a lesion) and to extract their relevant features. It is a key technology in the medical image analysis pipeline and plays an ever greater role in imaging medicine. Image segmentation is not only an indispensable means of extracting quantitative information about specific tissues from medical images, but also a preprocessing step and prerequisite for visualization. Segmented images are widely used in application scenarios such as quantitative analysis of tissue volume, diagnosis, localization of diseased tissue, study of anatomical structure, treatment planning, partial volume correction of functional imaging data, and computer-guided surgery. In the field of medical imaging, diagnosis and evaluation depend on image acquisition and image interpretation. With the rapid development and popularization of medical imaging equipment, imaging technologies now include magnetic resonance imaging (MRI), computed tomography (CT), ultrasound, positron emission tomography (PET), and the like, and large amounts of medical image data are collected at ever higher speed and resolution; however, the image processing of these medical images is still largely performed manually, which is inefficient, highly subjective, and prone to fatigue-induced error.
Image segmentation can be performed with deep learning; it is essentially pixel-level classification, i.e., deciding the class of every pixel in the image. A convolutional neural network learns end to end and directly outputs a pixel-level classification of the image. A convolutional neural network based on the U-Net structure uses an encoder-decoder architecture with skip connections, which extracts and propagates image features more effectively; by decoding the features into a pixel-level classification, it achieves good results on image segmentation tasks. However, segmentation methods based on voxels (pixels) or on regions suffer from problems such as difficult threshold selection and sensitivity to image noise. Related work on deep-learning-based image segmentation simply transfers generic segmentation methods to the field of medical image segmentation and does not solve these problems well. Medical applications place strict requirements on segmentation results, and annotation that relies purely on a deep learning method (manual labeling) is not sufficiently stable in accuracy. Moreover, deep learning methods adapt poorly to unusual image types: achieving a good segmentation effect on a specific type of medical image typically requires large amounts of finely annotated segmentation data for training. These problems make it impractical to apply deep learning directly to real medical image segmentation tasks.
A deep-learning-based interactive segmentation scheme can handle a variety of segmentation targets given only a rough rectangular region, provided by an annotator, in which the target to be segmented lies. If the segmentation result contains errors, the annotator interactively labels the mis-segmented regions according to a preset rule, and the resulting annotations are fed into a convolutional neural network that corrects the errors, thereby automatically correcting the mis-segmented regions found in the image segmentation result. By replacing the manual relabeling of the related art, this interactive segmentation scheme improves annotation efficiency as well as the accuracy and stability of image segmentation, and it can handle multiple types of segmentation targets at the same time.
Fig. 1 shows a flowchart of an image segmentation method according to an embodiment of the present disclosure. The method is applied to an image segmentation apparatus and may be executed by a terminal device, a server, or another processing device, where the terminal device may be a user equipment (UE), a mobile device, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the image segmentation method may be implemented by a processor calling computer-readable instructions stored in a memory. As shown in Fig. 1, the process includes:
step S101, according to the area position of the image where the object to be segmented is located, image segmentation is carried out on the image, and an image segmentation result is obtained.
In one example, when the image is a 2D image, a rectangular frame, for example one formed from a rectangular region given by an annotator, may be used to select the object to be segmented in the current image. The rectangular region represents the region position of the object to be segmented in the 2D image. The object to be segmented may be a lesion in a medical image, in which case the region position may also be called the lesion region position.
In one example, the image segmentation of the 2D image may be performed with a trained model, such as a 2D image segmentation network model.
Step S102, acquiring a first interactive operation when a mis-segmented region exists in the image segmentation result.
Step S103, responding to the first interactive operation to obtain an annotation result for the mis-segmented region, and correcting the image segmentation result according to the annotation result.
In one example, the 2D image is corrected by using a trained model, such as a 2D image correction network model.
With this method, if an error appears in the image segmentation result (such as a 2D image segmentation result), no manual re-annotation is needed: the annotator interactively labels the mis-segmented region according to a preset rule, and the resulting annotation is input into a convolutional neural network that corrects the error, so the mis-segmented regions found in the image segmentation result are corrected automatically. The interactive segmentation scheme thus improves annotation efficiency as well as the accuracy and stability of image segmentation.
Fig. 2 shows a flowchart of an image segmentation method according to an embodiment of the present disclosure. As with the method of Fig. 1, it is applied to an image segmentation apparatus, may be executed by a terminal device, a server, or another processing device, and in some possible implementations may be implemented by a processor calling computer-readable instructions stored in a memory. As shown in Fig. 2, the process includes:
step S201, determining extraction parameters corresponding to the position of the 2D image area where the object to be segmented is located.
In one example, the extraction parameters corresponding to the region location may be determined based on an interaction (e.g., a second interaction). The target area can be obtained after the 2D image is segmented according to the approximate area position of the object to be segmented in the 2D image given by the user (annotator), namely, the rectangular frame obtained through the second interactive operation, so that the 2D image can be segmented.
It should be noted that the first interactive operation and the second interactive operation do not represent the execution sequence of the interactive operations, but the first interactive operation and the second interactive operation are respectively referred to as "first" and "second" to refer to different interactive operations. Firstly, a rectangular frame is obtained based on interactive operation in the first step, 2D image segmentation is carried out, a rough segmentation result is obtained, then wrong regions in the 2D image segmentation are labeled based on point labeling of the interactive operation in the second step, and the wrong regions are corrected according to the labeling result.
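As a concrete illustration of the extraction parameters, the following is a minimal sketch of how a drag-style second interactive operation (press at one corner, release at the opposite corner) might be turned into a rectangular frame; the function and parameter names are hypothetical, not taken from the disclosure:

```python
def rect_from_drag(start_xy, end_xy, image_shape):
    """Turn a drag gesture (two corner points) into a clamped rectangle.

    start_xy, end_xy: (x, y) pixel coordinates of the press and release
    events of the second interactive operation; image_shape: (height, width).
    Returns (x0, y0, x1, y1), usable as extraction parameters of the
    rectangular frame.
    """
    h, w = image_shape
    x0, x1 = sorted((start_xy[0], end_xy[0]))
    y0, y1 = sorted((start_xy[1], end_xy[1]))
    # Clamp to the image bounds so the frame is always a valid region.
    return max(0, x0), max(0, y0), min(w - 1, x1), min(h - 1, y1)
```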
Step S202, obtaining a rectangular frame according to the extraction parameters, and performing image segmentation on the 2D image according to the rectangular frame to obtain a 2D image segmentation result.
In a possible implementation manner of the present disclosure, the 2D image and the rectangular region obtained from the rectangular frame are input into a 2D image segmentation network, which outputs the 2D image segmentation result.
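The disclosure does not specify how the rectangular region is presented to the segmentation network; one plausible encoding, used in several interactive-segmentation systems, is to rasterize the frame into a binary mask and stack it with the image as an extra input channel. A minimal sketch assuming NumPy and PyTorch (the channel layout and the seg_net interface are assumptions):

```python
import numpy as np
import torch

def make_segmentation_input(image, rect):
    """Stack a 2D grayscale image with a binary mask of the rectangular frame.

    image: (H, W) array; rect: (x0, y0, x1, y1) extraction parameters.
    Returns a (1, 2, H, W) float tensor: channel 0 is the image,
    channel 1 is 1 inside the rectangular region and 0 elsewhere.
    """
    x0, y0, x1, y1 = rect
    box_mask = np.zeros(image.shape, dtype=np.float32)
    box_mask[y0:y1 + 1, x0:x1 + 1] = 1.0
    stacked = np.stack([image.astype(np.float32), box_mask])
    return torch.from_numpy(stacked).unsqueeze(0)

# seg_net is assumed to be a trained 2D image segmentation network that
# maps this 2-channel input to per-pixel foreground probabilities:
# seg_result = torch.sigmoid(seg_net(make_segmentation_input(img, rect))) > 0.5
```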
Step S203, acquiring a first interactive operation when a mis-segmented region exists in the 2D image segmentation result.
Step S204, responding to the first interactive operation to obtain an annotation result for the mis-segmented region, and correcting the 2D image segmentation result according to the annotation result.
A mis-segmented region falls into one of two cases. In the first case, a part belonging to the foreground is mis-segmented into the background; here the foreground part is the target object the user wants to process, i.e., the exact location of the lesion. In the second case, a part belonging to the background is mis-segmented into the target object serving as the foreground. The two cases are handled as follows.
Case one: when the 2D image segmentation result contains a mis-segmented region in which the target object belonging to the foreground has been mis-segmented into the background, the first interactive operation is acquired. In response to the first interactive operation, it is parsed according to preset operation description information to obtain a first annotation result corresponding to the operation. The first annotation result is identified by first label information (for example, a green label point), which indicates that the target object belonging to the foreground has been mis-segmented into the background. The first interactive operation includes any one of: an input operation with the left mouse button, a designated first touch operation, and a designated first sliding operation. The disclosure is not limited to left-button clicks; a screen touch operation (for example, a single finger tap in the current region may serve as the first touch operation) or a trajectory sliding operation (for example, sliding a finger or the mouse cursor from left to right in the current region may serve as the first sliding operation) may also be used.
In this case, the 2D image, the 2D image segmentation result, and the first annotation result may be input into the 2D image correction network, which outputs a corrected 2D image segmentation result.
Case two: when the 2D image segmentation result contains a mis-segmented region in which a part belonging to the background has been mis-segmented into the target object serving as the foreground, the first interactive operation is acquired. In response to the first interactive operation, it is parsed according to preset operation description information to obtain a second annotation result corresponding to the operation. The second annotation result is identified by second label information (for example, a label point of another color, such as red, distinct from the green label point), which indicates that a part belonging to the background has been mis-segmented into the target object serving as the foreground. The first interactive operation includes any one of: an input operation with the right mouse button, a designated second touch operation, and a designated second sliding operation. The disclosure is not limited to right-button clicks; a screen touch operation (for example, a double tap, i.e., two touches, in the current region may serve as the second touch operation) or a trajectory sliding operation (for example, sliding from right to left in the current region may serve as the second sliding operation) may also be used.
In this case, the 2D image, the 2D image segmentation result, and the second annotation result may be input into the 2D image correction network, which outputs a corrected 2D image segmentation result.
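To make the two cases concrete, the sketch below encodes left clicks (first label information: foreground mis-segmented into the background) and right clicks (second label information: background mis-segmented into the foreground) as two point maps, and stacks them with the image and the current segmentation result to form the correction network's input. The 4-channel layout is an assumption, not specified by the disclosure; in practice each click is often widened into a small disc or Gaussian rather than kept as a single pixel:

```python
import numpy as np
import torch

def make_correction_input(image, seg_result, fg_clicks, bg_clicks):
    """Assemble a correction-network input from the interactive annotations.

    image: (H, W) array; seg_result: (H, W) binary mask from the previous
    step; fg_clicks: (x, y) points marking foreground wrongly labeled as
    background (first label information); bg_clicks: (x, y) points marking
    background wrongly labeled as foreground (second label information).
    Returns a (1, 4, H, W) float tensor.
    """
    fg_map = np.zeros(image.shape, dtype=np.float32)
    bg_map = np.zeros(image.shape, dtype=np.float32)
    for x, y in fg_clicks:
        fg_map[y, x] = 1.0
    for x, y in bg_clicks:
        bg_map[y, x] = 1.0
    channels = np.stack([image.astype(np.float32),
                         seg_result.astype(np.float32), fg_map, bg_map])
    return torch.from_numpy(channels).unsqueeze(0)
```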
In a possible implementation manner of the present disclosure, during processing by the 2D image correction network, the image data concerned is pixel data calibrated in a two-dimensional coordinate system, i.e., pixel data indexed by the horizontal axis x and the vertical axis y.
In a possible implementation manner of the present disclosure, when the 2D image correction network corrects a mis-segmented region of an image, the correction is a local, dynamically scoped update within a certain range around the mis-segmented region, rather than a wholesale replacement of the result. The purpose of the local update is to process the pixels in the neighborhood of a label point and, by resolving the relationships between pixels, determine which pixels farther from the label point still belong to the region to be locally updated. One concrete way to resolve these pixel relationships for segmentation is the exponential-geodesic-distance-guided graph cut (egd-guided graph cut) algorithm. In an example where the local update is completed with egd-guided graph cut, segmentation is performed with graph cut: the image must first be converted into a graph structure, with each pixel as a node and the distance between pixels as the weight of an edge. Graph cut needs a pixel-distance measure to initialize all edges, and a traditional graph cut cannot achieve a local update; therefore the edges are initialized under the guidance of exponential geodesic distances (exponentially decaying geodesic distances), i.e., exponentially decayed distances are used to initialize the edge weights for the graph cut algorithm. Local updating then becomes possible, and its range is dynamic. In other words, the exponential distance measure derives a candidate region range around the annotation point, and the graph cut algorithm then optimizes within that range to realize the segmentation and achieve the effect of a local update. As for geodesic distance: it is a way of measuring the distance between pixels in an image that captures not only spatial distance but also semantic distance. Consider two pixels that are spatially adjacent, where the first pixel belongs to a cat and the second to the blanket the cat is lying on; a measure is needed to make clear that, although spatially close, these two pixels are in fact "semantically distant", and geodesic distance provides such a measure.
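The geodesic distance itself can be computed with a Dijkstra-style sweep over the pixel grid, where the cost of stepping between neighboring pixels combines the spatial step with the intensity difference, so that the distance stays small inside homogeneous regions and grows sharply across edges. The following is a minimal NumPy sketch under that assumption (the cost formula and the parameter lam are illustrative, not the patent's exact definition); the exponential decay at the end yields the decaying weights described above for initializing the graph-cut edges:

```python
import heapq
import numpy as np

def geodesic_distance(image, seeds, lam=1.0):
    """Geodesic distance from a set of seed pixels (e.g. label points).

    image: (H, W) grayscale array; seeds: iterable of (x, y) points.
    Runs Dijkstra on the 4-connected pixel grid; each step costs
    1 + lam * |intensity difference|, so paths that cross strong edges
    accumulate large distances.
    """
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for x, y in seeds:
        dist[y, x] = 0.0
        heapq.heappush(heap, (0.0, y, x))
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue  # stale entry
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                step = 1.0 + lam * abs(float(image[ny, nx]) - float(image[y, x]))
                if d + step < dist[ny, nx]:
                    dist[ny, nx] = d + step
                    heapq.heappush(heap, (d + step, ny, nx))
    return dist

# Exponentially decayed geodesic distance, usable to confine the
# graph-cut update to a dynamic neighborhood of the label points:
# weights = np.exp(-geodesic_distance(img, clicks) / sigma)
```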
Application example:
fig. 3 illustrates a flow diagram for image segmentation based on automatic interaction for an image (e.g., a 2D image) according to the present disclosure. The method comprises the following steps:
firstly, a annotator participating in interactive segmentation gives the approximate position of an object to be segmented in an image (such as a rectangular region 21 shown in fig. 3) by drawing a rectangular frame in the image, the position information of the rectangular region 21 is encoded and input into a 2D image segmentation network 23 together with an image 22 to be segmented for segmentation, and a segmentation result 24 is output through the 2D image segmentation network 23 in the first step.
The 2D image segmentation network 23 may be a convolutional neural network with a U-Net structure. It receives the original image (the image to be segmented 22) and the rectangular region given by the annotator as input, and identifies and segments objects such as lesions or organs within the rectangular region.
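The disclosure names the U-Net structure but gives no architectural details; the following is a deliberately tiny PyTorch sketch of that encoder-decoder-with-skip-connection pattern, with 2 input channels matching the image-plus-rectangle encoding sketched earlier (a real network would use several resolution levels and far more channels, and H and W are assumed even):

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """One-level encoder-decoder with a skip connection (illustrative only)."""

    def __init__(self):
        super().__init__()
        self.enc = conv_block(2, 16)              # image + rectangle mask
        self.down = nn.MaxPool2d(2)
        self.mid = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)             # 16 skip + 16 upsampled
        self.head = nn.Conv2d(16, 1, 1)           # per-pixel foreground logit

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        return self.head(self.dec(torch.cat([e, u], dim=1)))

# probs = torch.sigmoid(TinyUNet()(make_segmentation_input(img, rect)))
```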
Second, for the segmentation result 24 output by the 2D image segmentation network 23 in the first step: if the annotator is satisfied with the result, segmentation ends; if not, the annotator clicks on the mis-segmented regions to annotate them. The annotations fall into two categories: the first, the first label point 261 in Fig. 3, marks a part belonging to the foreground that was mis-segmented into the background; the second, the second label point 262 in Fig. 3, marks a part belonging to the background that was mis-segmented into the foreground.
Third, the original image (the image to be segmented 22), the currently output segmentation result 24, and the mis-segmentation labels are input together into the second-step 2D image correction network 25, which corrects the segmentation result and outputs the corrected segmentation result 27. After any number of interactions between the 2D image correction network 25 and the annotator, stable and fast correction of the medical image segmentation is achieved.
The 2D image correction network 25 may be a convolutional neural network or a graph neural network. It receives the original image (the image to be segmented 22), the segmentation result 24 output in the first step, and the mis-segmentation labels given by the annotator as input, and performs a local correction of the previous segmentation result; the locally updated correction may be completed using methods such as egd-guided graph cut.
It will be understood by those skilled in the art that, in the above methods of the present disclosure, the order in which the steps are written does not imply a strict execution order or impose any limitation on implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Without departing from the underlying principles and logic, the above method embodiments may be combined with one another to form combined embodiments, which, for brevity, are not described again in this disclosure.
In addition, the present disclosure also provides an image segmentation apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the image segmentation methods provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding parts of the method section, which are not repeated here.
Fig. 4 shows a block diagram of an image segmentation apparatus according to an embodiment of the present disclosure. As shown in Fig. 4, the image segmentation apparatus of this embodiment includes: a segmentation module 31, configured to perform image segmentation on an image according to the region position of the object to be segmented in the image, to obtain an image segmentation result; an operation acquisition module 32, configured to acquire a first interactive operation when a mis-segmented region exists in the image segmentation result; and an operation response module 33, configured to respond to the first interactive operation, obtain an annotation result for the mis-segmented region, and correct the image segmentation result according to the annotation result.
In a possible implementation manner of the present disclosure, when the image is a 2D image, the segmentation module is further configured to: determine extraction parameters corresponding to the region position according to the second interactive operation; and obtain a rectangular frame according to the extraction parameters, and perform image segmentation on the 2D image according to the rectangular frame to obtain a 2D image segmentation result.
In a possible implementation manner of the present disclosure, the segmentation module is further configured to: input the 2D image and the rectangular region obtained from the rectangular frame into a 2D image segmentation network, which outputs a 2D image segmentation result.
In a possible implementation manner of the present disclosure, the operation acquisition module is further configured to: acquire the first interactive operation when the mis-segmented region is one in which the target object belonging to the foreground has been mis-segmented into the background; and the operation response module is further configured to: parse the first interactive operation according to preset operation description information to obtain a first annotation result corresponding to the operation, the first annotation result being identified by first label information, where the first label information indicates that the target object belonging to the foreground has been mis-segmented into the background. The first interactive operation includes any one of: an input operation with the left mouse button, a designated first touch operation, and a designated first sliding operation.
In a possible implementation manner of the present disclosure, the operation response module is further configured to: input the 2D image, the 2D image segmentation result, and the first annotation result into a 2D image correction network, which outputs a corrected 2D image segmentation result.
In a possible implementation manner of the present disclosure, the operation acquisition module is further configured to: acquire the first interactive operation when the mis-segmented region is one in which a part belonging to the background has been mis-segmented into the target object serving as the foreground; and the operation response module is further configured to: parse the first interactive operation according to preset operation description information to obtain a second annotation result corresponding to the operation, the second annotation result being identified by second label information, where the second label information indicates that a part belonging to the background has been mis-segmented into the target object serving as the foreground. The first interactive operation includes any one of: an input operation with the right mouse button, a designated second touch operation, and a designated second sliding operation.
In a possible implementation manner of the present disclosure, the operation response module is further configured to: input the 2D image, the 2D image segmentation result, and the second annotation result into a 2D image correction network, which outputs a corrected 2D image segmentation result.
In a possible implementation manner of the present disclosure, during processing by the 2D image correction network, the image data concerned is pixel data calibrated in a two-dimensional coordinate system.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 5 is a block diagram of an electronic device 800 according to an exemplary embodiment. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
Referring to fig. 5, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 6 is a block diagram illustrating an electronic device 900 in accordance with an example embodiment. For example, the electronic device 900 may be provided as a server. Referring to fig. 6, electronic device 900 includes a processing component 922, which further includes one or more processors, and memory resources, represented by memory 932, for storing instructions, such as applications, that are executable by processing component 922. The application programs stored in memory 932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 922 is configured to execute instructions to perform the above-described methods.
The electronic device 900 may also include a power component 926 configured to perform power management of the electronic device 900, a wired or wireless network interface 950 configured to connect the electronic device 900 to a network, and an input/output (I/O) interface 958. The electronic device 900 may operate based on an operating system stored in the memory 932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 932, is also provided that includes computer program instructions executable by the processing component 922 of the electronic device 900 to perform the above-described method.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), may be personalized by utilizing state information of the computer-readable program instructions, and may execute the computer-readable program instructions so as to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (16)

1. A method of image segmentation, the method comprising:
performing image segmentation on an image according to a region position, in the image, of an object to be segmented, to obtain an image segmentation result;
acquiring a first interactive operation in a case where a mis-segmented region exists in the image segmentation result; and
obtaining, in response to the first interactive operation, a labeling result for the mis-segmented region, and correcting the image segmentation result according to the labeling result;
wherein, in a case where the image is a 2D image, the performing image segmentation on the image according to the region position, in the image, of the object to be segmented to obtain the image segmentation result comprises:
determining, according to a second interactive operation, extraction parameters corresponding to the region position; and
obtaining a rectangular box according to the extraction parameters, inputting the 2D image and encoded position information of a rectangular region obtained from the rectangular box into a 2D image segmentation network, and outputting a 2D image segmentation result.
2. The method according to claim 1, wherein the acquiring a first interactive operation in a case where a mis-segmented region exists in the image segmentation result comprises:
acquiring the first interactive operation in a case where the mis-segmented region is a region in which a target object belonging to the foreground has been mistakenly segmented as background;
and wherein the obtaining, in response to the first interactive operation, a labeling result for the mis-segmented region comprises:
parsing the first interactive operation according to preset operation description information to obtain a first labeling result corresponding to the first interactive operation, the first labeling result being identified by first label information, wherein the first label information indicates that the target object belonging to the foreground has been mistakenly segmented as background.
3. The method according to claim 2, wherein the first interactive operation comprises any one of: an input operation of a left mouse button, a designated first touch operation, and a designated first slide operation.
4. The method according to claim 2, wherein the correcting the image segmentation result according to the labeling result comprises:
inputting the 2D image, the 2D image segmentation result, and the first labeling result into a 2D image correction network, and outputting a corrected 2D image segmentation result.
5. The method according to claim 1, wherein the acquiring a first interactive operation in a case where a mis-segmented region exists in the image segmentation result comprises:
acquiring the first interactive operation in a case where the mis-segmented region is a region in which a part belonging to the background has been mistakenly segmented as the target object in the foreground;
and wherein the obtaining, in response to the first interactive operation, a labeling result for the mis-segmented region comprises:
parsing the first interactive operation according to preset operation description information to obtain a second labeling result corresponding to the first interactive operation, the second labeling result being identified by second label information, wherein the second label information indicates that the part belonging to the background has been mistakenly segmented as the foreground target object.
6. The method according to claim 5, wherein the first interactive operation comprises any one of: an input operation of a right mouse button, a designated second touch operation, and a designated second slide operation.
7. The method according to claim 5, wherein the correcting the image segmentation result according to the labeling result comprises:
inputting the 2D image, the 2D image segmentation result, and the second labeling result into a 2D image correction network, and outputting a corrected 2D image segmentation result.
8. The method according to any one of claims 1 to 7, wherein, in the correcting of the image segmentation result, the image data processed by the 2D image correction network is pixel data calibrated in a two-dimensional coordinate system.
9. An image segmentation apparatus, the apparatus comprising:
a segmentation module configured to perform image segmentation on an image according to a region position, in the image, of an object to be segmented, to obtain an image segmentation result;
an operation acquisition module configured to acquire a first interactive operation in a case where a mis-segmented region exists in the image segmentation result; and
an operation response module configured to obtain, in response to the first interactive operation, a labeling result for the mis-segmented region, and to correct the image segmentation result according to the labeling result;
wherein, in a case where the image is a 2D image, the segmentation module is further configured to:
determine, according to a second interactive operation, extraction parameters corresponding to the region position; and
obtain a rectangular box according to the extraction parameters, input the 2D image and encoded position information of a rectangular region obtained from the rectangular box into a 2D image segmentation network, and output a 2D image segmentation result.
10. The apparatus according to claim 9, wherein the operation acquisition module is further configured to:
acquire the first interactive operation in a case where the mis-segmented region is a region in which a target object belonging to the foreground has been mistakenly segmented as background;
and wherein the operation response module is further configured to:
parse the first interactive operation according to preset operation description information to obtain a first labeling result corresponding to the first interactive operation, the first labeling result being identified by first label information, wherein the first label information indicates that the target object belonging to the foreground has been mistakenly segmented as background.
11. The apparatus according to claim 10, wherein the operation response module is further configured to:
input the 2D image, the 2D image segmentation result, and the first labeling result into a 2D image correction network, and output a corrected 2D image segmentation result.
12. The apparatus according to claim 9, wherein the operation acquisition module is further configured to:
acquire the first interactive operation in a case where the mis-segmented region is a region in which a part belonging to the background has been mistakenly segmented as the target object in the foreground;
and wherein the operation response module is further configured to:
parse the first interactive operation according to preset operation description information to obtain a second labeling result corresponding to the first interactive operation, the second labeling result being identified by second label information, wherein the second label information indicates that the part belonging to the background has been mistakenly segmented as the foreground target object.
13. The apparatus according to claim 12, wherein the operation response module is further configured to:
input the 2D image, the 2D image segmentation result, and the second labeling result into a 2D image correction network, and output a corrected 2D image segmentation result.
14. The apparatus according to any one of claims 9 to 13, wherein, in the correction of the image segmentation result, the image data processed by the 2D image correction network is pixel data calibrated in a two-dimensional coordinate system.
15. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 8.
16. A computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of any one of claims 1 to 8.
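The following is a minimal, illustrative sketch in Python (PyTorch) of the 2D segmentation flow recited in claim 1. It is not the patented implementation: the patent discloses neither a network architecture nor the exact form of the encoded position information, so SegNet2D and encode_box_position below are hypothetical stand-ins, and the position information is assumed here to be a binary mask channel covering the rectangular region.

import torch
import torch.nn as nn

class SegNet2D(nn.Module):
    # Hypothetical placeholder for the claimed "2D image segmentation network".
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.body(x)

def encode_box_position(h, w, box):
    # Assumed encoding: a binary channel that is 1 inside the rectangular
    # region (x0, y0, x1, y1) obtained from the rectangular box.
    x0, y0, x1, y1 = box
    mask = torch.zeros(1, h, w)
    mask[:, y0:y1, x0:x1] = 1.0
    return mask

def segment_2d(image, box, net):
    # image: (3, H, W) float tensor; box: rectangle derived from the
    # extraction parameters determined by the second interactive operation.
    pos = encode_box_position(image.shape[1], image.shape[2], box)
    inp = torch.cat([image, pos], dim=0).unsqueeze(0)  # (1, 4, H, W)
    return net(inp)[0, 0]  # (H, W) foreground probability map

net = SegNet2D()
image = torch.rand(3, 128, 128)
seg = segment_2d(image, box=(32, 32, 96, 96), net=net)  # initial result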
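The correction step of claims 2 to 7 can be sketched in the same spirit. Again, everything below is an assumption for illustration: CorrectionNet2D stands in for the claimed "2D image correction network", and the click-to-label mapping and the numeric encodings of the first and second label information are invented for the example. Per the claims, a left-button interaction yields a first labeling result (foreground mistakenly segmented as background), a right-button interaction yields a second labeling result (background mistakenly segmented as foreground), and the image, the current segmentation result, and the labeling result are fed to the correction network together.

import torch
import torch.nn as nn

FIRST_LABEL, SECOND_LABEL = 1.0, -1.0  # assumed encodings of the label information

class CorrectionNet2D(nn.Module):
    # Hypothetical placeholder for the claimed "2D image correction network".
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(5, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.body(x)

def labeling_map(h, w, clicks, button):
    # Parse interactions into a labeling result: left clicks mark foreground
    # mistakenly segmented as background, right clicks the converse.
    value = FIRST_LABEL if button == "left" else SECOND_LABEL
    m = torch.zeros(1, h, w)
    for x, y in clicks:
        m[:, y, x] = value
    return m

def correct_2d(image, seg, clicks, button, net):
    # Stack image (3, H, W), current result (1, H, W), and labeling map (1, H, W).
    label = labeling_map(seg.shape[0], seg.shape[1], clicks, button)
    inp = torch.cat([image, seg.unsqueeze(0), label], dim=0).unsqueeze(0)
    return net(inp)[0, 0]  # corrected segmentation result

net = CorrectionNet2D()
image, seg = torch.rand(3, 128, 128), torch.rand(128, 128)
corrected = correct_2d(image, seg, clicks=[(50, 60)], button="left", net=net)

In practice this loop would repeat: segment, let the user flag remaining mis-segmented regions, correct, and stop once no mis-segmented region is reported.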
CN201910464349.3A 2019-05-30 2019-05-30 Image segmentation method and device, electronic equipment and storage medium Active CN110211134B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910464349.3A CN110211134B (en) 2019-05-30 2019-05-30 Image segmentation method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110211134A (en) 2019-09-06
CN110211134B (en) 2021-11-05

Family

ID=67789802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910464349.3A Active CN110211134B (en) 2019-05-30 2019-05-30 Image segmentation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110211134B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110751659B (en) * 2019-09-27 2022-06-10 北京小米移动软件有限公司 Image segmentation method and device, terminal and storage medium
CN111259184B (en) * 2020-02-27 2022-03-08 厦门大学 Image automatic labeling system and method for new retail
CN112418205A (en) * 2020-11-19 2021-02-26 上海交通大学 Interactive image segmentation method and system based on focusing on wrongly segmented areas
CN112925938A (en) * 2021-01-28 2021-06-08 上海商汤智能科技有限公司 Image annotation method and device, electronic equipment and storage medium
CN113313700B (en) * 2021-06-09 2023-04-07 浙江大学 X-ray image interactive segmentation method based on deep learning
CN113837194A (en) * 2021-09-23 2021-12-24 北京百度网讯科技有限公司 Image processing method, image processing apparatus, electronic device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780536A (en) * 2017-01-13 2017-05-31 深圳市唯特视科技有限公司 A kind of shape based on object mask network perceives example dividing method
CN109087327A (en) * 2018-07-13 2018-12-25 天津大学 A kind of thyroid nodule ultrasonic image division method cascading full convolutional neural networks
CN109598728A (en) * 2018-11-30 2019-04-09 腾讯科技(深圳)有限公司 Image partition method, device, diagnostic system and storage medium
CN109727251A (en) * 2018-12-29 2019-05-07 上海联影智能医疗科技有限公司 The system that lung conditions are divided a kind of quantitatively, method and apparatus
CN109801260A (en) * 2018-12-20 2019-05-24 北京海益同展信息科技有限公司 The recognition methods of livestock number and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103295219B (en) * 2012-03-02 2017-05-10 北京数码视讯科技股份有限公司 Method and device for segmenting image
CN103559719B (en) * 2013-11-20 2016-05-04 电子科技大学 A kind of interactive image segmentation method
CN104899851A (en) * 2014-03-03 2015-09-09 天津医科大学 Lung nodule image segmentation method
CN103971415B (en) * 2014-05-23 2016-06-15 南京大学 The online mask method of a kind of three-dimensional model component
CN109255791A (en) * 2018-07-19 2019-01-22 杭州电子科技大学 A kind of shape collaboration dividing method based on figure convolutional neural networks

Also Published As

Publication number Publication date
CN110211134A (en) 2019-09-06

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant