CN113706562B - Image segmentation method, device and system and cell segmentation method


Info

Publication number
CN113706562B
CN113706562B (application CN202010652532.9A)
Authority
CN
China
Prior art keywords
image
sample
segmentation
segmentation model
edge information
Prior art date
Legal status
Active
Application number
CN202010652532.9A
Other languages
Chinese (zh)
Other versions
CN113706562A (en)
Inventor
田宽
张军
Current Assignee
Tencent Healthcare Shenzhen Co Ltd
Original Assignee
Tencent Healthcare Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Healthcare Shenzhen Co Ltd
Priority to CN202010652532.9A
Publication of CN113706562A
Application granted
Publication of CN113706562B

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 7/00 Image analysis › G06T 7/10 Segmentation; Edge detection › G06T 7/12 Edge-based segmentation
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06F ELECTRIC DIGITAL DATA PROCESSING › G06F 18/00 Pattern recognition › G06F 18/20 Analysing › G06F 18/23 Clustering techniques › G06F 18/232 Non-hierarchical techniques › G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions › G06F 18/23213 Non-hierarchical techniques with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 7/00 Image analysis › G06T 7/10 Segmentation; Edge detection › G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 2207/00 Indexing scheme for image analysis or image enhancement › G06T 2207/10 Image acquisition modality › G06T 2207/10056 Microscopic image
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 2207/00 Indexing scheme for image analysis or image enhancement › G06T 2207/20 Special algorithmic details › G06T 2207/20081 Training; Learning
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 2207/00 Indexing scheme for image analysis or image enhancement › G06T 2207/20 Special algorithmic details › G06T 2207/20084 Artificial neural networks [ANN]

Abstract

The disclosure provides an image segmentation method, device and system and a cell segmentation method, and relates to the field of artificial intelligence. The method comprises: acquiring an original image that contains a plurality of objects to be segmented; and inputting the original image into an image segmentation model, which performs feature extraction on each object to obtain a segmented image. The image segmentation model is obtained by subjecting an image segmentation model to be trained to iterative training followed by post-correction training. The iterative training is performed according to an image sample, a point mark image sample corresponding to the image sample, and a boundary image sample; the post-correction training is performed according to first edge information and third edge information corresponding to a target segmentation image and second edge information corresponding to the image sample, where the target segmentation image is obtained by processing the image sample with the iteratively trained image segmentation model. The disclosed method and device can improve the efficiency and accuracy of image segmentation and reduce cost.

Description

Image segmentation method, device and system and cell segmentation method
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to an image segmentation method, an image segmentation apparatus, an image segmentation system, and a cell segmentation method.
Background
With the rapid development of science, technology and artificial intelligence, increasingly intelligent methods are being sought for detecting and segmenting objects in images.
Taking cell segmentation in pathology as an example: when deep learning is applied to cell segmentation, a clustering method is typically used to expand manual point annotations into pixel-level labels, the point annotations are then used to generate boundary partitions, and finally a segmentation model is trained with the clustering results as positive samples and the partition boundaries as negative samples. The quality of such a model therefore depends heavily on the accuracy of the clustering method; moreover, to keep the clustering reliable, each cluster is deliberately kept smaller than the real cell, so the final segmentation result is inaccurate.
It is noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and therefore may include information that does not constitute prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
The embodiments of the present disclosure provide an image segmentation method, an image segmentation apparatus, an image segmentation system and a cell segmentation method, which can, at least to a certain extent, improve the efficiency and accuracy of image segmentation and reduce cost.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of an embodiment of the present disclosure, there is provided an image segmentation method including: acquiring an original image, wherein the original image comprises a plurality of objects to be segmented; inputting the original image into an image segmentation model, and performing feature extraction on each object through the image segmentation model to obtain a segmented image; the image segmentation model is obtained by performing iterative training and post-correction training on an image segmentation model to be trained, wherein the iterative training is performed according to an image sample, a point mark image sample corresponding to the image sample and a boundary image sample, the post-correction training is performed according to first edge information and third edge information corresponding to a target segmentation image and second edge information corresponding to the image sample, and the target segmentation image is an image obtained by processing the image sample through the image segmentation model after the iterative training.
According to an aspect of an embodiment of the present disclosure, there is provided an image segmentation apparatus including: an image acquisition module for acquiring an original image, the original image comprising a plurality of objects to be segmented; and an image segmentation module for inputting the original image into an image segmentation model and performing feature extraction on each object through the image segmentation model to obtain a segmented image; the image segmentation model is obtained by performing iterative training and post-correction training on an image segmentation model to be trained, wherein the iterative training is performed according to an image sample, a point mark image sample corresponding to the image sample and a boundary image sample, the post-correction training is performed according to first edge information and third edge information corresponding to a target segmentation image and second edge information corresponding to the image sample, and the target segmentation image is an image obtained by processing the image sample through the image segmentation model after the iterative training.
According to an aspect of an embodiment of the present disclosure, there is provided a cell segmentation method including: acquiring an original pathological image, wherein the original pathological image comprises a plurality of cells to be segmented; inputting the original pathological image into an image segmentation model, and performing feature extraction on each cell through the image segmentation model to obtain a cell segmentation image; the image segmentation model is obtained by performing iterative training and post-correction training on an image segmentation model to be trained, wherein the iterative training is performed according to a point label image sample and a boundary image sample corresponding to a pathological image sample, the post-correction training is performed according to first edge information and third edge information corresponding to a target segmentation image and second edge information corresponding to the pathological image sample, and the target segmentation image is an image obtained by processing the pathological image sample through the image segmentation model after the iterative training.
According to an aspect of an embodiment of the present disclosure, there is provided a cell segmentation apparatus including: a pathology image acquisition module for acquiring an original pathology image, the original pathology image including a plurality of cells to be segmented; the cell segmentation module is used for inputting the original pathological image into an image segmentation model, and performing feature extraction on each cell through the image segmentation model to obtain a cell segmentation image; the image segmentation model is obtained by performing iterative training and post-correction training on an image segmentation model to be trained, wherein the iterative training is performed according to a point label image sample and a boundary image sample corresponding to a pathological image sample, the post-correction training is performed according to first edge information and third edge information corresponding to a target segmentation image and second edge information corresponding to the pathological image sample, and the target segmentation image is an image obtained by processing the pathological image sample through the image segmentation model after the iterative training.
According to an aspect of the embodiments of the present disclosure, there is provided a training method of an image segmentation model, including: acquiring an image sample, and an initial point annotation image sample and a boundary image sample corresponding to the image sample; iteratively training an image segmentation model to be trained according to the image sample, the boundary image sample and the initial point annotation image sample to obtain an image segmentation model to be corrected; performing feature extraction on the image sample through the image segmentation model to be corrected to obtain a target segmentation image; performing edge extraction on the target segmentation image to acquire first edge information, performing edge detection on each object in the image sample to acquire second edge information, and performing edge detection on each object in the target segmentation image to acquire third edge information; and correcting the image segmentation model to be corrected according to the first edge information, the second edge information and the third edge information to obtain the image segmentation model.
According to an aspect of the embodiments of the present disclosure, there is provided an apparatus for training an image segmentation model, including: a sample acquisition module for acquiring an image sample, and the initial point annotation image sample and the boundary image sample corresponding to the image sample; an iterative training module for iteratively training the image segmentation model to be trained according to the image sample, the boundary image sample and the initial point annotation image sample to obtain an image segmentation model to be corrected; an image segmentation module for performing feature extraction on the image sample through the image segmentation model to be corrected to obtain a target segmentation image; an edge acquisition module for performing edge extraction on the target segmentation image to obtain first edge information, performing edge detection on each object in the image sample to obtain second edge information, and performing edge detection on each object in the target segmentation image to obtain third edge information; and a model correction module for correcting the image segmentation model to be corrected according to the first edge information, the second edge information and the third edge information to obtain the image segmentation model.
According to an aspect of an embodiment of the present disclosure, there is provided an image segmentation system including: a photographing device for capturing an original image containing a plurality of objects to be segmented; an image segmentation device, connected to the photographing device, for receiving the original image, and comprising one or more processors and a storage device, wherein the storage device is configured to store one or more programs that, when executed by the one or more processors, cause the one or more processors to perform the image segmentation method described in the above embodiments on the original image; and a display device, connected to the image segmentation device, for receiving the image segmentation result output by the image segmentation device and displaying it on a display screen of the display device.
According to an aspect of an embodiment of the present disclosure, there is provided a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the methods provided in the various optional implementations of the above aspects.
In the technical solutions provided by some embodiments of the present disclosure, feature extraction is performed on the objects to be segmented in an original image by a trained image segmentation model to obtain a segmented image. The image segmentation model is obtained by subjecting an image segmentation model to be trained to iterative training and post-correction training: the iterative training is performed according to an image sample and the point mark image sample and boundary image sample corresponding to it, and the post-correction training is performed according to the first and third edge information corresponding to a target segmentation image and the second edge information corresponding to the image sample, the target segmentation image being obtained by processing the image sample with the iteratively trained model. On the one hand, iteratively training the model with the image sample and its point mark and boundary image samples gradually fits the extent of the real objects and improves segmentation accuracy; on the other hand, correcting the iteratively trained model according to the first and third edge information of the target segmentation image and the second edge information of the image sample further improves the accuracy and stability of the model, and thus the efficiency and accuracy of image segmentation.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
FIG. 1 shows a schematic diagram of an exemplary system architecture to which aspects of embodiments of the present disclosure may be applied;
FIG. 2 is a schematic diagram showing a flow of obtaining a cell segmentation model in the related art;
FIG. 3 schematically shows a flow diagram of an image segmentation method according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates an interface diagram of a cell segmentation image according to one embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow diagram for iterative training of a segmentation model to be trained, according to one embodiment of the present disclosure;
FIG. 6 schematically illustrates a flowchart for iterative training of an image segmentation model to be trained according to one embodiment of the present disclosure;
FIG. 7 schematically illustrates a schematic diagram of three iterative training of an image segmentation model to be trained, according to one embodiment of the present disclosure;
FIG. 8 schematically illustrates a flowchart of post-correction training of an image segmentation model to be optimized according to one embodiment of the present disclosure;
FIG. 9 schematically shows a flow diagram for obtaining an image segmentation model according to an embodiment of the present disclosure;
FIGS. 10A-10E schematically illustrate interface diagrams of edges during post-correction training, according to one embodiment of the present disclosure;
FIGS. 11A-11F schematically illustrate interface diagrams of three sets of pathology images and corresponding cell segmentation images, according to one embodiment of the present disclosure;
FIG. 12 schematically shows a framework diagram of an image segmentation apparatus according to an embodiment of the present disclosure;
FIG. 13 schematically shows a framework diagram of a training apparatus for an image segmentation model according to one embodiment of the present disclosure;
FIG. 14 is a schematic structural diagram of a computer system suitable for implementing the image segmentation apparatus and the training apparatus for the image segmentation model according to the embodiments of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which technical aspects of embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include a photographing device 101, a network 102, a display device 103, and an image segmentation device 104. The photographing device 101 is a terminal device with an imaging structure, such as a video camera, a still camera or a smart microscope, used to capture an original image containing a plurality of objects to be segmented. The network 102 provides the medium for the communication links between the photographing device 101, the display device 103 and the image segmentation device 104, and may include various types of connections, such as wired or wireless communication links. The display device 103 is a terminal device with a display screen, such as a desktop computer, a notebook, a smartphone or a tablet computer, used to receive the image segmentation result output by the image segmentation device 104 and display it on its display screen. The image segmentation device 104 is connected to the photographing device 101 to receive the original image, and comprises one or more processors and a storage device, where the storage device stores one or more programs that, when executed by the one or more processors, perform image segmentation on the objects to be segmented in the original image.
It should be understood that the numbers of the photographing device 101, the network 102, the display device 103, and the image dividing device 104 in fig. 1 are merely illustrative. There may be any number of photographing devices 101, networks 102, display devices 103, and image splitting devices 104, as the actual need arises. For example, the image segmentation apparatus 104 may be specifically an independent server, or may be a server cluster composed of a plurality of servers.
In an embodiment of the present disclosure, the photographing device 101 sends an original image containing a plurality of objects to be segmented to the image segmentation device 104 through the network 102. After the image segmentation device 104 acquires the original image, it may input the original image into an image segmentation model, which performs feature extraction on the objects in the original image to acquire the segmented image corresponding to the original image. Before the image segmentation model is used for image segmentation, the image segmentation model to be trained needs to be trained. The training process comprises two stages: the first stage is iterative training and the second stage is post-correction training. The iterative training is performed according to an image sample, the point annotation image sample corresponding to the image sample, and a boundary image sample; the post-correction training is performed according to the first and third edge information corresponding to a target segmentation image and the second edge information corresponding to the image sample, the target segmentation image being obtained by processing the image sample with the iteratively trained image segmentation model. Further, in each round of iterative training, the model adopted is the one optimized by the previous round of training, and the point annotation image sample adopted is obtained by distance filtering the predicted segmentation image that the previous round's model produces from the image sample. After these two training stages, a stable image segmentation model is obtained that accurately segments the objects to be segmented in an image. After completing the segmentation of the original image, the image segmentation device 104 may transmit the image segmentation result to the display device 103 for display.
In an embodiment of the present disclosure, the photographing device 101 may further send an original image including a plurality of objects to be segmented to the display device 103 through the network 102, the display device 103 sends all or part of the received original image to the image segmentation device 104, and after the image segmentation device 104 obtains the original image, the original image may be input into an image segmentation model, and feature extraction may be performed on the objects in the original image through the image segmentation model to obtain a segmented image corresponding to the original image.
It should be noted that the image segmentation method provided by the embodiment of the present disclosure is generally executed by a server, and accordingly, the image segmentation apparatus is generally disposed in the server. However, in other embodiments of the present disclosure, the image segmentation scheme provided by the embodiments of the present disclosure may also be performed by a terminal device.
Image segmentation in the related art is described below, taking cell segmentation of pathological images as an example. Pathology is the microscopic study of cell morphology, which can complement molecular information in situ: a tissue sample is removed from the body, placed in a fixative, and made into pathological sections for observation under a microscope. Cells in pathological sections are an important basis for judgment, so many tasks require cell detection and segmentation. Such tasks need annotated cell data, and the annotation should accurately outline the cell boundaries. At present, cell annotation of pathological sections is done mainly by hand; to reduce the annotation workload, usually only a single point is annotated on each cell, which falls far short of pixel-level annotation. The inventors therefore tried to expand the manual point annotations into pixel-level annotations with a clustering method, and then to train a model on the pixel-level annotations so that the model can segment cells. Fig. 2 shows a schematic flowchart of acquiring such a cell segmentation model. As shown in fig. 2: in step S201, a pathological image is acquired. In step S202, manual point annotation is performed on the cells in the pathological image; typically, each cell is marked with one marker using a drawing tool or another annotation tool. In step S203, the cells in the pathological image are clustered according to the manually annotated points: with the annotated points as seed points, a clustering method such as K-Means groups the pixels into cells, and the resulting cell clusters serve as the pixel-level cell annotation. In step S204, boundaries are determined from the manual point annotations by Voronoi partitioning of the annotated points, yielding Voronoi boundaries. In step S205, a cell segmentation model is trained with the point annotations as cell labels and the Voronoi boundaries as background labels; background labeling means that the pixels on the boundaries should become background in the segmented image, so that only the pixels corresponding to cells are foreground.
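The clustering step of this related-art pipeline can be sketched as follows (a minimal sketch, not the patented method: the (row, col, intensity) pixel features, the function name and the seeding strategy are illustrative assumptions; the Voronoi boundary step is sketched later, together with the boundary image samples of the disclosed method):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_point_labels(gray, seeds):
    """Expand manual point annotations into pixel-level labels (related art, step S203).

    gray  : (H, W) float image with values in [0, 1]
    seeds : (K, 2) int array of manually annotated (row, col) cell points
    """
    h, w = gray.shape
    rows, cols = np.mgrid[0:h, 0:w]
    # Describe each pixel by position and intensity (an illustrative feature choice);
    # intensity is scaled so it is commensurate with the pixel coordinates.
    feats = np.stack([rows.ravel(), cols.ravel(), gray.ravel() * h], axis=1)
    init = np.stack([seeds[:, 0], seeds[:, 1], gray[seeds[:, 0], seeds[:, 1]] * h], axis=1)
    # Seed K-Means with the annotated points so each cluster grows around one cell.
    km = KMeans(n_clusters=len(seeds), init=init, n_init=1).fit(feats)
    return km.labels_.reshape(h, w)  # pixel-level cluster map, one label per cell
```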
Although this method yields a cell segmentation model, it uses the cell clusters as cell annotations, so the model depends heavily on the accuracy of the clustering method and is sensitive to its parameters.
In view of the problems in the related art, the embodiments of the present disclosure provide an image segmentation method implemented based on machine learning, a branch of Artificial Intelligence (AI). AI is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technology. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technology, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly comprises computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
Computer Vision (CV) is the science of how to make machines "see": using cameras and computers instead of human eyes to identify, track and measure targets, and performing further image processing so that the result becomes an image more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, and also include common biometric technologies such as face recognition and fingerprint recognition.
Machine Learning (ML) is a multi-disciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory and other disciplines. It studies how a computer can simulate or realize human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to endow computers with intelligence; it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
With the research and progress of artificial intelligence technology, the artificial intelligence technology is developed and applied in a plurality of fields, such as common smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, automatic driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, and the like.
The scheme provided by the embodiment of the disclosure relates to an artificial intelligence image processing technology, and is specifically explained by the following embodiment:
the embodiment of the disclosure firstly provides an image segmentation method, which can be applied to the field of medical image analysis and criminal investigation image analysis, and can also be applied to other fields requiring retrieval and segmentation of images with complex details and containing a plurality of objects to be segmented, and the implementation details of the technical scheme of the embodiment of the disclosure are elaborated in detail by taking cell segmentation in the field of medical image analysis as an example as follows:
Fig. 3 schematically shows a flowchart of an image segmentation method according to an embodiment of the present disclosure. The method may be performed by a server, for example the image segmentation device 104 shown in fig. 1. Referring to fig. 3, the image segmentation method includes at least steps S310 to S320, described in detail as follows:
in step S310, an original image is acquired, the original image including a plurality of objects to be segmented.
In one embodiment of the present disclosure, an original image may be acquired by the photographing apparatus 101, and the original image may be an image including a plurality of cells acquired by photographing a pathological section made of a tissue sample, where the included cells are objects to be segmented in the original image. Accordingly, the photographing device 101 may be an intelligent microscope for observing and photographing a pathological section to obtain an image containing cells, and the intelligent microscope is integrated with a real-time photographing device capable of photographing an enlarged pathological section image in the microscope in real time to obtain an original image. In addition, the photographing device 101 may also be a terminal system composed of a microscope and a photographing device, and when the eyepiece and the objective lens of the microscope are adjusted to obtain a clear slice image, the photographing device photographs the slice image in the eyepiece to obtain an original image.
In step S320, inputting the original image into an image segmentation model, and performing feature extraction on each object through the image segmentation model to obtain a segmented image; the image segmentation model is obtained by performing iterative training and post-correction training on an image segmentation model to be trained, wherein the iterative training is performed according to an image sample, a point labeling image sample corresponding to the image sample and a boundary image sample, the post-correction training is performed according to first edge information and third edge information corresponding to a target segmentation image, and second edge information corresponding to the image sample, and the target segmentation image is an image obtained by processing the image sample through the image segmentation model after the iterative training.
In an embodiment of the present disclosure, after the original image is obtained, it may be input into the trained image segmentation model, which performs feature extraction on each object in the original image to obtain the segmented image corresponding to the original image. The segmented image consists of foreground and background pixels: pixels belonging to an object are foreground and have a high gray value, e.g. white (gray value 255), while pixels not belonging to an object are background and have a low gray value, e.g. black (gray value 0), as shown in fig. 4. The image segmentation model in the embodiments of the present disclosure may be any machine learning model usable for image segmentation, such as a fully convolutional network, LinkNet or DenseNet, which the embodiments of the present disclosure do not specifically limit.
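As a concrete illustration of this step, inference could look as follows (a minimal sketch assuming a PyTorch model `f` that outputs a single-channel foreground map; the 0.5 binarization threshold is an assumed choice, not specified by the disclosure):

```python
import torch

@torch.no_grad()
def segment(f, original_image):
    """original_image: (H, W, 3) uint8 array -> binary segmented image."""
    x = torch.from_numpy(original_image).float().permute(2, 0, 1) / 255.0
    pred = f(x.unsqueeze(0))[0, 0]  # (H, W) foreground map
    # Foreground pixels (objects) -> gray value 255, background -> 0.
    return (pred > 0.5).to(torch.uint8).numpy() * 255
```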
In one embodiment of the present disclosure, in order to improve the accuracy of segmenting an image, before performing image segmentation using an image segmentation model, the image segmentation model to be trained needs to be trained to obtain a stable image segmentation model. Next, how to train the image segmentation model to be trained will be described in detail.
In an embodiment of the present disclosure, when training an image segmentation model to be trained, the training may be implemented by two training stages, where the first training stage is iterative training, and the second training stage is post-correction training, where the iterative training is performed according to an image sample, a point label image sample corresponding to the image sample, and a boundary image sample, the post-correction training is performed according to first edge information and third edge information corresponding to a target segmentation image, and second edge information corresponding to the image sample, and the target segmentation image is an image obtained by processing the image sample by the image segmentation model after the iterative training.
Fig. 5 is a schematic diagram illustrating a process of iteratively training a segmentation model to be trained, and as shown in fig. 5, the process of iteratively training includes steps S501 to S502, specifically:
in step S501, an image sample is obtained, and an initial point corresponding to the image sample is labeled with the image sample and a boundary image sample.
In an embodiment of the present disclosure, the image sample may specifically be a pathology image sample, which may be obtained by applying data enhancement to the image data in the public MoNuSeg data set; model training is then performed on these pathology image samples. Data enhancement, specifically rotating the images, perturbing their color and so on, diversifies the image samples, so that the image segmentation model can effectively segment different pathological images into accurate segmented images. It is noted that the image samples may also be obtained from other public data sets, or by the user collecting images and making samples.
In an embodiment of the present disclosure, after a pathological image sample is obtained, manual point annotation may be performed on its cells to obtain the initial point annotation image sample. Specifically, the positions of the cells may be marked with markers of various shapes, e.g. dot-shaped, linear, triangular, circular or character markers, and a marker may be placed anywhere inside a cell, as long as the cell's position is identified. Usually, to save manual labeling cost, a single point at the center of each cell is annotated as the annotation information. Further, after the initial point annotation image sample is obtained, the boundary image sample corresponding to the image sample may be determined from its point annotation information: specifically, the Tyson polygon (Voronoi cell) corresponding to each piece of point annotation information in the initial point annotation image sample is determined, and the boundary image sample is determined from the Tyson polygons of all the point annotation information, as sketched below.
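One way to realize the Tyson-polygon boundary image sample is sketched here (a sketch under assumptions, not the patent's exact procedure: pixels almost equidistant from their two nearest annotated points lie on Voronoi ridges, and the `tol` parameter controlling boundary thickness is illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_boundary_sample(shape, points, tol=1.0):
    """Boundary image sample from point annotations: 1 on Voronoi ridges, 0 elsewhere.

    shape  : (H, W) of the image sample
    points : (K, 2) array of annotated (row, col) cell points
    """
    h, w = shape
    rows, cols = np.mgrid[0:h, 0:w]
    pix = np.stack([rows.ravel(), cols.ravel()], axis=1)
    # For every pixel, distances to its two nearest annotated points.
    d, _ = cKDTree(points).query(pix, k=2)
    # Near-equal distances mean the pixel sits on a Tyson polygon edge.
    ridge = (d[:, 1] - d[:, 0]) < tol
    return ridge.reshape(h, w).astype(np.uint8)
```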
In step S502, an image segmentation model to be trained is iteratively trained according to the image sample, the initial point annotation image sample, and the boundary image sample, so as to obtain an image segmentation model to be corrected.
In an embodiment of the present disclosure, during iterative training the point annotation image sample used in the current round is obtained by distance filtering the output image that the model trained in the previous round produces from the image sample, and the image segmentation model used in the current round is the parameter-optimized model from the previous round. Fig. 6 is a schematic flowchart of iteratively training the image segmentation model to be trained. As shown in fig. 6: in step S601, the image sample is input into the Nth-round image segmentation model to be trained for feature extraction to obtain the Nth-round output image, N being a positive integer. In step S602, a point loss function is determined from the Nth-round output image and the Nth-round point annotation image sample, and a boundary loss function is determined from the Nth-round output image and the boundary image sample; the Nth-round point annotation image sample is obtained by distance filtering the output image produced from the image sample by the (N-1)th-round model after training, and when N=1 the first-round point annotation image sample is obtained by distance filtering the initial point annotation image sample. In step S603, the parameters of the image segmentation model to be trained are optimized according to the point loss function and the boundary loss function, and the parameter-optimized model is used as the (N+1)th-round image segmentation model to be trained. In step S604, distance filtering is performed on the Nth-round output image to obtain the (N+1)th-round point annotation image sample. In step S605, the (N+1)th-round image segmentation model to be trained is trained according to the image sample, the boundary image sample and the (N+1)th-round point annotation image sample. In step S606, steps S601-S605 are repeated until a preset number of training rounds is completed, yielding the image segmentation model to be corrected.
The iterative training process is described in detail with the example of three training rounds. Fig. 7 shows a schematic diagram of three rounds of iterative training of the image segmentation model to be trained. As shown in fig. 7, the training samples of every round include the pathology image sample, the boundary image sample and a point annotation image sample, where the point annotation image sample differs per round: in the first round it is the distance-filtered initial point annotation image sample; in the second round it is the distance-filtered first predicted segmentation image, which is produced from the pathology image sample by the model after the first round of training; in the third round it is the distance-filtered second predicted segmentation image, produced by the model after the second round of training. In every round, the point loss function and the boundary loss function are determined from the model's output image, the point annotation image sample and the boundary image sample, and the model parameters are updated by back-propagating these losses.
It should be noted that, when iteratively training the image segmentation model to be trained, the initial parameters of the first-round model may be set randomly, or may be determined by pre-training on a data set, for example the ImageNet data set. Typical training parameters may be: input image size 512 x 512 pixels, batch size 8, learning rate 0.0001, and 200 training epochs; other training parameters may of course be set according to actual needs, which the embodiments of the present disclosure do not specifically limit.
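Putting steps S601-S606 together, the iterative stage can be sketched roughly as follows (a sketch under assumptions: a PyTorch model, the Adam optimizer and the hyperparameters quoted above, with `point_loss`, `boundary_loss` and `distance_filter` as sketched alongside formulas (1)-(3) below):

```python
import torch

def iterative_stage(model, I, D0, V, rounds=3, epochs=200, lr=1e-4):
    """I: image samples, D0: distance-filtered initial point maps, V: boundary samples."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    D = D0                             # round-1 point annotation image samples
    for n in range(rounds):
        for _ in range(epochs):        # train the Nth-round model
            pred = model(I)
            loss = point_loss(pred, D) + boundary_loss(pred, V)
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():          # the Nth-round output image, distance filtered,
            D = distance_filter(model(I))  # becomes the next round's point map
            # (a tensor variant of the distance filter sketched with formula (3))
    return model                       # the image segmentation model to be corrected
```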
The point loss function is determined from the Nth-round output image and the Nth-round point annotation image sample as follows: first, the image information difference between the Nth-round output image and the Nth-round point annotation image sample is obtained; then, linear correction is applied to the Nth-round point annotation image sample to obtain the point annotation correction term; finally, the point loss function is determined from the point annotation correction term and the image information difference. The point loss function is given by formula (1):
$$\mathrm{Loss}_{\mathrm{point}}=\left\|\mathrm{Relu}(D)\odot\bigl(f(I)-D\bigr)\right\|_{F}^{2}\tag{1}$$

where $\mathrm{Loss}_{\mathrm{point}}$ is the point loss function, $D$ is the Nth-round point annotation image sample, $\mathrm{Relu}(D)$ is the point annotation correction term, $I$ is the pathological image sample, $f(I)$ is the Nth-round output image, $\odot$ is the element-wise product, and $\|\cdot\|_{F}^{2}$ is the squared F norm.
The boundary loss function is determined from the Nth-round output image and the boundary image sample as follows: first, linear correction is applied to the boundary image sample to obtain the boundary correction term; then, the boundary loss function is determined from the Nth-round output image and the boundary correction term. The boundary loss function is given by formula (2):
$$\mathrm{Loss}_{v}=\left\|\mathrm{Relu}(V)\odot f(I)\right\|_{F}^{2}\tag{2}$$

where $\mathrm{Loss}_{v}$ is the boundary loss function, $V$ is the boundary image sample, $\mathrm{Relu}(V)$ is the boundary correction term, $I$ is the pathological image sample, $f(I)$ is the Nth-round output image, and $\|\cdot\|_{F}^{2}$ is the squared F norm.
As expressions (1) and (2) show, Relu(D) and Relu(V) restrict the losses to the point annotation region and the boundary region respectively, and the objective of model training is to minimize Loss_point and Loss_v. Training the model with expression (1) therefore drives the point annotation information in the Nth-round output image to be close to or the same as that in the Nth-round point annotation image sample, while training the model with expression (2) drives the boundaries in the Nth-round output image toward background, so that the trained image segmentation model can accurately process an input pathological image into a segmented image in which only the cells are foreground.
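In PyTorch, formulas (1) and (2) reduce to a few lines (a sketch; the sum-of-squares reduction implements the squared F norm, and masking with Relu follows the correction terms above):

```python
import torch
import torch.nn.functional as F

def point_loss(pred, D):
    # Relu(D) masks the loss to the point annotation region; inside it,
    # the output f(I) is pulled toward the point annotation map D (formula (1)).
    return (F.relu(D) * (pred - D)).pow(2).sum()

def boundary_loss(pred, V):
    # Relu(V) masks the loss to the Voronoi boundary pixels; inside them,
    # the output is pushed toward zero, i.e. background (formula (2)).
    return (F.relu(V) * pred).pow(2).sum()
```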
In an embodiment of the present disclosure, obtaining the (N+1)th-round point annotation image sample requires distance filtering the Nth-round output image, and the first-round point annotation image sample is obtained by distance filtering the initial point annotation image sample. How the Nth-round output image or the initial point annotation image sample is distance filtered is described in detail next.
Both the output image and the initial point annotation image sample contain foreground pixels corresponding to the objects and background pixels; that is, the objects in the image are displayed as foreground, and everything other than the objects to be segmented is background. Because the initial point annotation image sample comes from manual point annotation, each object's position is represented only by a point or another simple marker, which does not cover all pixels of the object. To achieve pixel-level annotation, the region around each annotated point in the point annotation image can be gradually expanded outward, with the pixels belonging to the object displayed as foreground and the pixels not belonging to the object as background. Whether a pixel adjacent to an annotated point belongs to the object's region is judged from the distance between the two pixels. Specifically, taking any foreground pixel in the Nth-round output image or the initial point annotation image sample as the target pixel, the distance between the target pixel and each background pixel in a preset area is weighted by a preset coefficient; subtracting each weighted distance from one gives the relative distance between each background pixel and the target pixel; finally, the background pixels whose relative distance is greater than zero are reassigned as foreground pixels, and are given gray values lower than that of the target pixel according to their relative distances.
The preset coefficient determines the size of each outward expansion; for example, with the coefficient set to 0.1, each expansion reaches about 10 pixels outward. The coefficient may also be set to other values between 0 and 1 according to the size of the objects, which the embodiments of the present disclosure do not specifically limit. The relative distance represents how similar a background pixel is to the target pixel, or the confidence with which the background pixel can be treated as a foreground pixel: the larger the distance between the background pixel and the target pixel, the smaller the relative distance and the smaller the probability of that background pixel being foreground, and when the relative distance is 0 the background pixel can only serve as background; conversely, the smaller the distance, the larger the relative distance and the higher the probability of the background pixel serving as a foreground pixel. The distance filter is given by formula (3):
$$d_{i,j}=1-\alpha\sqrt{(i-k)^{2}+(j-m)^{2}}\tag{3}$$

where $d_{i,j}$ is the relative distance, $\alpha$ is the preset coefficient, $(i,j)$ is the position of a background pixel within the preset area, and $(k,m)$ is the position of the target pixel.
Further, since the pixel point corresponding to the center position of the object is the point with the highest confidence as the foreground pixel point, the gray value of the pixel point should be the largest, for example, 255, and along with the outward diffusion, the confidence that the pixel point is taken as the foreground pixel point gradually decreases, and therefore the gray value should also gradually decrease and be smaller than the gray value of the pixel point corresponding to the center position. When the gray value is set according to the relative distance, the maximum gray value may be multiplied by the relative distance to determine the gray value, for example, when the relative distance between the background pixel and the target pixel is 0.8, the gray value may be set to 0.8 × 255=204, when the relative distance between the background pixel and the target pixel is 0.6, the gray value may be set to 0.6 × 255=153, and when the obtained result includes a decimal part, the final gray value may be determined in an upward rounding manner.
Distance filtering the initial point annotation image sample and the Nth-round output images gradually enlarges the pixel coverage of each object; performing multiple rounds of iterative training with the distance-filtered point annotation image samples allows the extent of each object to be segmented in the original image to be determined accurately, with the corresponding pixels displayed as foreground.
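Formula (3) can be applied to all foreground pixels at once with a Euclidean distance transform (a sketch under the assumption that each background pixel's relative distance is governed by its nearest foreground pixel; the upward rounding of gray values follows the description above):

```python
import numpy as np
from scipy import ndimage

def distance_filter(mask, alpha=0.1):
    """mask: (H, W) array, >0 at foreground (annotated/predicted) pixels.

    Returns a gray-level point annotation map: 255 at foreground pixels,
    decaying with distance, and 0 where the relative distance d <= 0.
    """
    # Distance of every background pixel to its nearest foreground pixel.
    dist = ndimage.distance_transform_edt(mask <= 0)
    d = np.clip(1.0 - alpha * dist, 0.0, 1.0)   # relative distance, formula (3)
    return np.ceil(255.0 * d).astype(np.uint8)  # e.g. d=0.8 -> 204, d=0.6 -> 153
```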
After the image segmentation model to be trained has gone through the above iterative training, the image segmentation model to be corrected is obtained. To make the edges in the segmented image closer to the real edges of the objects, the embodiments of the present disclosure further correct this model. How the image segmentation model to be corrected is corrected is described in detail next.
Fig. 8 shows a schematic flowchart of the post-correction training of the image segmentation model to be corrected. As shown in fig. 8, the post-correction training includes at least steps S801 to S803, specifically:
In step S801, the image sample is input into the image segmentation model to be corrected for feature extraction to obtain a target segmentation image.
In an embodiment of the present disclosure, an object of the training of the image segmentation model is to enable an edge of an object in a segmented image output by the model to be close to or the same as an edge of an object in an input image, so that accuracy of image segmentation can be improved.
To obtain the edge information of the objects in the predicted segmentation image that the image segmentation model to be corrected produces from an image sample, the image sample is input into the model, which performs feature extraction on the objects in the image sample to obtain the target segmentation image, i.e. the predicted segmentation image. Taking a pathological image sample as an example, processing it with the image segmentation model to be corrected yields a cell segmentation image, i.e. the target segmentation image.
In step S802, edge extraction is performed on the target segmented image to obtain first edge information, edge detection is performed on each object in the image sample to obtain second edge information, and edge detection is performed on each object in the target segmented image to obtain third edge information.
In one embodiment of the present disclosure, after the target segmentation image is acquired, the cell edges in the target segmentation image may be extracted to obtain the first edge information. Specifically, a dilation process is performed on each cell in the target segmentation image, an erosion process is performed on each cell in the target segmentation image, and the eroded target segmentation image is subtracted from the dilated target segmentation image to obtain the first edge information.
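A minimal sketch of this dilation-minus-erosion extraction, assuming OpenCV and a binary uint8 segmentation mask (the function name is ours):

```python
import cv2
import numpy as np

def first_edge_info(target_seg, kernel_size=3):
    # Morphological gradient: the dilated mask minus the eroded mask leaves
    # a thin band straddling each cell boundary.
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    dilated = cv2.dilate(target_seg, kernel)
    eroded = cv2.erode(target_seg, kernel)
    return cv2.subtract(dilated, eroded)
```

OpenCV's cv2.morphologyEx(target_seg, cv2.MORPH_GRADIENT, kernel) computes the same band in a single call.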
In an embodiment of the present disclosure, while the first edge information is obtained, edge detection may be performed on the cells in the pathological image sample to obtain the second edge information, and on the cells in the target segmentation image to obtain the third edge information. The algorithms used for these two edge detections may be the same or different; the edge detection algorithm may specifically be a Sobel operator detection algorithm, a Canny operator detection algorithm, or the like.
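For the second and third edge information, a Sobel-based detection might look like the following sketch (a grayscale uint8 input is assumed; the function name is ours):

```python
import cv2
import numpy as np

def sobel_edges(gray):
    # Gradient magnitude from horizontal and vertical Sobel responses,
    # normalized back to an 8-bit edge map.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return (255 * mag / (mag.max() + 1e-8)).astype(np.uint8)
```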
In step S803, the image segmentation model to be corrected is corrected according to the first edge information, the second edge information, and the third edge information to obtain an image segmentation model.
In an embodiment of the present disclosure, after the first edge information, the second edge information, and the third edge information are obtained, an edge loss function may be determined from them, and the parameters of the image segmentation model to be corrected may be adjusted by backpropagation based on the edge loss function to obtain the image segmentation model. Fig. 9 is a schematic flowchart of the process of acquiring the image segmentation model. As shown in fig. 9, in step S901, the first edge information is multiplied by the second edge information to acquire real edge information; in step S902, linear correction is performed on the real edge information to obtain an edge correction term; in step S903, the third edge information is subtracted from the real edge information to obtain an edge information difference; in step S904, an edge loss function is determined according to the edge information difference and the edge correction term, and the image segmentation model to be corrected is corrected based on the edge loss function to obtain the image segmentation model.
The first edge information gives only the approximate position of the cell edges; multiplying the first edge information by the second edge information yields the real cell edge information. Adding the edge correction term means that only edge information is corrected. The real edge information is the edge information that the model is expected to produce when processing a pathological image, and adjusting the model parameters according to the edge loss function makes the edges of the objects in the segmented image output by the model coincide as closely as possible with the expected edges. Reconstructed from the surrounding definitions, the edge loss function of formula (4) combines the edge information difference with the edge correction term:

$$ L_c = \left\| \operatorname{ReLU}(E_r) \odot \bigl( E_r - \operatorname{Sobel}(F(I)) \bigr) \right\|_F^2 \tag{4} $$

where $L_c$ is the edge loss function, $E_r$ is the real edge information, $\operatorname{ReLU}(E_r)$ is the edge correction term, $I$ is the image sample, $F(I)$ is the target segmentation image, $\operatorname{Sobel}(F(I))$ is the third edge information, $\odot$ denotes element-wise multiplication, and $\|\cdot\|_F^2$ is the squared Frobenius norm.
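As a minimal sketch of formula (4) in PyTorch, assuming the three edge maps are float tensors of the same shape (edge_loss is our name, not the disclosure's):

```python
import torch

def edge_loss(first_edge, second_edge, third_edge):
    real_edge = first_edge * second_edge      # S901: real edge information E_r
    correction = torch.relu(real_edge)        # S902: edge correction term ReLU(E_r)
    diff = real_edge - third_edge             # S903: edge information difference
    # S904: squared Frobenius norm of the difference, masked to edge regions
    return (correction * diff).pow(2).sum()
```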
Figs. 10A-10E illustrate interface views of the edges during post-correction training. Fig. 10A shows a pathological image sample; after edge detection, the second edge information corresponding to the pathological image sample is obtained, as shown in fig. 10B. Fig. 10C shows the target segmentation image output after the image segmentation model to be corrected performs feature extraction on the pathological image sample. Dilation and erosion are performed on the target segmentation image, and the eroded target segmentation image is subtracted from the dilated target segmentation image to obtain the first edge information, as shown in fig. 10D. Multiplying the first edge information by the second edge information yields the real edge information, as shown in fig. 10E. Further, an edge loss function can be constructed from the real edge information and the third edge information, and the image segmentation model to be corrected is corrected according to the edge loss function, so as to obtain the image segmentation model.
The image segmentation method in the embodiment of the present disclosure is mainly applied in the field of cell segmentation. Accordingly, the embodiment of the present disclosure further discloses a cell segmentation method, whose specific process is as follows: first, an original pathological image is acquired, the original pathological image containing a plurality of cells to be segmented; the original pathological image is then input into an image segmentation model, and feature extraction is performed on each cell through the image segmentation model to obtain a cell segmentation image. The image segmentation model is obtained by performing iterative training and post-correction training on the image segmentation model to be trained: the iterative training is performed according to the point annotation image samples and boundary image samples corresponding to the pathological image samples, and the post-correction training is performed according to the first edge information and third edge information corresponding to the target segmentation image and the second edge information corresponding to the pathological image sample, where the target segmentation image is the image obtained by processing the pathological image sample with the iteratively trained image segmentation model.
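At inference time the cell segmentation method reduces to a single forward pass. The sketch below assumes a trained PyTorch model mapping a normalized grayscale image to per-pixel foreground probabilities; all names and the 0.5 threshold are our assumptions:

```python
import numpy as np
import torch

def segment_cells(model, pathology_image):
    # pathology_image: HxW uint8 array; returns a binary cell segmentation mask.
    model.eval()
    with torch.no_grad():
        x = torch.from_numpy(pathology_image).float().div(255.0)
        x = x.unsqueeze(0).unsqueeze(0)       # add batch and channel dimensions
        prob = model(x)                       # feature extraction / forward pass
        return (prob.squeeze() > 0.5).cpu().numpy().astype(np.uint8)
```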
Figs. 11A-11F illustrate schematic interface views of three sets of pathological images and the corresponding cell segmentation images. Figs. 11A, 11C, and 11E each show a pathological image in which a plurality of cells are dispersed. The image segmentation model in the embodiment of the present disclosure segments the cells in each pathological image to obtain the corresponding cell segmentation images, shown in figs. 11B, 11D, and 11F respectively. With the image segmentation method, more accurate cell segmentation images are obtained: as can be seen from the figures, the size of each segmented cell is close to or the same as its real size, and the edge morphology of the cells matches the real cell edges more closely.
The image segmentation method in the embodiments of the present disclosure may also be used to segment objects in other types of images, for example, segmenting the components of a mixture in a microscope image containing several mixed powders, or segmenting cells in a plant tissue section.
According to the image segmentation method, an original image containing a plurality of objects to be segmented is processed by the trained image segmentation model to obtain the segmented image corresponding to the original image. When training the image segmentation model to be trained, weakly supervised training is performed based on point annotation and edge detection, and the whole training process is divided into two stages: first, the iterative training stage produces an image segmentation model to be corrected that can already segment images; then, the post-correction training stage corrects the image segmentation model to be corrected to obtain an image segmentation model that segments input images quickly and accurately. This further improves the efficiency and accuracy of image segmentation and lays a foundation for subsequent data analysis and strategy formulation.
Correspondingly, the embodiment of the present disclosure also discloses a training method for the image segmentation model, which specifically includes the following steps. Step S1: acquire an image sample, and the initial point annotation image sample and boundary image sample corresponding to the image sample. Step S2: perform iterative training on the image segmentation model to be trained according to the image sample, the boundary image sample, and the initial point annotation image sample to obtain the image segmentation model to be corrected. Step S3: perform feature extraction on the image sample through the image segmentation model to be corrected to obtain a target segmentation image. Step S4: perform edge extraction on the target segmentation image to obtain first edge information, perform edge detection on each object in the image sample to obtain second edge information, and perform edge detection on each object in the target segmentation image to obtain third edge information. Step S5: correct the image segmentation model to be corrected according to the first edge information, the second edge information, and the third edge information to obtain the image segmentation model.
Further, step S2 may be implemented according to the following process. First, the image sample is input into the Nth round image segmentation model to be trained for feature extraction to obtain the Nth round output image, where N is a positive integer. Next, a point loss function is determined according to the Nth round output image and the Nth round point annotation image sample, and a boundary loss function is determined according to the Nth round output image and the boundary image sample, where the Nth round point annotation image sample is the image obtained by distance filtering the output image produced by the (N-1)th round trained image segmentation model to be trained, and when N = 1, the first round point annotation image sample is the image obtained by distance filtering the initial point annotation image sample. The parameters of the image segmentation model to be trained are then optimized according to the point loss function and the boundary loss function, and the parameter-optimized model is taken as the (N+1)th round image segmentation model to be trained; meanwhile, distance filtering is performed on the Nth round output image to obtain the (N+1)th round point annotation image sample. The (N+1)th round of training is then performed on the image segmentation model to be trained according to the image sample, the boundary image sample, and the (N+1)th round point annotation image sample. Finally, these steps are repeated until a preset number of training rounds is completed, so as to obtain the image segmentation model to be corrected.
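The loop below is a minimal sketch of this process in PyTorch; the helpers distance_filter, point_loss, and boundary_loss are hypothetical stand-ins for the operations described above, not names from the disclosure:

```python
import torch

def iterative_training(model, optimizer, image_sample, boundary_sample,
                       initial_points, num_rounds,
                       distance_filter, point_loss, boundary_loss):
    # Round-1 point annotation sample: distance-filtered initial point annotations.
    point_sample = distance_filter(initial_points)
    for n in range(num_rounds):
        output = model(image_sample)              # Nth round output image
        loss = (point_loss(output, point_sample)
                + boundary_loss(output, boundary_sample))
        optimizer.zero_grad()
        loss.backward()                           # optimize parameters for round N+1
        optimizer.step()
        # (N+1)th round point annotation sample from the Nth round output.
        point_sample = distance_filter(output.detach())
    return model                                  # image segmentation model to be corrected
```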
The training method of the image segmentation model is the same as the model training process involved in the embodiment of the image segmentation method, and is not repeated here.
Embodiments of the apparatus of the present disclosure are described below, which may be used to perform the image segmentation method in the above-described embodiments of the present disclosure. For details not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the image segmentation method described above in the present disclosure.
Fig. 12 schematically shows a block diagram of an image segmentation apparatus according to one embodiment of the present disclosure.
Referring to fig. 12, an image segmentation apparatus 1200 according to an embodiment of the present disclosure includes: an image acquisition module 1201 and an image segmentation module 1202.
The image acquisition module 1201 is configured to acquire an original image, where the original image includes a plurality of objects to be segmented; the image segmentation module 1202 is configured to input the original image into an image segmentation model and perform feature extraction on each object through the image segmentation model to obtain a segmented image; the image segmentation model is obtained by performing iterative training and post-correction training on the image segmentation model to be trained, where the iterative training is performed according to the point annotation image sample and boundary image sample corresponding to the image sample, the post-correction training is performed according to the first edge information and third edge information corresponding to the target segmentation image and the second edge information corresponding to the image sample, and the target segmentation image is an image obtained by processing the image sample through the iteratively trained image segmentation model.
In one embodiment of the present disclosure, the image segmentation apparatus further includes: a sample acquisition module, configured to acquire the image sample, and the initial point annotation image sample and boundary image sample corresponding to the image sample; and an iterative training module, configured to iteratively train the image segmentation model to be trained according to the image sample, the initial point annotation image sample, and the boundary image sample to obtain the image segmentation model to be corrected.
In one embodiment of the present disclosure, the iterative training module comprises: a processing unit, configured to input the image sample into the Nth round image segmentation model to be trained for feature extraction to obtain the Nth round output image, where N is a positive integer; a loss function determining unit, configured to determine a point loss function according to the Nth round output image and the Nth round point annotation image sample, and determine a boundary loss function according to the Nth round output image and the boundary image sample, where the Nth round point annotation image sample is an image obtained by distance filtering the output image produced by the (N-1)th round trained image segmentation model to be trained, and when N = 1, the first round point annotation image sample is an image obtained by distance filtering the initial point annotation image sample; a parameter optimization unit, configured to optimize parameters of the image segmentation model to be trained according to the point loss function and the boundary loss function, and take the parameter-optimized model as the (N+1)th round image segmentation model to be trained; a distance filtering unit, configured to perform distance filtering on the Nth round output image to obtain the (N+1)th round point annotation image sample; and a retraining unit, configured to train the (N+1)th round image segmentation model to be trained according to the image sample, the boundary image sample, and the (N+1)th round point annotation image sample, repeating the above until a preset number of training rounds is completed, so as to obtain the image segmentation model to be corrected.
In one embodiment of the present disclosure, the loss function determination unit is configured to: acquire the image information difference between the Nth round output image and the Nth round point annotation image sample; perform linear correction on the Nth round point annotation image sample to obtain a point annotation correction term; and determine the point loss function according to the point annotation correction term and the image information difference.
In one embodiment of the present disclosure, the loss function determination unit is further configured to: perform linear correction on the boundary image sample to obtain a boundary correction term; and determine the boundary loss function according to the Nth round output image and the boundary correction term.
In one embodiment of the present disclosure, the nth round output image includes foreground pixel points and background pixel points corresponding to the object; the distance filtering unit is configured to: determining the distance between a target pixel point and each background pixel point in a preset area according to a preset coefficient by taking any foreground pixel point in the Nth round of output images as the target pixel point; subtracting the first distance from the second distance to obtain the relative distance between each background pixel point and the target pixel point; and dividing the target background pixel points with the relative distance larger than zero into foreground pixel points, and setting different gray values lower than the gray value of the target pixel points for the target background pixel points according to the relative distance.
In one embodiment of the present disclosure, the image segmentation apparatus further includes: the processing module is used for inputting the image sample into the image segmentation model to be corrected to perform feature extraction so as to obtain a target segmentation image; an edge extraction module, configured to perform edge extraction on the target segmented image to obtain the first edge information, perform edge detection on each object in the image sample to obtain the second edge information, and perform edge detection on each object in the target segmented image to obtain the third edge information; and the correction module is used for correcting the image segmentation model to be corrected according to the first edge information, the second edge information and the third edge information so as to obtain the image segmentation model.
In one embodiment of the present disclosure, the edge extraction module is configured to: perform expansion processing on each object in the target segmentation image, and simultaneously perform erosion processing on each object in the target segmentation image; and subtract the target segmentation image after the erosion processing from the target segmentation image after the expansion processing to acquire the first edge information.
In one embodiment of the disclosure, the modification module is configured to: multiplying the first edge information and the second edge information to obtain real edge information; performing linear correction on the real edge information to obtain an edge correction term; subtracting the third edge information from the real edge information to obtain an edge information difference; and determining an edge loss function according to the edge information difference and the edge correction term, and correcting the image segmentation model to be corrected based on the edge loss function to obtain the image segmentation model.
In one embodiment of the disclosure, the sample acquisition module is configured to: determine the Thiessen polygons corresponding to the point annotation information according to the point annotation information in the initial point annotation image, and determine the boundary image sample according to the Thiessen polygons.
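Since a Thiessen (Voronoi) cell consists of the pixels nearest to one annotated point, the boundary image sample can be rasterized with a nearest-seed labeling, as in the following pure-NumPy sketch (the function name and this rasterization approach are ours):

```python
import numpy as np

def boundary_image_from_points(points, shape):
    # Label each pixel with its nearest annotated point; where the label
    # changes between neighboring pixels lie the Thiessen polygon boundaries.
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    pts = np.asarray(points, dtype=float)       # (x, y) coordinates of point annotations
    d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
    labels = d2.argmin(axis=1).reshape(shape)
    boundary = np.zeros(shape, dtype=bool)
    boundary[:-1, :] |= labels[:-1, :] != labels[1:, :]
    boundary[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    return boundary.astype(np.uint8) * 255
```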
An embodiment of the present disclosure also provides a cell segmentation device, including: a pathological image acquisition module and a cell segmentation module.
The pathological image acquisition module is configured to acquire an original pathological image, where the original pathological image includes a plurality of cells to be segmented; the cell segmentation module is configured to input the original pathological image into an image segmentation model and perform feature extraction on each cell through the image segmentation model to obtain a cell segmentation image; the image segmentation model is obtained by performing iterative training and post-correction training on the image segmentation model to be trained, where the iterative training is performed according to the point annotation image sample and boundary image sample corresponding to the pathological image sample, the post-correction training is performed according to the first edge information and third edge information corresponding to the target segmentation image and the second edge information corresponding to the pathological image sample, and the target segmentation image is an image obtained by processing the pathological image sample through the iteratively trained image segmentation model.
Fig. 13 schematically shows a block diagram of a training apparatus of an image segmentation model according to one embodiment of the present disclosure.
Referring to fig. 13, an apparatus 1300 for training an image segmentation model according to an embodiment of the present disclosure includes: a sample acquisition module 1301, an iterative training module 1302, an image segmentation module 1303, an edge acquisition module 1304, and a model modification module 1305.
The sample acquisition module 1301 is configured to acquire an image sample to be segmented, and the initial point annotation image sample and boundary image sample corresponding to the image sample to be segmented; the iterative training module 1302 is configured to iteratively train the image segmentation model to be trained according to the image sample to be segmented, the boundary image sample, and the initial point annotation image sample to obtain the image segmentation model to be corrected; the image segmentation module 1303 is configured to perform feature extraction on the image sample to be segmented through the image segmentation model to be corrected to obtain a target segmentation image; the edge acquisition module 1304 is configured to perform edge extraction on the target segmentation image to obtain first edge information, perform edge detection on each object in the image sample to be segmented to obtain second edge information, and perform edge detection on each object in the target segmentation image to obtain third edge information; and the model modification module 1305 is configured to correct the image segmentation model to be corrected according to the first edge information, the second edge information, and the third edge information, so as to obtain the image segmentation model.
In one embodiment of the present disclosure, the iterative training module 1302 is configured to: input the image sample to be segmented into the Nth round image segmentation model to be trained for feature extraction to obtain the Nth round output image, where N is a positive integer; determine a point loss function according to the Nth round output image and the Nth round point annotation image sample, and determine a boundary loss function according to the Nth round output image and the boundary image sample, where the Nth round point annotation image sample is an image obtained by distance filtering the output image produced by the (N-1)th round trained image segmentation model to be trained, and when N = 1, the first round point annotation image sample is an image obtained by distance filtering the initial point annotation image sample; optimize parameters of the image segmentation model to be trained according to the point loss function and the boundary loss function, and take the parameter-optimized model as the (N+1)th round image segmentation model to be trained; perform distance filtering on the Nth round output image to obtain the (N+1)th round point annotation image sample; perform the (N+1)th round of training on the image segmentation model to be trained according to the image sample to be segmented, the boundary image sample, and the (N+1)th round point annotation image sample; and repeat the above until a preset number of training rounds is completed, to obtain the image segmentation model to be corrected.
Fig. 14 shows a schematic structural diagram of a computer system suitable for implementing the image segmentation apparatus 104 according to the embodiment of the present disclosure.
It should be noted that the computer system 1400 of the image segmentation apparatus 104 shown in fig. 14 is only an example, and should not bring any limitation to the functions and the scope of the embodiments of the present disclosure.
As shown in fig. 14, a computer system 1400 includes a Central Processing Unit (CPU) 1401, which can execute various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 1402 or a program loaded from a storage portion 1408 into a Random Access Memory (RAM) 1403, implementing the image segmentation method described in the above embodiments. In the RAM 1403, various programs and data necessary for system operation are also stored. The CPU 1401, ROM 1402, and RAM 1403 are connected to each other via a bus 1404. An Input/Output (I/O) interface 1405 is also connected to the bus 1404.
The following components are connected to the I/O interface 1405: an input portion 1406 including a keyboard, a mouse, and the like; an output portion 1407 including a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD) display, a speaker, and the like; a storage portion 1408 including a hard disk and the like; and a communication section 1409 including a network interface card such as a LAN (Local Area Network) card, a modem, and the like. The communication section 1409 performs communication processing via a network such as the Internet. A drive 1410 is also connected to the I/O interface 1405 as needed. A removable medium 1411, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1410 as necessary, so that a computer program read therefrom is installed into the storage portion 1408 as needed.
In particular, the processes described above with reference to the flowcharts may be implemented as computer software programs according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program containing program code for performing the methods illustrated in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 1409 and/or installed from the removable medium 1411. When executed by the Central Processing Unit (CPU) 1401, the computer program performs various functions defined in the system of the present disclosure.
It should be noted that the computer readable medium shown in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
As another aspect, the present disclosure also provides a computer-readable medium that may be contained in the image processing apparatus described in the above-described embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, and may also be implemented by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. An image segmentation method, comprising:
acquiring an original image, wherein the original image comprises a plurality of objects to be segmented;
inputting the original image into an image segmentation model, and performing feature extraction on each object to be segmented through the image segmentation model to obtain a segmented image;
the image segmentation model is obtained by performing iterative training and post-correction training on an image segmentation model to be trained, wherein the iterative training is performed according to an image sample, a point mark image sample corresponding to the image sample and a boundary image sample to obtain the image segmentation model to be corrected, the post-correction training is performed according to first edge information and third edge information corresponding to a target segmentation image and second edge information corresponding to the image sample to obtain the image segmentation model, and the target segmentation image is an image obtained by processing the image sample by the image segmentation model to be corrected; the first edge information is obtained by performing edge extraction on the target segmentation image, the second edge information is obtained by performing edge detection on an object to be segmented in the image sample, and the third edge information is obtained by performing edge detection on the object to be segmented in the target segmentation image.
2. The image segmentation method according to claim 1, wherein the iteratively training the image segmentation model to be trained comprises:
acquiring the image sample, and the initial point annotation image sample and the boundary image sample corresponding to the image sample;
and performing iterative training on the image segmentation model to be trained according to the image sample, the initial point annotation image sample, and the boundary image sample to obtain the image segmentation model to be corrected.
3. The image segmentation method according to claim 2, wherein iteratively training an image segmentation model to be trained according to the image sample, the initial point labeling image sample, and the boundary image sample to obtain the image segmentation model to be corrected comprises:
inputting the image sample into an Nth round image segmentation model to be trained for feature extraction to obtain an Nth round output image, wherein N is a positive integer;
determining a point loss function according to the Nth round output image and the Nth round point annotation image sample, and determining a boundary loss function according to the Nth round output image and the boundary image sample, wherein the Nth round point annotation image sample is an image obtained by distance filtering the output image obtained by processing the image sample with the (N-1)th round trained image segmentation model to be trained, and when N = 1, the first round point annotation image sample is an image obtained by distance filtering the initial point annotation image sample;
optimizing parameters of the image segmentation model to be trained according to the point loss function and the boundary loss function, and taking the image segmentation model to be trained after parameter optimization as the (N+1)th round image segmentation model to be trained;
performing distance filtering on the Nth round output image to obtain the (N+1)th round point annotation image sample;
training the (N+1)th round image segmentation model to be trained according to the image sample, the boundary image sample, and the (N+1)th round point annotation image sample;
repeating the above steps until a preset number of training rounds is completed to obtain the image segmentation model to be corrected;
the distance filtering is to determine the relative distance between a target pixel point and each background pixel point according to the distance between the target pixel point and different background pixel points in a preset area and one, and to use the target background pixel point with the relative distance larger than zero as a foreground pixel point, wherein the target pixel point is a foreground pixel point in the Nth round of output images.
4. The method of image segmentation of claim 3 wherein determining a point loss function from the Nth round output images and the Nth round point labeling image samples comprises:
acquiring an image information difference between the Nth round output image and the Nth round point annotation image sample;
performing linear correction on the Nth round of point annotation image samples to obtain point annotation correction terms;
and determining the point loss function according to the point mark correction term and the image information difference.
5. The image segmentation method according to claim 3, wherein the determining a boundary loss function from the Nth round output image and the boundary image samples comprises:
performing linear correction on the boundary image sample to obtain a boundary correction term;
and determining the boundary loss function according to the Nth round output image and the boundary correction term.
6. The image segmentation method according to claim 3, wherein the Nth output image includes foreground pixels and background pixels corresponding to the object;
the distance filtering the Nth round output image comprises:
taking any foreground pixel point in the Nth round of output images as a target pixel point, and determining the distance between the target pixel point and each background pixel point in a preset area according to a preset coefficient;
subtracting the first distance from the second distance to obtain the relative distance between each background pixel point and the target pixel point;
and dividing the target background pixel points with the relative distance larger than zero into foreground pixel points, and setting different gray values lower than the gray value of the target pixel points for the target background pixel points according to the relative distance.
7. The image segmentation method according to claim 1, wherein the first edge information is obtained by performing edge extraction on the target segmentation image, and includes:
performing expansion processing on each object in the target segmentation image, and simultaneously performing erosion processing on each object in the target segmentation image;
and subtracting the target segmentation image after the erosion processing from the target segmentation image after the expansion processing to acquire the first edge information.
8. The image segmentation method according to claim 1 or 7, wherein the post-correction training is performed according to first edge information and third edge information corresponding to a target segmentation image, and second edge information corresponding to the image sample to obtain the image segmentation model, and comprises:
multiplying the first edge information and the second edge information to obtain real edge information;
performing linear correction on the real edge information to obtain an edge correction term;
subtracting the third edge information from the real edge information to obtain an edge information difference;
and determining an edge loss function according to the edge information difference and the edge correction term, and correcting the image segmentation model to be corrected based on the edge loss function to obtain the image segmentation model.
9. A method of cell segmentation, comprising:
acquiring an original pathology image, wherein the original pathology image comprises a plurality of cells to be segmented;
inputting the original pathological image into an image segmentation model, and performing feature extraction on each cell through the image segmentation model to obtain a cell segmentation image;
the image segmentation model is obtained by performing iterative training and post-correction training on an image segmentation model to be trained, wherein the iterative training is performed according to a point label image sample and a boundary image sample corresponding to a pathological image sample, the post-correction training is performed according to first edge information and third edge information corresponding to a target segmentation image and second edge information corresponding to the pathological image sample, and the target segmentation image is an image obtained by processing the pathological image sample by the image segmentation model after the iterative training; the first edge information is obtained by performing edge extraction on the target segmentation image, the second edge information is obtained by performing edge detection on cells in the pathological image sample, and the third edge information is obtained by performing edge detection on cells in the target segmentation image.
10. A training method for an image segmentation model, comprising the following steps:
acquiring an image sample, and the initial point annotation image sample and the boundary image sample corresponding to the image sample;
performing iterative training on the image segmentation model to be trained according to the image sample, the boundary image sample, and the initial point annotation image sample to obtain the image segmentation model to be corrected;
performing feature extraction on the image sample through the image segmentation model to be corrected to obtain a target segmentation image;
performing edge extraction on the target segmentation image to obtain first edge information, performing edge detection on each object in the image sample to obtain second edge information, and performing edge detection on each object in the target segmentation image to obtain third edge information;
and correcting the image segmentation model to be corrected according to the first edge information, the second edge information and the third edge information to obtain the image segmentation model.
11. The training method according to claim 10, wherein iteratively training an image segmentation model to be trained according to the image sample, the boundary image sample and the initial point labeling image sample to obtain an image segmentation model to be corrected comprises:
inputting the image sample into an Nth round image segmentation model to be trained for feature extraction to obtain an Nth round output image, wherein N is a positive integer;
determining a point loss function according to the Nth round output image and the Nth round point annotation image sample, and determining a boundary loss function according to the Nth round output image and the boundary image sample, wherein the Nth round point annotation image sample is an image obtained by distance filtering the output image obtained by processing the image sample with the (N-1)th round trained image segmentation model to be trained, and when N = 1, the first round point annotation image sample is an image obtained by distance filtering the initial point annotation image sample;
optimizing parameters of the image segmentation model to be trained according to the point loss function and the boundary loss function, and taking the image segmentation model to be trained after parameter optimization as the (N+1)th round image segmentation model to be trained;
performing distance filtering on the Nth round output image to obtain the (N+1)th round point annotation image sample;
performing the (N+1)th round of training on the image segmentation model to be trained according to the image sample, the boundary image sample, and the (N+1)th round point annotation image sample;
repeating the above steps until a preset number of training rounds is completed to obtain the image segmentation model to be corrected;
the distance filtering is to determine the relative distance between a target pixel point and each background pixel point according to the distance between the target pixel point and different background pixel points in a preset area and a distance value, and to take the target background pixel point with the relative distance larger than zero as a foreground pixel point, wherein the target pixel point is the foreground pixel point in the Nth round of output images.
12. An image segmentation apparatus, comprising:
the image acquisition module is used for acquiring an original image, and the original image comprises a plurality of objects to be segmented;
the image segmentation module is used for inputting the original image into an image segmentation model and extracting the characteristics of each object to be segmented through the image segmentation model so as to obtain a segmented image;
the image segmentation model is obtained by performing iterative training and post-correction training on an image segmentation model to be trained, wherein the iterative training is performed according to a point mark image sample and a boundary image sample corresponding to the image sample to obtain the image segmentation model to be corrected, the post-correction training is performed according to first edge information and third edge information corresponding to a target segmentation image and second edge information corresponding to the image sample to obtain the image segmentation model, and the target segmentation image is an image obtained by processing the image sample by the image segmentation model to be corrected; the first edge information is obtained by performing edge extraction on the target segmentation image, the second edge information is obtained by performing edge detection on an object to be segmented in the image sample, and the third edge information is obtained by performing edge detection on the object to be segmented in the target segmentation image.
13. An apparatus for training an image segmentation model, comprising:
the sample acquisition module is used for acquiring an image sample, and the initial point annotation image sample and the boundary image sample corresponding to the image sample;
the iterative training module is used for iteratively training the image segmentation model to be trained according to the image sample, the boundary image sample, and the initial point annotation image sample so as to obtain the image segmentation model to be corrected;
the image segmentation module is used for extracting the characteristics of the image sample through the image segmentation model to be corrected so as to obtain a target segmentation image;
an edge obtaining module, configured to perform edge extraction on the target segmented image to obtain first edge information, perform edge detection on each object in the image sample to obtain second edge information, and perform edge detection on each object in the target segmented image to obtain third edge information;
and the model correction module is used for correcting the image segmentation model to be corrected according to the first edge information, the second edge information and the third edge information so as to obtain the image segmentation model.
14. An image segmentation system, comprising:
the shooting device is used for shooting an original image containing a plurality of objects to be segmented;
image segmentation means, connected to the capturing means, for receiving the original image, and comprising one or more processors and storage means, wherein the storage means is configured to store one or more programs that, when executed by the one or more processors, cause the one or more processors to perform the image segmentation method of any one of claims 1 to 8 on the original image;
and the display device is connected with the image segmentation device and used for receiving the image segmentation result output by the image segmentation device and displaying the image segmentation result on a display screen of the display device.
CN202010652532.9A 2020-07-08 2020-07-08 Image segmentation method, device and system and cell segmentation method Active CN113706562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010652532.9A CN113706562B (en) 2020-07-08 2020-07-08 Image segmentation method, device and system and cell segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010652532.9A CN113706562B (en) 2020-07-08 2020-07-08 Image segmentation method, device and system and cell segmentation method

Publications (2)

Publication Number Publication Date
CN113706562A CN113706562A (en) 2021-11-26
CN113706562B true CN113706562B (en) 2023-04-07

Family

ID=78646718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010652532.9A Active CN113706562B (en) 2020-07-08 2020-07-08 Image segmentation method, device and system and cell segmentation method

Country Status (1)

Country Link
CN (1) CN113706562B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114913187B (en) * 2022-05-25 2023-04-07 北京百度网讯科技有限公司 Image segmentation method, training method, device, electronic device and storage medium
CN115393846B (en) * 2022-10-28 2023-03-03 成都西交智汇大数据科技有限公司 Blood cell identification method, device, equipment and readable storage medium
CN116580041A (en) * 2023-05-30 2023-08-11 山东第一医科大学附属眼科研究所(山东省眼科研究所、山东第一医科大学附属青岛眼科医院) Corneal endothelial cell boundary segmentation method and device based on voronoi diagram

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108537292A (en) * 2018-04-10 2018-09-14 上海白泽网络科技有限公司 Semantic segmentation network training method, image, semantic dividing method and device
CN108876804A (en) * 2017-10-12 2018-11-23 北京旷视科技有限公司 It scratches as model training and image are scratched as methods, devices and systems and storage medium
CN110443818A (en) * 2019-07-02 2019-11-12 中国科学院计算技术研究所 A kind of Weakly supervised semantic segmentation method and system based on scribble
CN110517278A (en) * 2019-08-07 2019-11-29 北京旷视科技有限公司 Image segmentation and the training method of image segmentation network, device and computer equipment
CN111199550A (en) * 2020-04-09 2020-05-26 腾讯科技(深圳)有限公司 Training method, segmentation method, device and storage medium of image segmentation network
CN111340047A (en) * 2020-02-28 2020-06-26 江苏实达迪美数据处理有限公司 Image semantic segmentation method and system based on multi-scale feature and foreground and background contrast

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9972092B2 (en) * 2016-03-31 2018-05-15 Adobe Systems Incorporated Utilizing deep learning for boundary-aware image segmentation
US10878219B2 (en) * 2016-07-21 2020-12-29 Siemens Healthcare Gmbh Method and system for artificial intelligence based medical image segmentation

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108876804A (en) * 2017-10-12 2018-11-23 北京旷视科技有限公司 It scratches as model training and image are scratched as methods, devices and systems and storage medium
CN108537292A (en) * 2018-04-10 2018-09-14 上海白泽网络科技有限公司 Semantic segmentation network training method, image, semantic dividing method and device
CN110443818A (en) * 2019-07-02 2019-11-12 中国科学院计算技术研究所 A kind of Weakly supervised semantic segmentation method and system based on scribble
CN110517278A (en) * 2019-08-07 2019-11-29 北京旷视科技有限公司 Image segmentation and the training method of image segmentation network, device and computer equipment
CN111340047A (en) * 2020-02-28 2020-06-26 江苏实达迪美数据处理有限公司 Image semantic segmentation method and system based on multi-scale feature and foreground and background contrast
CN111199550A (en) * 2020-04-09 2020-05-26 腾讯科技(深圳)有限公司 Training method, segmentation method, device and storage medium of image segmentation network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Faster R-CNN: towards real-time object detection with region proposal networks; Shaoqing Ren et al.; arXiv; 2016-01-06; pp. 1-14 *
Research on image segmentation transfer method based on perceptual adversarial networks; Li Junyi et al.; Journal of Hefei University of Technology (Natural Science Edition); May 2020; Vol. 43, No. 5; pp. 624-628 *

Also Published As

Publication number Publication date
CN113706562A (en) 2021-11-26

Similar Documents

Publication Publication Date Title
US11861829B2 (en) Deep learning based medical image detection method and related device
EP3961484A1 (en) Medical image segmentation method and device, electronic device and storage medium
EP3989119A1 (en) Detection model training method and apparatus, computer device, and storage medium
WO2022001623A1 (en) Image processing method and apparatus based on artificial intelligence, and device and storage medium
CN113706562B (en) Image segmentation method, device and system and cell segmentation method
WO2021164534A1 (en) Image processing method and apparatus, device, and storage medium
CN110689025B (en) Image recognition method, device and system and endoscope image recognition method and device
CN110853022B (en) Pathological section image processing method, device and system and storage medium
CN110570352B (en) Image labeling method, device and system and cell labeling method
CN112287820A (en) Face detection neural network, face detection neural network training method, face detection method and storage medium
CN111563502A (en) Image text recognition method and device, electronic equipment and computer storage medium
CN110767292A (en) Pathological number identification method, information identification method, device and information identification system
CN114445670B (en) Training method, device and equipment of image processing model and storage medium
CN111932529A (en) Image segmentation method, device and system
CN113822314A (en) Image data processing method, apparatus, device and medium
CN115272306B (en) Solar cell panel grid line enhancement method utilizing gradient operation
CN113781387A (en) Model training method, image processing method, device, equipment and storage medium
CN105528791B (en) A kind of quality evaluation device and its evaluation method towards touch screen hand-drawing image
CN116468895A (en) Similarity matrix guided few-sample semantic segmentation method and system
WO2023220913A1 (en) Cell image processing method, electronic device and storage medium
CN113763315B (en) Slide image information acquisition method, device, equipment and medium
CN113223037B (en) Unsupervised semantic segmentation method and unsupervised semantic segmentation system for large-scale data
CN113706450A (en) Image registration method, device, equipment and readable storage medium
CN114283178A (en) Image registration method and device, computer equipment and storage medium
CN113763313A (en) Text image quality detection method, device, medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20211118

Address after: 518052 Room 201, building A, 1 front Bay Road, Shenzhen Qianhai cooperation zone, Shenzhen, Guangdong

Applicant after: Tencent Medical Health (Shenzhen) Co.,Ltd.

Address before: 518057 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 floors

Applicant before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant