CN118196406A - Method, apparatus and computer readable medium for segmentation processing of images - Google Patents
- Publication number
- CN118196406A (application CN202410163502.XA)
- Authority
- CN
- China
- Prior art keywords
- sperm
- target
- segmentation
- image
- head
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
- G06V10/7635—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks based on graphs, e.g. graph cuts or spectral clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/69—Microscopic objects, e.g. biological cells or cellular parts
- G06V20/698—Matching; Classification
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H10/00—ICT specially adapted for the handling or processing of patient-related medical or healthcare data
- G16H10/60—ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
Abstract
The present application provides a method, an apparatus, and a computer-readable medium for performing segmentation processing on an image. The method according to the application comprises the following steps: acquiring a target image to be processed, wherein the target image contains a plurality of target objects; performing segmentation processing on each target object contained in the target image by using a target segmentation model and a target clustering algorithm; and generating segmentation result information for the target image. By combining a segmentation model with a clustering algorithm, the application performs image segmentation on objects with mutually overlapping elongated structures and achieves an excellent segmentation effect. In addition, when the method of the embodiments of the application is applied to sperm micrographs in a sperm-detection scenario, mutually overlapping sperm tails are accurately identified and segmented; the whole process requires no manual participation, automatic segmentation of sperm is realized, and the efficiency of segmenting sperm in micrographs is greatly improved.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, and a computer readable medium for performing segmentation processing on an image.
Background
In image-segmentation schemes based on the prior art, the segmentation effect is poor in scenes where a plurality of mutually overlapping elongated structures in an image need to be segmented, for example when segmenting sperm photographs, blood-vessel images, or nerve images.
In the sperm-detection scenario, sperm morphology analysis is an important step in the clinical diagnosis of male patients and one of the key indicators for evaluating sperm quality and male fertility. At present this work still depends on andrology specialists spending four to six hours per day on purely visual identification, which consumes valuable physician time and cannot meet the demands of large-scale, high-throughput screening. Moreover, sperm morphology analysis suffers from strong subjectivity and poor inter-laboratory reproducibility of results, and has long been both a focus and a difficulty in the field of reproductive testing. A system that automatically detects sperm abnormalities with the help of artificial-intelligence technology is therefore an urgent need of male clinical diagnosis. If such an automatic detection system could be embedded directly into a microscope, the task could be fully automated, which would be of great significance in advancing the automation and intelligence of the medical field.
However, sperm micrographs are generally characterized by frequent overlap, low distinguishability, numerous interfering color blocks, and the like, so the sperm in a micrograph are usually tangled together, which hinders segmentation by existing image-segmentation models. At the same time, sperm morphology is complex and the cells in an image are densely intertwined, so training a supervised segmentation model through large-scale annotation data becomes extremely difficult.
Image-detection technology is widely applied in the field of medical image segmentation, but most prior-art schemes are supervised models that depend on large amounts of annotation data; their generalization ability is weak, and they are difficult to deploy for the sperm-segmentation task. Another major difficulty in sperm segmentation is that sperm tails overlap very easily, so segmentation models based on the prior art generally perform poorly on overlapping regions with similar color and texture.
Disclosure of Invention
Aspects of the present application provide a method, apparatus, and computer-readable medium for segmentation processing of an image.
In one aspect of the present application, there is provided a method for performing segmentation processing on an image, wherein the method includes:
Acquiring a target image to be processed, wherein the target image comprises a plurality of target objects;
performing a segmentation process on each target object contained in the target image by using a target segmentation model and a target clustering algorithm;
and generating segmentation result information of the target image.
In one aspect of the present application, there is provided an apparatus for performing segmentation processing on an image, wherein the apparatus includes:
means for acquiring a target image to be processed, the target image comprising a plurality of target objects;
means for performing a segmentation process on each target object contained in the target image by using the target segmentation model and a target clustering algorithm;
means for generating segmentation result information of the target image.
In another aspect of the present application, there is provided an electronic apparatus including:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the claimed embodiments.
In another aspect of the application, a computer-readable storage medium having stored thereon computer program instructions executable by a processor to implement a method of an embodiment of the application is provided.
According to the scheme provided by the embodiment of the application, the image segmentation processing is carried out on the objects with the overlapped slender structures in the image by combining the segmentation model and the clustering algorithm, and the excellent image segmentation effect is achieved.
In addition, according to the method provided by the embodiments of the application, the segmentation model and the clustering algorithm are combined to perform image segmentation on sperm micrographs in the sperm-detection scenario, so that mutually overlapping sperm tails are accurately identified and segmented. The whole process requires no manual participation, automatic segmentation of sperm is realized, the efficiency of segmenting sperm in micrographs is greatly improved, and sperm-morphology detection becomes far more convenient. Combined with annotation data from medical professionals, the method can judge whether sperm morphology is normal or abnormal and output corresponding detection results. It thus provides a convenient, efficient workflow and accurate, reliable results for the clinical diagnosis of male sperm-morphology analysis, promises to resolve the current dependence of sperm-morphology analysis on manual work, and can play an important role in male clinical diagnosis.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments or in the description of the prior art are briefly introduced below. The drawings described below are only some embodiments of the present application; other drawings can be obtained from them by a person skilled in the art without inventive effort.
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
Fig. 1 is a flow chart of a method for performing segmentation processing on an image according to an embodiment of the present application;
Fig. 2 is a flow chart of a method for segmentation processing of sperm images according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an apparatus for performing segmentation processing on an image according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an apparatus for segmentation processing of sperm images according to an embodiment of the present application;
fig. 5 shows a schematic structural diagram of an apparatus suitable for implementing the solution in an embodiment of the application.
The same or similar reference numbers in the drawings refer to the same or similar parts.
Detailed Description
For the purpose of making the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
In one exemplary configuration of the application, the terminal and the devices of the service network each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer program instructions, data structures, modules of the program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information that can be accessed by a computing device.
Fig. 1 is a flow chart of a method for performing segmentation processing on an image according to an embodiment of the present application. The method at least comprises step S101, step S102 and step S103.
In a practical scenario, the executing body of the method may be a user device, a device formed by integrating a user device and a network device through a network, or an application running on such a device. User devices include, but are not limited to, terminal devices such as computers, mobile phones, tablet computers, smart watches, and wristbands; network devices include, but are not limited to, a network host, a single network server, a set of multiple network servers, or a cloud-computing-based computer set, and may implement part of the processing functions. Here, the cloud consists of a large number of hosts or web servers based on cloud computing, a kind of distributed computing in which a group of loosely coupled computers forms one virtual computer.
In the sperm-detection scenario, the executing body of the method may be the apparatus on which sperm-morphology detection is performed. The method provided by the embodiments of the application realizes segmentation of sperm in stained photomicrographs, and in particular solves the segmentation of sperm in scenes with heavy overlap, so that sperm-abnormality detection can then be performed on the segmented image.
In addition to the sperm-detection scene described above, the image-segmentation method according to this embodiment is also applicable to similar scenes in which a plurality of mutually overlapping structures in an image need to be segmented. For blood-vessel images, nerve images, street images, and the like, the large number of overlapping elongated structures makes it difficult for prior-art segmentation schemes to achieve a good result, whereas the method according to the embodiments of the application achieves a better image-segmentation effect.
Referring to fig. 1, in step S101, a target image to be processed is acquired, the target image containing a plurality of target objects.
Wherein the target object is an object of an elongated structure.
Optionally, the target image includes a plurality of target objects overlapping each other. For example, a sperm image, a blood vessel image, a nerve image in a detection scene, or a street image in a scene, etc.
According to one embodiment, the method further comprises step S104 before step S102.
In step S104, image preprocessing is performed on the acquired target image.
Wherein the image preprocessing includes at least any one of the following operations:
1) Performing standardization processing on the size of the target image;
2) Performing image optimization processing on the target image; for example, python is used to increase the image saturation, image contrast, and/or image green area vividness of the target image;
3) Performing definition-improvement processing on the target image; for example, using functions from the color, filters, and morphology modules of the skimage library to simulate the Photoshop dehaze operation, thereby increasing the sharpness of the key information in the target image;
4) Converting the background and small particulate impurities in the target image to pure white.
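As a minimal sketch of this preprocessing step (the linear contrast stretch and the white threshold of 200 are illustrative assumptions, not values taken from the application):

```python
import numpy as np

def preprocess(img, white_thresh=200):
    """Sketch of the preprocessing: contrast stretch, then push the
    near-white background and small pale impurities to pure white.
    The threshold value is a hypothetical placeholder."""
    f = img.astype(np.float32)
    lo, hi = f.min(), f.max()
    if hi > lo:  # linear contrast stretch to the full 0-255 range
        f = (f - lo) / (hi - lo) * 255.0
    out = f.clip(0, 255).astype(np.uint8)
    # pixels bright in every channel are treated as background
    background = (out >= white_thresh).all(axis=-1)
    out[background] = 255
    return out
```

Size standardization and saturation adjustment, also listed above, would be separate resize/HSV operations and are omitted here for brevity.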
Continuing with the description of fig. 1, in step S102, each target object included in the target image is identified and segmented by using the target segmentation model and the target clustering algorithm.
Wherein the object segmentation model includes various models that can be used for image segmentation processing.
Optionally, the target segmentation model is the Segment Anything Model (SAM). The target segmentation model may also be any of various SAM-based variant models, such as the SAM-Med2D model or the TinySAM model.
Wherein the target clustering algorithm includes various clustering algorithms that can handle intersecting structures. Optionally, the target clustering algorithm is the multi-manifold spectral clustering algorithm (Spectral Clustering on Multiple Manifolds, SMMC), a clustering algorithm designed for data lying on multiple intersecting manifolds.
Specifically, the step S102 includes steps S1021 to S1026.
In step S1021, the target objects contained in the target image are identified in an iterative manner using the target segmentation model until all the target objects are identified.
In step S1022, masks corresponding to the target objects identified for each round are stored in the target file.
In step S1023, a skeleton extraction process is performed on the mask corresponding to the identified target object using a skeleton extraction (Skeletonize) algorithm.
Skeleton extraction refers to extracting, from a binarized image, the lines connecting all local extremum points, which serve as the skeleton of an object. In step S1023, the skeleton extraction processing yields, for each target object, the lines connecting all of its local extremum points as that object's "skeleton".
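Using the skimage library mentioned in the description, the skeleton extraction of a single mask layer can be sketched as follows (the bar-shaped mask is a toy stand-in for a real tail-mask layer):

```python
import numpy as np
from skimage.morphology import skeletonize

# a thick bar standing in for one binarized tail-mask layer
mask = np.zeros((7, 15), dtype=bool)
mask[2:5, 2:13] = True

# thins the bar down to a one-pixel-wide centre line inside the mask
skeleton = skeletonize(mask)
```

The resulting skeleton is strictly contained in the original mask and has far fewer pixels, which is what makes the later endpoint counting and line clustering tractable.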
In step S1024, based on each layer of mask obtained by the skeleton extraction processing, the number of endpoints is counted and the number of target objects contained in the mask is determined based on a predetermined endpoint matching rule.
In step S1025, the plurality of skeletonized lines obtained by the skeleton extraction process are segmented using a target clustering algorithm.
Specifically, the pixel points of the skeletonized lines obtained by the skeleton extraction processing are sampled and converted into a point-set structure supported by the target clustering algorithm; the target clustering algorithm is then invoked, with suitable parameters and a random seed, to segment the skeletonized lines.
In step S1026, the segmented skeletonized line is used as a reference line, and a corresponding class is assigned to each pixel point of the original mask.
By assigning a category to each pixel point of the original mask, pixel-level classification of the original mask is achieved, thereby splitting apart the individual target objects in the original mask.
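The per-pixel category assignment can be sketched as a nearest-reference-point lookup; the KD-tree realization below is one plausible implementation, not necessarily the one used by the application:

```python
import numpy as np
from scipy.spatial import cKDTree

def assign_pixel_classes(mask_coords, ref_points, ref_labels):
    """Give every mask pixel the class of its nearest point on the
    segmented skeletonized reference lines."""
    tree = cKDTree(ref_points)        # index the reference-line points
    _, nearest = tree.query(mask_coords)
    return ref_labels[nearest]
```

Each mask pixel thereby inherits the cluster label of the reference line it lies closest to, which splits the merged mask into one region per target object.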
Continuing with fig. 1, in step S103, the segmentation result information of the target object corresponding to the target image is generated.
The segmentation result information includes, but is not limited to, the number, contour, and position information of all the target objects obtained by the segmentation processing of the target image.
According to the method of the embodiment of the application, the image segmentation processing is carried out on the objects of the thin and long structures overlapped with each other in the image by combining the segmentation model and the clustering algorithm, and the excellent image segmentation effect is achieved.
FIG. 2 shows a flow chart of a method for segmentation processing of sperm images in accordance with an embodiment of the present application. The method comprises steps S201 to S206.
In step S201, a target sperm image to be processed is acquired.
Wherein the sperm image data comprises a plurality of sperm superimposed upon one another, such as a sperm micrograph in a sperm detection scenario.
Optionally, sperm micrographs are collected and subjected to preliminary screening to exclude azoospermic images and images with severe color cast, and the micrographs that pass the preliminary screening are taken as the target sperm images.
Optionally, before step S202, the method further comprises step 207.
In step S207, image preprocessing is performed on the acquired target sperm image. The image preprocessing method is similar to the aforementioned step S104, and will not be described herein.
In step S202, sperm head data is obtained by performing a first segmentation process on the target sperm image using a target segmentation model.
Wherein the target segmentation model is a SAM model. The object segmentation model is the same as or similar to the object segmentation model described in step 102, and will not be described here.
Specifically, the step S202 includes step S2021 and step S2022.
In step S2021, the sperm heads contained in the target sperm image are identified using the target segmentation model, and head masks corresponding to all the identified sperm heads are obtained.
In step S2022, the effective sperm heads are screened out according to predetermined screening conditions based on the head mask file.
Wherein the screening conditions are used to determine whether the identified sperm head is a valid sperm head.
According to one embodiment, the screening condition is set based on the intersection over union (IoU) of the sperm-head region with the whole mask. Specifically, in step S2022, a color-value range corresponding to the sperm-head stain color is set based on the head mask file; the intersection ratio of the head-colored region in each mask layer to the whole mask is calculated; and the valid sperm heads in the head masks are screened out by comparing this ratio with a predetermined threshold.
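A sketch of this screening condition; the HSV range for the head stain and the 0.6 threshold are hypothetical placeholders, not values from the application:

```python
import numpy as np

def head_color_ratio(mask, hsv, h_range=(125, 155), s_min=50, v_min=50):
    """Fraction of a mask layer covered by the head-stain colour.
    The HSV bounds are illustrative assumptions."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    stained = ((h >= h_range[0]) & (h <= h_range[1])
               & (s >= s_min) & (v >= v_min))
    # intersection of the stained region with the mask, over the mask area
    return np.logical_and(mask, stained).sum() / max(mask.sum(), 1)

def is_valid_head(mask, hsv, iou_thresh=0.6):
    return head_color_ratio(mask, hsv) >= iou_thresh
```

Masks dominated by the stain color pass the check; masks that the model segmented around tails or debris fall below the threshold and are discarded.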
Optionally, the step S202 further includes a step S2023.
In step S2023, a head mask corresponding to the head of the valid sperm is stored.
Continuing with the description of fig. 2, in step S203, a head subtraction process is performed on the target sperm image based on the sperm head data to obtain target tail data for performing the second segmentation process.
According to a first example of the present application, assume that the staining method employed stains the sperm head purple. The sperm photomicrograph that has undergone image preprocessing is subjected to the first segmentation processing using a predetermined SAM model, and the mask files of all segments are obtained. Next, a range of purple HSV values is set for the generated masks, and the intersection ratio of the purple area in each mask layer to the whole mask is calculated. Based on a predetermined threshold, each mask is then judged to be a valid sperm head or not; the valid sperm heads are saved into a head file, and the regions corresponding to the sperm heads are matted out of the original image so that the tails can be segmented.
Optionally, the target sperm image after the head subtraction processing is subjected to impurity removal processing, so that the image data after the impurity removal is used as target tail data for performing the second segmentation processing.
Continuing with the first example, many stain blocks still interfere with model identification after head subtraction, and the following rule is set to remove them: for each mask layer, find the two pixel points a and b within the mask that are farthest apart, compute the distance d between a and b, and let S = d² represent the theoretical maximum area of a region containing both a and b; with s the actual mask area, compute q = s/S. The larger q is, the closer the mask layer is to a clump structure rather than an elongated sperm tail. A suitable threshold on q screens out clumped impurities and tails that do not meet the requirements, leaving a clean tail set for fine tail segmentation.
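The q = s/S rule can be sketched directly; the brute-force pairwise-distance search below is quadratic in the number of mask pixels, which is acceptable for individual mask layers:

```python
import numpy as np
from scipy.spatial.distance import pdist

def clump_score(mask):
    """q = s / S from the text: actual mask area s divided by S = d**2,
    where d is the largest distance between any two mask pixels.
    High q suggests a compact clump; low q an elongated tail."""
    coords = np.argwhere(mask)
    d = pdist(coords).max()   # largest pairwise pixel distance
    return mask.sum() / (d * d)
```

A thin line scores low (area grows linearly with d, so q shrinks as 1/d), while a blob scores near 1, which is exactly the separation the rule exploits.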
Continuing with the description of fig. 2, in step S204, the sperm tail data is obtained by performing a second segmentation process on the target tail data using the target segmentation model and the target clustering algorithm.
Optionally, the target clustering algorithm is SMMC. The target clustering algorithm is the same as or similar to the target clustering algorithm described in step 102, and will not be described here.
Specifically, the step S204 includes steps S2041 to S2046.
In step S2041, the sperm tails contained in the target tail data are iteratively identified using the target segmentation model until all sperm tails are identified.
In step S2042, a tail mask image corresponding to the tail of the sperm identified in each round is stored.
In step S2043, skeleton extraction processing is performed on the tail mask image using a skeleton extraction algorithm.
In step S2044, based on each layer of mask obtained by the skeleton extraction process, the number of endpoints is counted and the number of sperm contained in the mask is determined based on a predetermined endpoint matching rule.
In step S2045, a plurality of skeletonized lines obtained by the skeleton extraction process are segmented using a target clustering algorithm.
Specifically, the pixel points of the skeletonized lines obtained by the skeleton extraction processing are sampled and converted into a point-set structure supported by the target clustering algorithm; the target clustering algorithm is then invoked, with suitable parameters and a random seed, to segment the skeletonized lines.
In step S2046, the segmented skeletonized line is used as a reference line, and a corresponding class is allocated to each pixel point of the original mask, so as to split the tail of a single sperm in the original mask.
Continuing with the first example, based on the clean tail set, all sperm tails in the picture are identified iteratively using the SAM model and saved in mask format; at this stage many tails in the result overlap, and individual sperm tails are not yet separated. In each iteration, the region of the original image corresponding to the masks obtained in that round is erased: the pixel points at those positions are set to the white background color, and residual edge shadows are removed. The next round of SAM model processing is then performed until all sperm tails in the picture have been identified, after which the tail masks identified across all rounds are collected and stored in a tail file for the single-tail segmentation of the next step.
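The iterate-and-erase loop can be sketched generically as below; `segment_once` is a hypothetical callable standing in for one round of SAM inference, and the round limit is an assumption.

```python
import numpy as np

def iterative_segment(image, segment_once, background=255, max_rounds=20):
    """Repeatedly run a segmentation pass and erase what it found, in the
    spirit of the iterative loop described in the text. `segment_once`
    returns a boolean mask of newly found tails, or an all-False mask
    when nothing remains."""
    work = image.copy()
    masks = []
    for _ in range(max_rounds):
        mask = segment_once(work)
        if not mask.any():
            break                   # all tails identified
        masks.append(mask)
        work[mask] = background     # paint found pixels background-white
    return masks
```

The collected `masks` list plays the role of the tail file that feeds the single-tail segmentation step.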
Then, the tail masks are skeletonized using the Skeletonize algorithm. Next, the number of endpoints of each skeletonized mask layer is determined, the masks with more than 2 endpoints (i.e., masks containing more than one tail) are selected, and the number of sperm contained in each such mask is inferred from its endpoint count. The predetermined pairing rule is: 3 or 4 endpoints correspond to 2 tails; 5 or 6 endpoints correspond to 3 tails; and so on. Then, pixel points are sampled from the skeletonized result and converted into a lattice structure supported by the clustering algorithm; with suitable parameters and a random seed set, the clustering algorithm is invoked to segment the skeletonized lines. Finally, using the segmented skeletonized lines as reference lines, a category is assigned to each pixel point of the original mask, achieving pixel-level classification of the original mask, i.e., the splitting of individual sperm tails.
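The pairing rule above reduces to the ceiling of endpoints divided by two. A sketch using skimage follows; the helper names and the endpoint definition (a skeleton pixel with exactly one 8-connected skeleton neighbour) are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def tails_from_endpoints(n_endpoints):
    """Pairing rule from the text: 3 or 4 endpoints -> 2 tails,
    5 or 6 -> 3 tails, i.e. ceil(n_endpoints / 2), at least 1."""
    return max(1, (n_endpoints + 1) // 2)

def count_endpoints(mask):
    """Skeletonize one mask layer and count endpoints: skeleton pixels
    with exactly one 8-connected skeleton neighbour."""
    sk = skeletonize(mask.astype(bool))
    kernel = np.ones((3, 3), int)
    kernel[1, 1] = 0                # ignore the centre pixel itself
    neighbours = convolve(sk.astype(int), kernel, mode="constant")
    return int(((neighbours == 1) & sk).sum())
```

A mask whose endpoint count exceeds 2 would then be routed to the clustering step with `tails_from_endpoints(...)` as the cluster count.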
Continuing with the description of fig. 2, in step S205, the head and tail of each sperm are matched and spliced based on the obtained sperm head data and sperm tail data, thereby obtaining the segmentation result of each sperm.
Specifically, based on the sperm head data obtained by the first segmentation process, the long axis of each sperm head is obtained; then, based on the sperm tail data obtained by the second segmentation process, the front and rear ends of each sperm tail are determined according to a preset front/rear-end determination rule; then, according to a preset geometric rule, the skeletonized straight line at the front end of each sperm tail is matched with the sperm head long axes to obtain matched head-tail pairs; and finally, each matched head and tail is spliced to obtain the segmentation result of each sperm.
Continuing with the first example, the front and rear ends of each sperm tail segmented in the previous step are determined from the color change trend and the mask width, and the front-end skeletonized line is fitted to a straight line. The long axis of each previously segmented sperm head is then found, and the tail's fitted straight line is matched against the head long axes by distance and included angle according to the geometric rule, performing head-tail pairing; the matched heads and tails are then spliced together to obtain the complete segmentation result for each sperm.
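One plausible reading of the distance-and-angle matching is a greedy nearest-valid-head assignment, sketched below. The representation (head centre plus axis angle, tail front point plus line angle) and both thresholds are assumptions for illustration.

```python
import numpy as np

def match_heads_to_tails(heads, tails, max_dist=30.0, max_angle=np.pi / 4):
    """Greedy head-tail pairing sketch. Each head is (centre, axis_angle)
    for its long axis; each tail is (front_point, direction_angle) for
    the straight line fitted to its front-end skeleton."""
    pairs, used = [], set()
    for ti, (front, t_ang) in enumerate(tails):
        best, best_d = None, max_dist
        for hi, (centre, h_ang) in enumerate(heads):
            if hi in used:
                continue
            d = np.hypot(*(np.subtract(front, centre)))
            # compare undirected line angles, folded into [0, pi/2]
            da = abs(t_ang - h_ang) % np.pi
            da = min(da, np.pi - da)
            if d < best_d and da <= max_angle:
                best, best_d = hi, d
        if best is not None:
            used.add(best)
            pairs.append((best, ti))
    return pairs
```

Each returned (head, tail) index pair corresponds to one spliced sperm in the final segmentation result.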
In step S206, segmentation result information of the target sperm image is generated based on the segmentation result of each sperm.
Wherein the segmentation result information includes, but is not limited to, the number of sperm contained in the target sperm image, the outline and size of each sperm, its position information in the target image, and the like.
According to one embodiment, the method further comprises step S208.
In step S208, a classification model for sperm morphology classification is trained based on the segmentation result information of the plurality of sperm images and the corresponding labeling data.
Specifically, data labels for the sperm morphology classification problem are acquired from medical professionals; the classification model is then trained using a deep learning approach such as semi-supervised learning, to classify the morphology of each segmented sperm as qualified or unqualified.
According to one embodiment, the method further comprises step S209.
In step S209, a sperm eligibility analysis is performed on the sperm micrograph of the patient using the trained classification model.
For example, a trained classification model is used to perform an eligibility analysis on each segmented sperm, and the numbers of morphologically normal and morphologically abnormal sperm across a plurality of sperm micrographs of a patient are counted. The patient's sperm morphology analysis result is then calculated from these counts.
According to the method provided by the embodiment of the application, the segmentation model and the clustering algorithm are combined to perform image segmentation on sperm micrographs in a sperm detection scenario, so that mutually overlapping sperm tails are accurately identified and segmented. The whole process requires no manual participation, achieving automatic sperm segmentation, greatly improving the efficiency of segmenting sperm in micrographs, and providing great convenience for sperm morphology detection. Combined with labeling data from medical professionals, the method of the embodiment of the application can judge whether sperm morphology is normal or abnormal and produce corresponding detection results, providing a convenient, efficient workflow and accurate, reliable results for the clinical diagnosis of male sperm morphology analysis; it promises to resolve the current dependence of sperm morphology analysis on manual work and to play an important role in male clinical diagnosis.
In addition, the embodiment of the application also provides a device for carrying out segmentation processing on the image, and the structure of the device is shown in fig. 3.
The device comprises: means for acquiring a target image to be processed (hereinafter referred to as the "image acquisition device 101"); means for performing segmentation processing on each target object contained in the target image using a target segmentation model and a target clustering algorithm (hereinafter referred to as the "segmentation processing device 102"); and means for generating the segmentation result information of the target image (hereinafter referred to as the "result generation device 103").
Referring to fig. 3, an image acquisition apparatus 101 acquires a target image to be processed, the target image containing a plurality of target objects.
Wherein the target object is an object of an elongated structure.
Optionally, the target image contains a plurality of mutually overlapping target objects, for example a sperm image, a blood vessel image, or a nerve image in a detection scene, a street image, or the like.
According to one embodiment, the apparatus further comprises image preprocessing means, the operation of which is performed prior to the operation of the segmentation processing means 102.
The image preprocessing device performs image preprocessing on the acquired target image.
Wherein the image preprocessing includes at least any one of the following operations:
1) Performing standardization processing on the size of the target image;
2) Performing image optimization processing on the target image; for example, using Python to increase the image saturation, image contrast, and/or the vividness of green areas of the target image;
3) Performing sharpness improvement processing on the target image; for example, using functions such as those in the color, filters, and morphology modules of the Skimage library to simulate the dehazing operation of Photoshop, increasing the sharpness of the key information in the target image;
4) Turning the background and small particulate impurities in the target image to pure white.
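The four preprocessing operations can be sketched with Pillow as below; all enhancement factors and the background threshold are illustrative assumptions, not values from the patent.

```python
import numpy as np
from PIL import Image, ImageEnhance

def preprocess(img, size=(1024, 1024), bg_thresh=230):
    """Minimal sketch of the four listed operations: 1) standardise size,
    2) raise saturation and contrast, 3) raise sharpness, 4) flatten the
    near-white background (and small bright specks) to pure white."""
    img = img.convert("RGB").resize(size)            # 1) size normalisation
    img = ImageEnhance.Color(img).enhance(1.3)       # 2) saturation boost
    img = ImageEnhance.Contrast(img).enhance(1.2)    #    contrast boost
    img = ImageEnhance.Sharpness(img).enhance(1.5)   # 3) sharpness boost
    arr = np.asarray(img).copy()
    arr[arr.min(axis=2) >= bg_thresh] = 255          # 4) background to white
    return Image.fromarray(arr)
```

A dedicated dehazing step (e.g. via skimage filters) could replace the simple sharpness boost where more aggressive clean-up is needed.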
Continuing with the description of fig. 3, the segmentation processing device 102 recognizes and segments each target object included in the target image by using the target segmentation model and the target clustering algorithm.
Wherein the object segmentation model includes various models that can be used for image segmentation processing.
Optionally, the target segmentation model is the Segment Anything Model (SAM). The target segmentation model may also be one of various variant models based on the SAM model, such as the SAM-Med2D model or the TinySAM model.
Wherein the target clustering algorithm includes various clustering algorithms that can handle intersecting structures. Optionally, the target clustering algorithm is Spectral Clustering on Multiple Manifolds (SMMC), a clustering algorithm mainly used for intersecting-manifold structures.
Specifically, the segmentation processing device 102 includes an object recognition device, a mask storage device, a skeleton extraction device, a quantity counting device, a line segmentation device, and a category assignment device.
The object recognition device uses the target segmentation model to iteratively recognize the target objects contained in the target image until all target objects are recognized.
And the mask storage device stores the mask corresponding to the target object identified in each turn into the target file.
The skeleton extraction device uses a skeleton extraction (Skeletonize) algorithm to carry out skeleton extraction processing on the mask corresponding to the identified target object.
Skeleton extraction refers to extracting, from a binarized image, the lines connecting all local extreme points to serve as the skeleton of an object. Through skeleton extraction, the skeleton extraction device obtains, for each identified target object, the lines connecting all of its local extreme points as the skeleton of that target object.
The number statistics device is used for counting the number of endpoints based on each layer of mask obtained by the skeleton extraction process and determining the number of target objects contained in the mask based on a preset endpoint matching rule.
The line segmentation device uses a target clustering algorithm to segment a plurality of skeletonized lines obtained through skeleton extraction processing.
Specifically, pixel points are sampled from the plurality of skeletonized lines obtained by the skeleton extraction process, and the skeletonized lines are converted into a lattice structure supported by the target clustering algorithm; then, with suitable parameters and a random seed set, the target clustering algorithm is invoked to segment the plurality of skeletonized lines.
The class allocation device takes the divided skeletonized lines as reference lines and allocates corresponding classes for each pixel point of the original mask.
By distributing corresponding categories for each pixel point of the original mask, classification of the pixel level of the original mask is achieved, and then splitting of each target object in the original mask is achieved.
Continuing with reference to fig. 3, the result generation device 103 generates the division result information of the target object corresponding to the target image.
The segmentation result includes, but is not limited to, quantity information, contour information, position information and the like of all target objects obtained through segmentation processing in the target image.
According to the device provided by the embodiment of the application, the image segmentation processing is carried out on the objects with the long and thin structures overlapped with each other in the image by combining the segmentation model and the clustering algorithm, and the excellent image segmentation effect is achieved.
Fig. 4 is a schematic structural diagram of an apparatus for segmentation processing of sperm images according to an embodiment of the present application.
The device comprises: means for acquiring a target sperm image to be processed (hereinafter referred to as the "sperm image acquisition device 201"); means for obtaining sperm head data by performing a first segmentation process on the target sperm image using a target segmentation model (hereinafter referred to as the "first segmentation processing device 202"); means for performing head subtraction processing on the target sperm image based on the sperm head data to obtain target tail data for the second segmentation process (hereinafter referred to as the "head subtraction processing device 203"); means for obtaining sperm tail data by performing a second segmentation process on the target tail data using the target segmentation model and a target clustering algorithm (hereinafter referred to as the "second segmentation processing device 204"); means for matching and splicing the head and tail of each sperm based on the obtained sperm head data and sperm tail data to obtain the segmentation result of each sperm (hereinafter referred to as the "splice processing device 205"); and means for generating segmentation result information of the target sperm image based on the segmentation result of each sperm (hereinafter referred to as the "segmentation result generation device 206").
Referring to fig. 4, a sperm image acquisition device 201 acquires a target sperm image to be processed.
Wherein the target sperm image contains a plurality of mutually overlapping sperm, such as a sperm micrograph in a sperm detection scenario.
Optionally, the sperm image acquisition device 201 collects sperm micrographs and performs a preliminary screening to exclude azoospermic pictures and severely color-cast pictures, taking the screened sperm micrographs as the target sperm images.
Optionally, the device further comprises a sperm image preprocessing device.
The sperm image preprocessing device performs image preprocessing on the acquired target sperm image. The operation of the sperm image preprocessing device is similar to that of the image preprocessing device, and will not be described herein.
The first segmentation processing means 202 obtains sperm head data by performing a first segmentation process on the target sperm image using a target segmentation model.
Wherein the target segmentation model is a SAM model. The target segmentation model is the same as or similar to the target segmentation model described in connection with the segmentation processing device 102, and will not be described again here.
Specifically, the first segmentation processing device 202 includes a head recognition device and a head screening device.
The head recognition device recognizes the sperm heads contained in the target sperm image by using the target segmentation model, and head masks corresponding to all the recognized sperm heads are obtained.
The head screening device screens out the effective sperm head according to the preset screening condition based on the head mask file.
Wherein the screening conditions are used to determine whether the identified sperm head is a valid sperm head.
According to one embodiment, the screening conditions are set based on the intersection over union (IoU) of the sperm head region and the whole mask. Specifically, the head screening device sets a color value range for the head-staining color based on the head mask file; calculates, for each mask layer, the intersection over union of the head-colored region and the whole mask; and compares it with a preset IoU threshold to screen out the valid sperm heads in the head masks.
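The color-based IoU check can be sketched as follows; the hue and saturation bounds (purple-ish, matching the staining example later in the text) and the function names are illustrative assumptions.

```python
import numpy as np
from skimage.color import rgb2hsv

def head_color_iou(image_rgb, mask, hue_range=(0.7, 0.9), min_sat=0.2):
    """IoU between the head-stain-coloured region of the image and one
    head mask layer; heads whose IoU clears a preset threshold count
    as valid sperm heads."""
    hsv = rgb2hsv(image_rgb)
    colour = ((hsv[..., 0] >= hue_range[0]) & (hsv[..., 0] <= hue_range[1])
              & (hsv[..., 1] >= min_sat))          # stain-coloured pixels
    inter = (colour & mask).sum()
    union = (colour | mask).sum()
    return inter / union if union else 0.0
```

A call like `head_color_iou(img, layer) >= 0.5` would then implement the screening condition, with the threshold as the tunable value.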
Optionally, the first segmentation processing device 202 further includes a head mask storage device.
The head mask storage device stores the head mask corresponding to the head of the effective sperm.
Continuing with the description of fig. 4, the head subtraction processing means 203 performs a head subtraction process on the target sperm image based on the sperm head data to obtain target tail data for performing the second segmentation process.
According to a first example of the present application, it is assumed that the staining method employed stains the sperm head purple. For the sperm micrograph subjected to image preprocessing, the first segmentation processing device 202 performs the first segmentation process using the predetermined SAM model, producing mask files for all first-segmentation results. Next, for the generated masks, the head screening device sets a purple HSV color value range and calculates the intersection over union of the purple region and the whole mask in each mask layer. It then judges, against the preset IoU threshold, whether each mask is a valid sperm head, and the head mask storage device stores the screened valid sperm heads in a head file. The head subtraction processing device 203 then removes the regions corresponding to the sperm heads from the original image, so that the tails can be segmented subsequently.
Optionally, the apparatus further comprises an impurity treatment device.
The impurity processing device performs impurity removal processing on the target sperm image subjected to the head subtraction processing, thereby taking the image data subjected to the impurity removal as target tail data for performing the second segmentation processing.
Continuing with the first example described above, many colorant blocks remain after head subtraction and interfere with model identification; the impurity processing device sets the following rule to remove these colorant blocks: for each layer of mask, find the two pixel points a and b within the mask that are farthest apart, calculate the distance d between a and b, and take S = d², where S represents the theoretical maximum area that can contain the two points a and b; let s be the actual area of the mask, and calculate q = s/S. The larger the q value, the closer the mask layer is to a clump-like structure rather than a slender sperm tail. A suitable threshold on q is set to screen out clumped impurities and sperm tails that do not meet the requirements, yielding a clean tail set for fine tail segmentation.
Continuing with the description of fig. 4, the second segmentation processing device 204 performs a second segmentation process on the target tail data using the target segmentation model and the target clustering algorithm to obtain sperm tail data.
Optionally, the target clustering algorithm is SMMC. The target clustering algorithm here is the same as or similar to the target clustering algorithm described above, and will not be repeated here.
Specifically, the second segmentation processing device 204 includes a tail recognition device, a tail storage device, a tail skeleton extraction device, a sperm count statistics device, a tail line segmentation device, and a tail category distribution device.
The tail recognition device uses the target segmentation model to recognize the sperm tails contained in the target tail data in an iterative manner until all sperm tails are recognized.
And the tail storage device stores tail mask images corresponding to the tail of the sperm identified in each round.
The tail skeleton extraction device performs skeleton extraction processing on the tail mask image by using a skeleton extraction algorithm.
The sperm count counting device counts the number of endpoints based on each layer of mask obtained by the skeleton extraction process and determines the number of sperm contained in the mask based on a preset endpoint matching rule.
The tail line segmentation device uses a target clustering algorithm to segment a plurality of skeletonized lines obtained through skeleton extraction processing.
Specifically, the tail line segmentation device samples pixel points from the plurality of skeletonized lines obtained by the skeleton extraction process and converts them into a lattice structure supported by the target clustering algorithm; then, with suitable parameters and a random seed set, it invokes the target clustering algorithm to segment the plurality of skeletonized lines.
The tail class distribution device takes the segmented skeletonized line as a datum line, distributes corresponding classes for each pixel point of the original mask, and achieves splitting of single sperm tails in the original mask.
Continuing with the first example described above, based on the clean tail set, the tail recognition device uses the SAM model to iteratively identify all sperm tails in the picture and saves them in mask format; at this stage many tails in the result overlap, and individual sperm tails are not yet separated. In each iteration, the region of the original image corresponding to the masks obtained in that round is erased: the pixel points at those positions are set to the white background color, and residual edge shadows are removed. The tail recognition device then performs the next round of SAM model processing until all sperm tails in the picture have been identified, after which the tail storage device collects the tail masks identified across all rounds and stores them in a tail file for the single-tail segmentation of the next step.
Then, the tail skeleton extraction device skeletonizes the tail masks using the Skeletonize algorithm. Next, the sperm count statistics device determines the number of endpoints of each skeletonized mask layer, selects the masks with more than 2 endpoints (i.e., masks containing more than one tail), and infers the number of sperm contained in each such mask from its endpoint count. The predetermined pairing rule is: 3 or 4 endpoints correspond to 2 tails; 5 or 6 endpoints correspond to 3 tails; and so on. The tail line segmentation device then samples pixel points from the skeletonized result, converts them into a lattice structure supported by the clustering algorithm and, with suitable parameters and a random seed set, invokes the clustering algorithm to segment the skeletonized lines. Finally, the tail category distribution device uses the segmented skeletonized lines as reference lines and assigns a category to each pixel point of the original mask, achieving pixel-level classification of the original mask, i.e., the splitting of individual sperm tails.
Continuing with the description of fig. 4, the splice processing device 205 matches and splices the head and tail of each sperm based on the obtained sperm head data and sperm tail data, thereby obtaining the segmentation result of each sperm.
Specifically, the splice processing device 205 obtains the long axis of each sperm head based on the sperm head data obtained by the first segmentation process; then, based on the sperm tail data obtained by the second segmentation process, determines the front and rear ends of each sperm tail according to a preset front/rear-end determination rule; then, according to a preset geometric rule, matches the skeletonized straight line at the front end of each sperm tail with the sperm head long axes to obtain matched head-tail pairs; and finally splices each matched head and tail to obtain the segmentation result of each sperm.
Continuing with the first example, the splice processing device 205 determines the front and rear ends of each sperm tail segmented in the previous step from the color change trend and the mask width, and fits the front-end skeletonized line to a straight line. It then finds the long axis of each previously segmented sperm head and, according to the geometric rule, matches the tail's fitted straight line against the head long axes by distance and included angle, performing head-tail pairing; the matched heads and tails are then spliced to obtain the complete segmentation result for each sperm.
The segmentation result generation means 206 generates segmentation result information of the target sperm image based on the segmentation result of each sperm.
Wherein the segmentation result information includes, but is not limited to, the number of sperm contained in the target sperm image, the outline and size of each sperm, its position information in the target image, and the like.
According to one embodiment, the apparatus further comprises model training means.
The model training device trains a classification model for classifying sperm morphology based on the segmentation result information of the sperm images and the corresponding labeling data.
Specifically, data labels for the sperm morphology classification problem are acquired from medical professionals; the model training device then trains the classification model using a deep learning approach such as semi-supervised learning, to classify the morphology of each segmented sperm as qualified or unqualified.
According to one embodiment, the device further comprises a detection analysis device.
The detection analysis device uses the trained classification model to perform sperm eligibility analysis on the sperm micrographs of the patient.
For example, the detection analysis device uses a trained classification model to perform an eligibility analysis on each segmented sperm, and counts the numbers of morphologically normal and morphologically abnormal sperm across a plurality of sperm micrographs of a patient. The patient's sperm morphology analysis result is then calculated from these counts.
According to the device provided by the embodiment of the application, the segmentation model and the clustering algorithm are combined to perform image segmentation on sperm micrographs in a sperm detection scenario, so that mutually overlapping sperm tails are accurately identified and segmented. The whole process requires no manual participation, achieving automatic sperm segmentation, greatly improving the efficiency of segmenting sperm in micrographs, and providing great convenience for sperm morphology detection. Combined with labeling data from medical professionals, the device of the embodiment of the application can judge whether sperm morphology is normal or abnormal and produce corresponding detection results, providing a convenient, efficient workflow and accurate, reliable results for the clinical diagnosis of male sperm morphology analysis; it promises to resolve the current dependence of sperm morphology analysis on manual work and to play an important role in male clinical diagnosis.
Based on the same inventive concept, the embodiment of the present application further provides an electronic device; the method corresponding to the electronic device may be the method for performing segmentation processing on images in the foregoing embodiments, and it solves the problem on a similar principle. The electronic device provided by the embodiment of the application comprises: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods and/or technical solutions of the various embodiments of the application described above.
The electronic device may be a user device, a device formed by integrating a user device and a network device through a network, or an application running on such a device. The user device includes, but is not limited to, computers, mobile phones, tablet computers, smart watches, wristbands, and other terminal devices; the network device includes, but is not limited to, a network host, a single network server, a set of multiple network servers, or a cloud-computing-based set of computers, and may be used to implement part of the processing functions. Here, the cloud is composed of a large number of hosts or network servers based on cloud computing, a form of distributed computing in which a group of loosely coupled computers acts as one virtual computer.
Fig. 5 shows the structure of a device suitable for implementing the methods and/or technical solutions in the embodiments of the present application. The device 1200 includes a central processing unit (CPU) 1201, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1202 or a program loaded from a storage portion 1208 into a random access memory (RAM) 1203. The RAM 1203 also stores various programs and data required for system operation. The CPU 1201, ROM 1202, and RAM 1203 are connected to each other through a bus 1204. An input/output (I/O) interface 1205 is also connected to the bus 1204.
The following components are connected to the I/O interface 1205: an input portion 1206 including a keyboard, mouse, touch screen, microphone, infrared sensor, etc.; an output portion 1207 including a display such as a cathode ray tube (CRT), liquid crystal display (LCD), LED display, or OLED display, and a speaker; a storage portion 1208 comprising one or more computer-readable media such as a hard disk, optical disk, magnetic disk, or semiconductor memory; and a communication portion 1209 including a network interface card such as a local area network (LAN) card or a modem. The communication portion 1209 performs communication processing via a network such as the Internet.
In particular, the methods and/or embodiments of the present application may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. The above-described functions defined in the method of the present application are performed when the computer program is executed by a Central Processing Unit (CPU) 1201.
Another embodiment of the present application also provides a computer readable storage medium having stored thereon computer program instructions executable by a processor to implement the method and/or the technical solution of any one or more of the embodiments of the present application described above.
In particular, the present embodiments may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowchart or block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of devices, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of the units is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling, direct coupling, or communication connection shown or discussed between components may be an indirect coupling or communication connection via some interfaces, devices, or units, and may be in electrical, mechanical, or other form.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the apparatus claims may also be implemented by a single unit or means in software or hardware. The terms first, second, etc. are used to denote names and do not imply any particular order.
Claims (13)
1. A method for segmentation processing of an image, wherein the method comprises:
Acquiring a target image to be processed, wherein the target image comprises a plurality of target objects;
performing a segmentation process on each target object contained in the target image by using a target segmentation model and a target clustering algorithm;
and generating segmentation result information of the target image.
2. The method according to claim 1, wherein the segmenting each target object contained in the target image by using a target segmentation model and a target clustering algorithm comprises:
Identifying target objects contained in the target image in an iterative mode by using the target segmentation model until all the target objects are identified;
storing the mask corresponding to the target object identified in each round into a target file;
performing skeleton extraction processing on the mask corresponding to the identified target object by using a skeleton extraction algorithm;
based on each layer of mask obtained by the skeleton extraction processing, counting the number of endpoints and determining the number of target objects contained in the mask based on a preset endpoint matching rule;
segmenting the plurality of skeletonized lines obtained by the skeleton extraction processing by using the target clustering algorithm;
and, taking the segmented skeletonized lines as reference lines, assigning a corresponding category to each pixel of the original mask.
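The endpoint-counting step of claim 2 can be illustrated with a minimal sketch. The helper names (`count_endpoints`, `estimate_object_count`) and the specific rule used here (each open curve contributes two skeleton endpoints) are assumptions for illustration; the claim only specifies a "preset endpoint matching rule".

```python
import numpy as np
from scipy.ndimage import convolve

def count_endpoints(skeleton: np.ndarray) -> int:
    """Count endpoints of a binary skeleton: pixels with exactly one 8-neighbour."""
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbours = convolve(skeleton.astype(np.uint8), kernel, mode="constant")
    return int(np.sum((skeleton > 0) & (neighbours == 1)))

def estimate_object_count(skeleton: np.ndarray) -> int:
    """Illustrative matching rule: each open curve contributes two endpoints."""
    return max(1, count_endpoints(skeleton) // 2)

# Two separate 1-pixel-wide lines drawn on a blank mask.
mask = np.zeros((10, 10), dtype=np.uint8)
mask[2, 1:8] = 1   # horizontal line: 2 endpoints
mask[5:9, 5] = 1   # vertical line: 2 endpoints

print(count_endpoints(mask))        # 4
print(estimate_object_count(mask))  # 2
```

With a real mask, the skeleton would first be thinned (e.g. by a skeletonization algorithm) so that every line is one pixel wide, which is what makes the one-neighbour endpoint test valid.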
3. The method of claim 2, wherein the segmenting the plurality of skeletonized lines resulting from the skeleton extraction process using the target clustering algorithm comprises:
sampling the pixel points of the plurality of skeletonized lines obtained through the skeleton extraction processing, and converting the skeletonized lines into a point-set structure supported by the target clustering algorithm;
and invoking the target clustering algorithm, with appropriate parameters and a random seed set, to segment the plurality of skeletonized lines.
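A hypothetical version of claim 3's two steps, using scikit-learn's KMeans as the clustering algorithm (the claim does not name one; KMeans is chosen here because it accepts a random seed). The cluster count `k` and the seed value are illustrative parameters.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical skeleton mask containing two well-separated line fragments.
skeleton = np.zeros((20, 20), dtype=np.uint8)
skeleton[3, 2:9] = 1      # fragment A (7 pixels)
skeleton[15, 10:18] = 1   # fragment B (8 pixels)

# Step 1: sample the skeleton pixels into an (N, 2) point set.
points = np.column_stack(np.nonzero(skeleton))

# Step 2: cluster with a fixed random seed so the split is reproducible.
k = 2  # in the claimed method, k would come from the endpoint count
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(points)

# Each skeleton pixel now carries a per-object label that can later be
# propagated to the original mask (nearest reference line wins).
print(len(set(labels.tolist())))  # 2
```

Because `np.nonzero` scans row-major, the first seven points all belong to fragment A, so a well-separated input like this yields one label per fragment.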
4. A method according to any one of claims 1 to 3, wherein the method further comprises, prior to the step of performing a segmentation process on each target object contained in the target image by using the target segmentation model and the target clustering algorithm:
performing image preprocessing on the acquired target image;
wherein the image preprocessing includes at least any one of the following operations:
performing standardization processing on the size of the target image;
performing image optimization processing on the target image;
performing sharpness (definition) improvement processing on the target image;
turning the background and small-particle impurities in the target image to pure white.
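The impurity-whitening operation of claim 4 might look like the following sketch. The intensity threshold, the area limit, and the connected-component approach are illustrative choices, not taken from the patent.

```python
import numpy as np
from scipy import ndimage

def whiten_small_impurities(gray: np.ndarray, max_area: int = 4,
                            fg_thresh: int = 200) -> np.ndarray:
    """Turn small dark specks (assumed impurities) to pure white (255).
    Threshold and area limit are illustrative, not from the patent."""
    out = gray.copy()
    foreground = gray < fg_thresh          # dark pixels = candidate objects
    labels, n = ndimage.label(foreground)  # 4-connected components
    for i in range(1, n + 1):
        component = labels == i
        if component.sum() <= max_area:    # tiny blob -> treat as impurity
            out[component] = 255
    return out

# One large object (kept) and one 2-pixel speck (whitened).
img = np.full((12, 12), 255, dtype=np.uint8)
img[2:8, 2:8] = 50    # large object, area 36
img[10, 10:12] = 50   # speck, area 2

cleaned = whiten_small_impurities(img)
print(int(cleaned[10, 10]), int(cleaned[4, 4]))  # 255 50
```

Whitening the background this way also simplifies the later segmentation passes, since anything non-white is guaranteed to be a candidate object.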
5. A method for segmentation processing of sperm images using the method of any one of claims 1 to 4, wherein the method comprises:
Acquiring a target sperm image to be processed;
Performing first segmentation processing on the target sperm image by using a target segmentation model to obtain sperm head data;
performing head subtraction processing on the target sperm image based on the sperm head data to obtain target tail data for performing second segmentation processing;
performing second segmentation processing on the target tail data by using the target segmentation model and a target clustering algorithm to obtain sperm tail data;
performing matching and stitching processing on the head and tail of each sperm based on the obtained sperm head data and sperm tail data, to obtain a segmentation result for each sperm;
and generating segmentation result information of the target sperm image based on the segmentation result of each sperm.
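The head-subtraction step in the pipeline above can be sketched as masking out the detected head pixels so that only the tails remain for the second segmentation pass. The helper name `subtract_heads` and the pure-white background value (carried over from claim 4's preprocessing) are assumptions for illustration.

```python
import numpy as np

def subtract_heads(image: np.ndarray, head_mask: np.ndarray,
                   background: int = 255) -> np.ndarray:
    """Overwrite head pixels with the background value so only tails
    remain as input for the second segmentation pass."""
    tails_only = image.copy()
    tails_only[head_mask > 0] = background
    return tails_only

img = np.full((6, 6), 255, dtype=np.uint8)
img[1:3, 1:3] = 40   # head region
img[3:6, 3] = 80     # tail pixels

head_mask = np.zeros_like(img)
head_mask[1:3, 1:3] = 1

tails = subtract_heads(img, head_mask)
print(int(tails[1, 1]), int(tails[4, 3]))  # 255 80
```

Removing the heads first means the tail-only image contains thin curvilinear structures exclusively, which is what makes the later skeletonization and clustering steps tractable.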
6. The method of claim 5, wherein the obtaining sperm head data by performing a first segmentation process on the target sperm image using a target segmentation model comprises:
identifying the sperm heads contained in the target sperm image by using the target segmentation model to obtain head masks corresponding to all identified sperm heads;
screening out valid sperm heads from the head mask file according to predetermined screening conditions.
7. The method of claim 6, wherein screening for valid sperm heads according to predetermined screening conditions comprises:
setting a color value range for the color corresponding to the sperm head based on the head mask file;
calculating the intersection-over-union (IoU) ratio between the sperm-head color region in each mask layer and the whole mask;
and screening out the valid sperm heads in the head mask by comparing the IoU ratio with a preset IoU threshold.
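The IoU screening of claim 7 might be implemented roughly as follows. The 0.05 threshold and the `screen_heads` helper are hypothetical, since the claim only specifies a "preset" threshold.

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union) if union else 0.0

def screen_heads(layers, full_mask, threshold: float = 0.05):
    """Keep the indices of layers whose head region overlaps the full
    mask strongly enough; the 0.05 threshold is illustrative."""
    return [i for i, layer in enumerate(layers) if iou(layer, full_mask) >= threshold]

full = np.zeros((8, 8), dtype=bool)
full[1:5, 1:5] = True                              # combined head mask

good = np.zeros_like(full); good[1:4, 1:4] = True  # mostly inside the mask
bad = np.zeros_like(full);  bad[6, 6] = True       # stray speck outside

print(screen_heads([good, bad], full))  # [0]
```

A stray speck shares no pixels with the combined mask, so its IoU is zero and it is discarded, while a genuine head layer overlaps heavily and survives.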
8. The method of claim 5, wherein the obtaining sperm tail data by performing a second segmentation process on the target tail data using the target segmentation model and a target clustering algorithm comprises:
identifying, in an iterative manner, the sperm tails contained in the target tail data by using the target segmentation model, until all sperm tails are identified;
storing the tail mask image corresponding to the sperm tails identified in each round;
performing skeleton extraction processing on the tail mask images by using a skeleton extraction algorithm;
counting the number of endpoints based on each layer of mask obtained by the skeleton extraction processing, and determining the number of sperm contained in the mask based on a preset endpoint matching rule;
segmenting the plurality of skeletonized lines obtained by the skeleton extraction processing by using the target clustering algorithm;
and, taking the segmented skeletonized lines as reference lines, assigning a corresponding category to each pixel of the original mask so as to split off the tail of each individual sperm in the original mask.
9. The method of claim 5, wherein the matching and stitching of the head and tail of each sperm based on the obtained sperm head data and sperm tail data comprises:
obtaining the long axis of each sperm head based on the sperm head data obtained by the first segmentation processing;
obtaining the front end and rear end of each sperm tail, based on the sperm tail data obtained by the second segmentation processing, according to preset front-end and rear-end determination rules;
matching the skeletonized straight line at the front end of each sperm tail with the long axis of a sperm head according to preset geometric rules, to obtain the matched head and tail of each sperm;
and stitching the matched head and tail of each sperm to obtain the segmentation result for each sperm.
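Under simplifying assumptions, claim 9's geometric matching could reduce to pairing each tail's front endpoint with the nearest head long-axis tip. The greedy nearest-neighbour rule and the `max_dist` cutoff below are illustrative, not the claim's exact geometric rule (which would also compare line directions).

```python
import numpy as np

def match_tails_to_heads(head_axes, tail_fronts, max_dist: float = 10.0):
    """Greedy illustrative matching: each tail front endpoint is paired
    with the nearest unused head long-axis tip within max_dist.
    Returns (head_index, tail_index) pairs."""
    pairs = []
    used = set()
    for t, front in enumerate(tail_fronts):
        best, best_d = None, max_dist
        for h, (_base, tip) in enumerate(head_axes):
            if h in used:
                continue
            d = np.linalg.norm(np.asarray(front, float) - np.asarray(tip, float))
            if d < best_d:
                best, best_d = h, d
        if best is not None:
            used.add(best)
            pairs.append((best, t))
    return pairs

# Two heads (long axis given as base->tip points) and two tail front endpoints.
heads = [((0, 0), (5, 0)), ((0, 20), (5, 20))]
tails = [(6, 20), (6, 1)]  # tail 0 starts near head 1, tail 1 near head 0

print(match_tails_to_heads(heads, tails))  # [(1, 0), (0, 1)]
```

Once matched, stitching is just taking the union of each pair's head mask and tail mask to form the per-sperm segmentation result.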
10. The method of any one of claims 5 to 9, wherein the method further comprises:
training a classification model for sperm morphology classification based on the segmentation result information of the sperm images and corresponding labeling data.
11. An apparatus for performing segmentation processing on an image, wherein the apparatus comprises:
means for acquiring a target image to be processed, the target image comprising a plurality of target objects;
means for performing segmentation processing on each target object contained in the target image by using a target segmentation model and a target clustering algorithm;
and means for generating segmentation result information of the target image.
12. An electronic device, the electronic device comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 10.
13. A computer readable medium having stored thereon computer program instructions executable by a processor to perform the method of any of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410163502.XA CN118196406A (en) | 2024-02-05 | 2024-02-05 | Method, apparatus and computer readable medium for segmentation processing of images |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118196406A true CN118196406A (en) | 2024-06-14 |
Family
ID=91400692
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410163502.XA Pending CN118196406A (en) | 2024-02-05 | 2024-02-05 | Method, apparatus and computer readable medium for segmentation processing of images |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118196406A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110335277A (en) * | 2019-05-07 | 2019-10-15 | 腾讯科技(深圳)有限公司 | Image processing method, device, computer readable storage medium and computer equipment |
CN111161275A (en) * | 2018-11-08 | 2020-05-15 | 腾讯科技(深圳)有限公司 | Method and device for segmenting target object in medical image and electronic equipment |
CN112183212A (en) * | 2020-09-01 | 2021-01-05 | 深圳市识农智能科技有限公司 | Weed identification method and device, terminal equipment and readable storage medium |
CN113780145A (en) * | 2021-09-06 | 2021-12-10 | 苏州贝康智能制造有限公司 | Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium |
CN114693697A (en) * | 2020-12-29 | 2022-07-01 | 武汉Tcl集团工业研究院有限公司 | Image processing method, device, equipment and computer readable storage medium |
WO2024016812A1 (en) * | 2022-07-19 | 2024-01-25 | 腾讯科技(深圳)有限公司 | Microscopic image processing method and apparatus, computer device, and storage medium |
Non-Patent Citations (2)
Title |
---|
ROBERT MANZKE et al.: "Automatic Segmentation of Rotational X-Ray Images for Anatomic Intra-Procedural Surface Generation in Atrial Fibrillation Ablation Procedures", IEEE Transactions on Medical Imaging, 31 December 2010 (2010-12-31) * |
WANG Chuang: "Computer-Vision-Based Animal Sperm Morphology Analysis System", Electronics World, no. 17, 8 September 2018 (2018-09-08) * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||