CN114445376A - Image segmentation method, model training method thereof, related device, equipment and medium

Image segmentation method, model training method thereof, related device, equipment and medium

Info

Publication number
CN114445376A
Authority
CN
China
Prior art keywords
sample
image
target
pixel point
target organ
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210103093.5A
Other languages
Chinese (zh)
Inventor
王娜 (Wang Na)
刘星龙 (Liu Xinglong)
陈翼男 (Chen Yinan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202210103093.5A
Publication of CN114445376A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30021 Catheter; Guide wire
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular

Abstract

The application discloses an image segmentation method, a training method for its model, and a related device, equipment and medium. The training method of the image segmentation model includes: acquiring a sample medical image of a target organ, in which sample pixel points are labeled with sample marks and sample weights; performing target segmentation on the sample medical image with the image segmentation model to obtain prediction marks for the sample pixel points; measuring the difference between the sample marks and the prediction marks based on the sample weights of the sample pixel points to obtain a model loss; and adjusting network parameters of the image segmentation model based on the model loss. With this scheme, the segmentation accuracy of different parts of the target organ can be balanced during image segmentation.

Description

Image segmentation method, model training method thereof, related device, equipment and medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image segmentation method, a model training method thereof, and related apparatuses, devices, and media.
Background
Medical images such as CT (Computed Tomography) images are of great significance in scenarios such as auxiliary diagnosis and surgical planning. For example, a medical image can be segmented to obtain the segmentation result of a relevant organ, so as to assist a doctor in resecting a lesion while avoiding critical structures.
However, in real scenarios the object to be segmented is not always regular; when segmenting organs such as the trachea and blood vessels, many branches of different thicknesses must be taken into account. In such cases, machine segmentation is usually followed by manual correction through user interaction to improve the final accuracy, which is time-consuming and labor-intensive and still cannot guarantee the segmentation effect. In view of this, how to balance the segmentation accuracy of different parts of the target organ during image segmentation has become an urgent problem to be solved.
Disclosure of Invention
The application provides an image segmentation method, a model training method thereof, and a related device, equipment and medium.
A first aspect of the present application provides a training method for an image segmentation model, including: acquiring a sample medical image of a target organ, in which sample pixel points are labeled with sample marks and sample weights, where a sample mark represents the target category to which the sample pixel point belongs and a sample weight is set based on the sample distance from the sample pixel point to the surface of the target organ; performing target segmentation on the sample medical image with the image segmentation model to obtain prediction marks for the sample pixel points, where a prediction mark represents at least the possibility that the sample pixel point is predicted to belong to the target category; measuring the difference between the sample marks and the prediction marks based on the sample weights of the sample pixel points to obtain a model loss; and adjusting network parameters of the image segmentation model based on the model loss.
Therefore, a sample medical image of a target organ is acquired, and the sample pixel points in it are labeled with sample marks and sample weights: a sample mark represents the target category to which the sample pixel point belongs, and a sample weight is set based on the sample distance from the sample pixel point to the surface of the target organ. Sample pixel points of different categories thus receive different sample weights, and points of the same category are further distinguished by their distance to the organ surface, so the sample weights differentiate the sample pixel points. On this basis, the image segmentation model performs target segmentation on the sample medical image to obtain prediction marks, which represent at least the possibility that each sample pixel point is predicted to belong to the target category; the difference between the sample marks and the prediction marks is measured based on the sample weights of the sample pixel points to obtain the model loss, and the network parameters of the image segmentation model are adjusted based on that loss. Because the sample weights bias the predicted loss differently at different sample pixel points, the model learns both the hard-to-segment and the easy-to-segment parts of the target organ, so the segmentation accuracy of different parts of the target organ can be balanced during image segmentation.
Here, measuring the difference between the sample marks and the prediction marks based on the sample weights to obtain the model loss includes: measuring the region difference between the sample marks and the prediction marks based on the sample weights to obtain a first loss, and/or measuring the distribution difference between them based on the sample weights to obtain a second loss, and obtaining the model loss from the first loss and/or the second loss. The first loss measures the degree of region overlap between the sample region of the target organ (the actually labeled region) and the prediction region (the region predicted by the model), while the second loss measures the data-distribution difference between the sample marks and the prediction marks.
Thus, the first loss measures the region overlap between the actually labeled sample region and the model-predicted region of the target organ, while the second loss measures the data-distribution difference between the sample marks and the prediction marks. Measuring one or both of these differences enriches the dimensions of the loss measurement and improves the accuracy and comprehensiveness of the model loss.
Here, the sample mark is represented by a mark value, and the prediction mark includes at least a probability value that the sample pixel point is predicted to belong to the target category. Measuring the region difference between the sample marks and the prediction marks based on the sample weights to obtain the first loss includes: for each sample pixel point, obtaining a first product of its mark value and sample weight, a second product of its probability value and sample weight, and a third product of its mark value, probability value and sample weight; summing the first, second and third products over all sample pixel points to obtain a first sum, a second sum and a third sum respectively; and obtaining the first loss based on the ratio of the third sum to a reference sum, where the reference sum is the sum of the first sum and the second sum, and the ratio is negatively correlated with the first loss.
Thus, when the model loss includes the first loss, the ratio of the third sum to the reference sum represents the degree of overlap between the sample region and the prediction region under weighting by the sample weights: the higher the overlap, the higher the segmentation accuracy of the model, and the lower the overlap, the lower the accuracy. Minimizing the first loss therefore improves the model's segmentation accuracy along the optimization dimension of region overlap.
Here, the sample mark is represented by a mark value, and the prediction mark includes at least a probability value that the sample pixel point is predicted to belong to the target category. Measuring the distribution difference between the sample marks and the prediction marks based on the sample weights to obtain the second loss includes: for each sample pixel point, obtaining the logarithm of its probability value, a fourth product of its mark value and sample weight, and a fifth product of the logarithm and the fourth product; and obtaining the second loss based on a fourth sum computed by summing the fifth products over all sample pixel points, where the fourth sum is negatively correlated with the second loss.
Thus, when the model loss includes the second loss, the fourth sum reflects the data-distribution difference between the sample marks and the prediction marks under weighting by the sample weights; since the fourth sum is negatively correlated with the second loss, minimizing the second loss improves the model's segmentation accuracy along the optimization dimension of data distribution.
The sample weight of the sample pixel point belonging to the target organ is higher than the sample weight of the sample pixel point not belonging to the target organ, and the sample weight of the sample pixel point belonging to the target organ is inversely related to the sample distance.
Thus, by setting the sample weight of sample pixel points belonging to the target organ higher than that of points not belonging to it, and making the former negatively correlated with the sample distance, the image segmentation model focuses during training on the sample pixel points belonging to the target organ and, among those, on the points closer to the organ surface, which raises the model's attention to the hard-to-segment parts.
Here, the sample weights are set as follows: for each sample pixel point belonging to the target organ, its distance is normalized to obtain a normalized value, and the sum of the difference between a first value and the normalized value, plus a second value, is taken as its sample weight, where the first value is not less than 1; the sample weight of each sample pixel point not belonging to the target organ is the second value.
Thus, the sample weight of every sample pixel point is obtained with only simple operations such as normalization and numerical addition and subtraction, which greatly reduces the computational complexity of the sample weights.
The target organ comprises a trunk section and several branch sections extending from it. During training of the image segmentation model, sample medical images located in a branch section are selected with a frequency higher than a preset threshold, and/or sample medical images located in the trunk section are selected with a frequency lower than the preset threshold.
Thus, hard-to-segment samples of the target organ are oversampled and easy-to-segment samples are undersampled during training, which balances the learning effect across different parts and helps ensure the segmentation accuracy of the image segmentation model on each of them.
The image segmentation model comprises an encoding network with several encoding layers connected in sequence and a decoding network with several decoding layers connected in sequence. Performing target segmentation on the sample medical image with the image segmentation model to obtain the prediction marks includes: taking both the feature map encoded by the last encoding layer and the feature maps decoded by each decoding layer except the last one as reference feature maps; decoding based on these reference feature maps to obtain a first decoding result; and fusing the first decoding result with a second decoding result to obtain a sample decoding result, where the second decoding result is output by the last decoding layer and the sample decoding result contains the prediction mark of each sample pixel point.
Thus, features of multiple levels are densely connected during target segmentation, which reduces the probability that information about small parts is lost through downsampling and helps improve segmentation accuracy.
Wherein the target organ comprises at least one of a trachea and a blood vessel; and/or the sample distance is the closest distance from the sample pixel point to the surface.
Thus, when the target organ is set to at least one of the trachea and blood vessels, the sample weights balance the sampling of tubes of different thicknesses, so that the model attends to the learning effect on tubes of different calibers during training, which helps improve the segmentation accuracy of different parts; and setting the sample distance to the closest distance from a sample pixel point to the surface helps improve the accuracy of the sample weights.
A second aspect of the present application provides an image segmentation method, including: acquiring a medical image of a target organ; performing target segmentation on the medical image by using an image segmentation model to obtain a target class to which each pixel point in the medical image belongs; wherein, the image segmentation model is obtained by utilizing the training method of the image segmentation model in the first aspect; and obtaining a segmentation result of the target organ based on the target category to which the pixel point belongs.
Thus, target segmentation is performed on the medical image of the target organ using an image segmentation model trained with the method of the first aspect, yielding the target category to which each pixel point belongs, and the segmentation result of the target organ is obtained from those categories; this helps balance the segmentation accuracy of different parts of the target organ during image segmentation.
Here, the medical image is a three-dimensional image composed of several two-dimensional images arranged in a stack, and obtaining the target category to which each pixel point in the medical image belongs includes: taking each two-dimensional image in turn as the current image, and combining the current image with its reference images to obtain a multi-channel image, where the number of image frames between a reference image and the current image is less than a preset number of frames; processing the multi-channel image with the image segmentation model to obtain the target category of each pixel point in the current image; and obtaining the target category of each pixel point in the medical image from the target categories obtained for each two-dimensional image.
Thus, for a three-dimensional medical image composed of stacked two-dimensional images, each two-dimensional image serves in turn as the current image and is combined with its reference images, which lie within the preset frame distance, into a multi-channel image; the image segmentation model processes this multi-channel image, so contextual image combinations serve as model input and the connectivity of the model output is strengthened.
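As an illustration, the following is a minimal sketch of assembling such a multi-channel input, assuming a slices-first volume array and a three-slice context window; the offsets and the helper name are hypothetical, since the text only requires the frame gap to be less than a preset number:

import numpy as np

def make_multichannel(volume: np.ndarray, idx: int,
                      offsets=(-1, 0, 1)) -> np.ndarray:
    """Stack the current slice with nearby reference slices into a
    multi-channel image (channels first); indices are clamped at the
    volume boundary so edge slices simply repeat."""
    last = volume.shape[0] - 1
    channels = [volume[min(max(idx + o, 0), last)] for o in offsets]
    return np.stack(channels, axis=0)  # shape: (len(offsets), H, W)

For example, make_multichannel(ct_volume, 40) would feed slice 40 together with slices 39 and 41 to the model as one three-channel image.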
Obtaining the segmentation result of the target organ based on the target categories of the pixel points includes: acquiring the connected domains composed of pixel points belonging to the target organ, and obtaining the segmentation result of the target organ based on the largest connected domain.
Thus, after the target categories are obtained, keeping only the largest of the connected domains formed by pixel points belonging to the target organ removes false-positive regions outside the organ and further improves segmentation accuracy.
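A minimal sketch of this post-processing follows, assuming a binary prediction mask and default (face-adjacent) connectivity; the function name is hypothetical:

import numpy as np
from scipy import ndimage

def keep_largest_component(pred_mask: np.ndarray) -> np.ndarray:
    """Keep only the largest connected domain of predicted organ pixels,
    removing false-positive regions outside the target organ."""
    mask = pred_mask.astype(bool)
    labeled, num = ndimage.label(mask)  # enumerate the connected domains
    if num == 0:
        return mask
    sizes = ndimage.sum(mask, labeled, range(1, num + 1))  # domain sizes
    return labeled == (int(np.argmax(sizes)) + 1)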
The third aspect of the present application provides a training apparatus for an image segmentation model, including: the device comprises a sample acquisition module, a sample segmentation module, a loss measurement module and a parameter adjustment module, wherein the sample acquisition module is used for acquiring a sample medical image of a target organ; the method comprises the steps that sample pixel points in a sample medical image are marked with sample marks and sample weights, the sample marks represent target categories to which the sample pixel points belong, and the sample weights are set based on sample distances from the sample pixel points to the surface of a target organ; the sample segmentation module is used for carrying out target segmentation on the sample medical image by using the image segmentation model to obtain a prediction mark of a sample pixel point; the prediction mark is at least used for representing the possibility that the sample pixel point is predicted to belong to the target category; the loss measurement module is used for carrying out difference measurement on the sample marks and the prediction marks based on the sample weight of the sample pixel points to obtain model loss; and the parameter adjusting module is used for adjusting the network parameters of the image segmentation model based on the model loss.
A fourth aspect of the present application provides an image segmentation apparatus, comprising: the device comprises an image acquisition module, a target segmentation module and a result acquisition module, wherein the image acquisition module is used for acquiring a medical image of a target organ; the target segmentation module is used for carrying out target segmentation on the medical image by utilizing the image segmentation model to obtain a target category to which each pixel point in the medical image belongs; wherein, the image segmentation model is obtained by utilizing the training device of the image segmentation model in the third aspect; and the result acquisition module is used for acquiring the segmentation result of the target organ based on the target category to which the pixel point belongs.
A fifth aspect of the present application provides an electronic device, which includes a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the method for training an image segmentation model in the first aspect or to implement the method for image segmentation in the second aspect.
A sixth aspect of the present application provides a computer-readable storage medium, on which program instructions are stored, which program instructions, when executed by a processor, implement the method for training an image segmentation model in the above first aspect, or implement the method for image segmentation in the above second aspect.
According to this scheme, a sample medical image of the target organ is acquired, with its sample pixel points labeled with sample marks (the target category each point belongs to) and sample weights (set from each point's sample distance to the organ surface), so that points of different categories, and points of the same category at different distances from the surface, receive different weights that differentiate them. On this basis, the image segmentation model performs target segmentation to obtain prediction marks representing at least the possibility that each sample pixel point belongs to the target category; the sample marks and prediction marks are compared based on the sample weights to obtain the model loss, and the network parameters are adjusted accordingly. Since the sample weights bias the predicted loss differently at different sample pixel points, the model learns both the hard-to-segment and the easy-to-segment parts of the target organ, and the segmentation accuracy of its different parts can be balanced during image segmentation.
Drawings
FIG. 1 is a schematic flowchart of an embodiment of a training method for an image segmentation model according to the present application;
FIG. 2 is a block diagram of an embodiment of an image segmentation model;
FIG. 3 is a schematic flowchart of an embodiment of an image segmentation method according to the present application;
FIG. 4 is a process diagram of an embodiment of the image segmentation method of the present application;
FIG. 5 is a block diagram of an embodiment of an apparatus for training an image segmentation model according to the present application;
FIG. 6 is a block diagram of an embodiment of an image segmentation apparatus according to the present application;
FIG. 7 is a block diagram of an embodiment of an electronic device of the present application;
FIG. 8 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The embodiments of the present application will be described in detail below with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter objects are in an "or" relationship. Further, the term "plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of a training method for an image segmentation model according to the present application. Specifically, the method may include the steps of:
step S11: a sample medical image of a target organ is acquired.
In the embodiment of the disclosure, sample pixel points in a sample medical image are marked with sample marks, and the sample marks represent target categories to which the sample pixel points belong. In particular, the target category may be included in several preset categories, and the several preset categories include the target organ. Illustratively, the plurality of preset categories may include a target organ and an image background, that is, the target category to which a sample pixel point in the sample medical image belongs is either the target organ or the image background, and so on in other cases, which is not illustrated herein.
In one implementation scenario, the target organ may be set according to actual application requirements. For example, in case a trachea needs to be segmented, the target organ may be set as a trachea; alternatively, in a case where the blood vessel needs to be segmented, the target organ may be set as the blood vessel, which is not limited herein.
In one implementation scenario, the sample mark may be expressed by a specific mark value: the mark value of a sample pixel point belonging to the target organ may be set to a first mark value, and that of a sample pixel point not belonging to the target organ to a second mark value. For example, the first mark value may be set to 1 and the second mark value to 0. Taking the trachea as the target organ, the mark value of a sample pixel point belonging to the trachea may be set to 1 and that of a sample pixel point not belonging to the trachea to 0. Other cases may be deduced by analogy and are not enumerated here.
In one implementation scenario, the sample medical image may include, but is not limited to, a CT image, etc., and is not limited thereto.
In the embodiment of the present disclosure, the sample pixel points in the sample medical image may further be labeled with sample weights of the sample pixel points, and the sample weights may be set based on a sample distance from the sample pixel points to the surface of the target organ. To further distinguish sample weights for both sample pixel points belonging to the target organ and sample pixel points not belonging to the target organ, the sample weights may be further set based on the sample markers and the aforementioned sample distances.
In an implementation scenario, the sample distance may specifically be a closest distance from a sample pixel point to a surface of the target organ. Specifically, the surface of the target organ may be regarded as a surface formed by a series of pixels, and for a certain sample pixel, if the distance D1 between one pixel and the sample pixel is not greater than the distance D2 between any other pixel and the sample pixel, the distance D1 may be regarded as the closest distance from the sample pixel to the surface of the target organ. Taking the target organ as the trachea as an example, the sample distance may be the closest distance from the sample pixel point to the surface of the trachea. Other cases may be analogized, and no one example is given here.
In one implementation scenario, as described above, the sample weight may be further set based on both the sample mark and the sample distance. Specifically, according to the sample marks, the sample weight of a sample pixel point belonging to the target organ is made higher than that of a sample pixel point not belonging to it: the weights of the latter may be uniformly set to a fixed value (e.g., 1), and the weights of the former, which must exceed that fixed value, are then set based on their sample distances. For example, with the weights of sample pixel points not belonging to the target organ uniformly set to 1, the weights of sample pixel points belonging to the target organ are all greater than 1; other cases are analogous. Further, based on the sample marks and sample distances, the weight of a sample pixel point belonging to the target organ may be made negatively correlated with its sample distance: the closer the point is to the organ surface, the higher its weight, and the farther away, the lower. Taking the trachea as an example, for a thick airway the edge pixel points receive larger weights and the central pixel points smaller ones; for a thin airway, whose wall is relatively thin, most of its sample pixel points receive larger weights, which balances the sampling of thick and thin airways. Other cases may be deduced by analogy and are not enumerated here.
In a specific implementation scenario, for each sample pixel point belonging to the target organ, the corresponding distance (e.g., the closest distance from the point to the organ surface) may be normalized to obtain a normalized value, and the sum of the difference between the first value and the normalized value, plus the second value, is taken as the point's sample weight; the first value may be set to not less than 1, and the sample weight of sample pixel points not belonging to the target organ may be set directly to the second value (i.e., the aforementioned fixed value). For example, if the normalized value of a sample pixel point is denoted d and both the first and second values are set to 1, then sample pixel points not belonging to the target organ uniformly receive weight 1, and a sample pixel point belonging to the target organ receives weight 1 - d + 1 (i.e., 2 - d). Other cases may be deduced by analogy and are not enumerated here.
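As an illustration, a minimal sketch of this weight computation for a binary organ mask follows; it assumes the sample distance is approximated with scipy's Euclidean distance transform and normalized by the per-image maximum distance, a normalization choice the text does not fix:

import numpy as np
from scipy import ndimage

def sample_weight_map(mask: np.ndarray) -> np.ndarray:
    """Per-pixel sample weights: the second value (1) outside the target
    organ, and 2 - d inside, where d is the normalized distance to the
    organ surface."""
    inside = mask.astype(bool)
    # Distance from each organ pixel to the nearest background pixel,
    # taken here as the closest distance to the organ surface.
    dist = ndimage.distance_transform_edt(inside)
    d = dist / dist.max() if dist.max() > 0 else dist  # normalized value
    weights = np.ones(mask.shape, dtype=np.float64)    # background weight: 1
    weights[inside] = 2.0 - d[inside]  # edge pixels near 2, centers lower
    return weights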
In an implementation scenario, in order to obtain the sample weight of each sample pixel point in the training process, a distance probability map may be generated based on the sample weight of each sample pixel point. Taking the sample medical image as a three-dimensional image as an example, the pixel value of the pixel point (i, j, k) in the distance probability map represents the sample weight of the sample pixel point (i, j, k) in the sample medical image. Other cases may be analogized, and no one example is given here.
Step S12: and carrying out target segmentation on the sample medical image by using the image segmentation model to obtain a prediction mark of a sample pixel point.
In an embodiment of the present disclosure, the prediction flag is at least used to represent the possibility that a sample pixel point is predicted to belong to the target category. In addition, the prediction flag may further indicate the possibility that the sample pixel point is predicted to belong to other categories, which is not limited herein. Still taking the example that the plurality of preset categories include the target organ and the image background, the prediction flag may indicate the possibility that the sample pixel point is predicted to belong to the target organ, and of course, the prediction flag may also indicate the possibility that the sample pixel point is predicted to belong to the target organ and the image background, respectively. Other cases may be analogized, and no one example is given here.
In one implementation scenario, the sample medical image may be a three-dimensional image, that is, the three-dimensional image may be regarded as being composed of a plurality of two-dimensional images arranged in a stack, and the acquired CT image may be directly used as the sample medical image. It should be noted that, in this case, the image segmentation model is a three-dimensional segmentation model, that is, the three-dimensional segmentation model may include a three-dimensional convolution kernel, and details of the three-dimensional convolution kernel may be referred to in the related art, which is not described herein again.
In one particular implementation scenario, the target organ may include a trunk section and several branch sections extending from the trunk section. Taking the trachea as an example, the main airway branches into the main bronchi, each bronchus gives rise to multiple segmental bronchi, and the airway tree comprises approximately 6-8 generations of branches from the main airway to the terminal bronchioles; generally speaking, the thinner the airway, the higher its generation and the greater the difficulty of segmentation. The case where the target organ is another organ such as a blood vessel is analogous and not enumerated here. On this basis, to improve the segmentation accuracy of the image segmentation model on hard-to-segment parts, sample medical images located in a branch section may be selected during training with a frequency higher than a preset threshold, and sample medical images located in the trunk section with a frequency lower than that threshold. Frequency here means the percentage of the number of times a sample medical image is used during training out of the total number of times all sample medical images are used. For example, with the preset threshold set to 50%, the selection frequency of sample medical images in a branch section may be set to 70% and that of images in the trunk section to 30%. In practice this can be tuned to the application: if segmenting the target organ in the branch sections is much harder than in the trunk section, the gap between the two frequencies may be set larger; if it is only slightly harder, the gap may be set smaller. The specific values of the frequencies are not limited here.
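A minimal sketch of such frequency-controlled sampling follows, assuming the sample images have been pre-sorted into branch-section and trunk-section pools and using the 70%/30% example frequencies above; the helper name and pool structure are hypothetical:

import random

def pick_training_image(branch_pool, trunk_pool, branch_freq=0.7):
    """Select a training image so that branch-section samples are used
    with frequency branch_freq (0.7 here) and trunk-section samples with
    frequency 1 - branch_freq, oversampling the hard-to-segment parts."""
    if random.random() < branch_freq:
        return random.choice(branch_pool)
    return random.choice(trunk_pool)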
In an implementation scenario, when target segmentation is performed directly on the three-dimensional image, the image may be segmented incorrectly because, in some anatomical planes (e.g., the coronal, sagittal, or transverse plane), structures similar in shape to the target organ are observed. To further improve accuracy, the embodiments of the present disclosure may separately train an image segmentation model dedicated to a specific plane. For example, an image segmentation model for segmenting coronal medical images, one for sagittal medical images, and one for transverse medical images may be trained, and the segmentation results of the three models fused to obtain the final result. In this case, when training the image segmentation model for a given plane, the sample medical image may be extracted from the three-dimensional image along that plane: along the coronal plane for the coronal model, along the sagittal plane for the sagittal model, and along the transverse plane for the transverse model.
In one particular implementation scenario, as previously described, a three-dimensional image may be viewed as being made up of a plurality of two-dimensional images arranged in a stack. Illustratively, taking a three-dimensional image with a resolution of 512 × 512 × 80 as an example, it may be regarded from the coronal view as 512 stacked two-dimensional images of 512 × 80; similarly, from the sagittal view as 512 stacked two-dimensional images of 512 × 80; and from the transverse view as 80 stacked two-dimensional images of 512 × 512. Other cases may be deduced by analogy and are not enumerated here.
In a specific implementation scenario, for the image segmentation model of any given plane, it is determined, in that plane, from which layer of two-dimensional image the target organ first appears, and that two-dimensional image is taken as the starting image; likewise, the layer in which the target organ last appears is determined and taken as the ending image. On this basis, one frame of two-dimensional image between the starting image and the ending image is selected in turn as the sample current image and combined with its sample reference images to obtain a sample multi-channel image, where the sample reference images also lie between the starting and ending images and the number of image frames between a sample reference image and the sample current image must be less than a preset number of frames (e.g., 4, 5, or 6). The image segmentation model can then perform target segmentation on the sample multi-channel image to obtain the prediction marks of the sample pixel points in the sample current image, and the selection step and subsequent steps are re-executed until every two-dimensional image between the starting image and the ending image has been selected. On this basis, the loss-measurement and parameter-adjustment steps below are performed to train the image segmentation model for that plane until training converges. When image segmentation models for the coronal, sagittal, and transverse planes are all required, the three models may be trained in parallel in this manner until convergence, finally yielding the three trained models.
In a specific implementation scenario, as mentioned above, the target organ includes a trunk section and several branch sections extending from it. Similarly to the foregoing, while training the image segmentation model for any plane, sample multi-channel images located in a branch section may be selected with a frequency above the preset threshold and those located in the trunk section with a frequency below it. For the specific meanings of frequency and the preset threshold, refer to the description above, which is not repeated here.
In one implementation scenario, please refer to fig. 2, which is a schematic diagram of an embodiment of an image segmentation model. As shown in fig. 2, the image segmentation model may include an encoding network comprising several encoding layers connected in sequence and a decoding network comprising several decoding layers connected in sequence. Downward slanted arrows indicate downsampling and upward slanted arrows indicate upsampling. Furthermore, skip connections (dashed lines in fig. 2) may be provided between encoding layers and decoding layers; alternatively, a further decoding layer may be arranged between an encoding layer and a decoding layer. Note that fig. 2 is only one implementation of the image segmentation model, and its specific structure is not limited here; see, e.g., U-Net, V-Net, and U-Net++, which are not described again. On this basis, during target segmentation, the feature map encoded by the last encoding layer and the feature maps decoded by all decoding layers except the last one are taken as reference feature maps, and decoding is performed based on these reference feature maps to obtain a first decoding result; a sample decoding result, containing the prediction mark of each sample pixel point, is then obtained by fusing the first decoding result with the second decoding result output by the last decoding layer. This dense connection of multiple levels reduces the possibility that information about fine bronchi is lost through downsampling and helps improve decoding accuracy.
In a specific implementation scenario, the resolutions of the reference feature maps may be adjusted so that all of them are the same. For example, each reference feature map before the last one may be adjusted to the resolution of the last reference feature map. On this basis, the resolution-aligned reference feature maps can be fused (e.g., by concatenation) and then decoded to obtain the first decoding result.
In a specific implementation scenario, after the first decoding result is obtained, it may be concatenated with the second decoding result, and a 1 × 1 convolution applied to the concatenation to obtain the sample decoding result.
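A minimal 2D sketch of this fusion head follows, assuming bilinear resizing for the resolution adjustment and illustrative channel sizes; the actual model may be three-dimensional, and the class and parameter names are hypothetical:

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseFusionHead(nn.Module):
    """Fuse the reference feature maps (last encoder map plus every decoder
    map except the last) into a first decoding result, then combine it with
    the last decoder's output (the second decoding result) via 1x1 conv."""

    def __init__(self, ref_channels, last_channels, mid_channels=64, num_classes=2):
        super().__init__()
        # Decode the concatenated reference maps into the first decoding result.
        self.decode = nn.Conv2d(sum(ref_channels), mid_channels, kernel_size=3, padding=1)
        # 1x1 convolution over the concatenated first and second decoding results.
        self.fuse = nn.Conv2d(mid_channels + last_channels, num_classes, kernel_size=1)

    def forward(self, ref_maps, second_decoding):
        # Adjust the earlier reference maps to the last reference map's resolution.
        size = ref_maps[-1].shape[-2:]
        refs = [F.interpolate(m, size=size, mode="bilinear", align_corners=False)
                for m in ref_maps]
        first_decoding = self.decode(torch.cat(refs, dim=1))
        # Upsample to the second decoding result's resolution before fusing
        # (an assumed step; the text leaves the exact alignment unspecified).
        first_decoding = F.interpolate(first_decoding, size=second_decoding.shape[-2:],
                                       mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([first_decoding, second_decoding], dim=1))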
In a specific implementation scenario, as described above, the plurality of preset categories may include a target organ and an image background, the sample decoding result may be a two-channel image, a pixel value of a pixel point in one of the channel images may represent a probability value that a corresponding sample pixel point in the sample medical image is predicted to belong to the target organ, and a pixel value of a pixel point in the other channel image may represent a probability value that a corresponding sample pixel point in the sample medical image is predicted to belong to the image background.
Step S13: and based on the sample weight of the sample pixel point, performing difference measurement on the sample mark and the prediction mark to obtain the model loss.
In one implementation scenario, the model loss may include a first loss, which may be particularly useful for measuring a region overlap ratio between the sample region and the prediction region of the target organ. Specifically, the sample flag and the prediction flag may be subjected to a region difference metric based on the sample weight, resulting in a first loss.
In one particular implementation scenario, the sample marks are represented by mark values, as described above. Further, the mark value of a sample pixel point belonging to the target organ may be a first mark value (e.g., 1), and that of a sample pixel point not belonging to the target organ a second mark value (e.g., 0). In addition, the prediction mark includes at least the probability value that the sample pixel point is predicted to belong to the target category, whose upper limit is the first mark value (e.g., 1) and whose lower limit is the second mark value (e.g., 0). For convenience of description, for the i-th sample pixel point in the sample medical image, its sample weight may be denoted Di, its mark value li, and its predicted probability of belonging to the target category pi.
In a specific implementation scenario, for each sample pixel point, a first product Di·li of its mark value and sample weight, a second product Di·pi of its probability value and sample weight, and a third product Di·pi·li of its mark value, probability value and sample weight may be obtained. On this basis, the first loss may be obtained from the ratio of twice the third sum to the reference sum, where the reference sum is the sum of the first sum and the second sum, and the ratio is negatively correlated with the first loss. Specifically, the first loss L1 can be expressed as:

L1 = 1 - 2·∑i∈V Di·pi·li / (∑i∈V Di·pi + ∑i∈V Di·li) …… (1)

In formula (1), Di·li is the first product, Di·pi the second product and Di·pi·li the third product; ∑i∈V Di·li is the first sum, ∑i∈V Di·pi the second sum and ∑i∈V Di·pi·li the third sum; ∑i∈V Di·pi + ∑i∈V Di·li is the reference sum, and V is the set of sample pixel points in the sample medical image. As formula (1) shows, the more accurate the image segmentation model's prediction marks, the higher the region overlap between the sample region and the prediction region and the smaller the first loss L1; conversely, the less accurate the prediction marks, the lower the overlap and the larger L1. Optimizing the parameters by minimizing the first loss L1 therefore improves the segmentation accuracy of the image segmentation model.
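A minimal sketch of formula (1) follows, assuming flattened tensors of probabilities p, mark values l and sample weights D for one image; the eps guard is an added assumption for numerical stability:

import torch

def first_loss(p: torch.Tensor, l: torch.Tensor, d: torch.Tensor,
               eps: float = 1e-6) -> torch.Tensor:
    """Weighted region-overlap loss of formula (1).
    p: probability values pi, l: mark values li (0 or 1),
    d: sample weights Di; all tensors of the same shape."""
    third_sum = (d * p * l).sum()                  # sum of Di*pi*li
    reference_sum = (d * p).sum() + (d * l).sum()  # first sum + second sum
    return 1.0 - 2.0 * third_sum / (reference_sum + eps)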
In one implementation scenario, the model penalty may include a second penalty, which is specifically used to measure the difference in data distribution between the sample signature and the prediction signature. Specifically, the sample flag and the prediction flag may be subjected to a distribution difference metric based on the sample weight, resulting in a second loss.
In a specific implementation scenario, the specific setting manner of the sample flag and the prediction flag may refer to the related description in the foregoing disclosed embodiment, and details are not repeated here.
In a specific implementation scenario, for each sample pixel point, a logarithm value of a probability value of the sample pixel point may be obtained, a fourth product of a label value of the sample pixel point and a sample weight and/or a fifth product of the logarithm value and the fourth product may be obtained, and on the basis, a fourth sum value obtained by summing the fifth products corresponding to each sample pixel point is obtained to obtain a second loss, where the fourth sum value is negatively correlated with the second loss. Specifically, the second loss L2Can be expressed as:
L2=-∑i∈VDili logpi……(2)
In the above formula (2), $\log p_i$ represents the logarithm of the probability value, $D_i l_i$ the fourth product, $D_i l_i \log p_i$ the fifth product, and $\sum_{i \in V} D_i l_i \log p_i$ the fourth sum; $V$ denotes the set of sample pixel points in the sample medical image. As can be seen from formula (2), the more accurate the prediction markers of the image segmentation model are, the smaller the data distribution difference between the sample markers and the prediction markers, and the smaller the second loss $L_2$; conversely, the less accurate the prediction markers, the larger the data distribution difference between the sample markers and the prediction markers, and the larger the second loss $L_2$. Therefore, by minimizing the second loss $L_2$ for parameter optimization, the network precision of the image segmentation model can be improved.
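Under the same assumptions as the sketch above (tensors probs, labels and weights, plus a small eps guarding against log(0)), formula (2) could be written as:

```python
import torch

def second_loss(probs, labels, weights, eps=1e-6):
    """Distribution-difference loss following formula (2)."""
    # fourth sum: sum over all sample pixel points of D_i * l_i * log(p_i)
    return -(weights * labels * torch.log(probs + eps)).sum()
```

This is a sample-weighted cross-entropy-style term, which matches the data-distribution interpretation above.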
In one implementation scenario, where the model loss includes a first loss and a second loss, both the first loss and the second loss may be fused to yield the model loss. It should be noted that the meaning and the calculation process of the first loss and the second loss may refer to the foregoing related description, and are not described herein again. The model loss can be obtained by weighted fusion of the first loss and the second loss, and can be specifically expressed as:
$$L = \beta_1 L_1 + \beta_2 L_2 \quad ……(3)$$
In the above formula (3), $L_1$ and $L_2$ represent the first loss and the second loss, respectively, and $\beta_1$ and $\beta_2$ represent the first weight of the first loss and the second weight of the second loss, respectively. The first weight $\beta_1$ and the second weight $\beta_2$ can be set according to practical application requirements. For example, if the region coincidence is to be emphasized in the loss measurement, the first weight $\beta_1$ may be set greater than the second weight $\beta_2$; if the data distribution difference is to be emphasized, the first weight $\beta_1$ may be set less than the second weight $\beta_2$; and if the region coincidence and the data distribution difference are equally important, the first weight $\beta_1$ may be set equal to the second weight $\beta_2$. This is not limited herein.
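A minimal sketch of the weighted fusion in formula (3), reusing the two functions above; the default values of beta1 and beta2 are placeholders, since the patent leaves the weights to the practical application:

```python
def model_loss(probs, labels, weights, beta1=0.5, beta2=0.5):
    """Model loss as the weighted fusion of the first and second losses."""
    return (beta1 * first_loss(probs, labels, weights)
            + beta2 * second_loss(probs, labels, weights))
```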
Step S14: network parameters of the image segmentation model are adjusted based on the model loss.
Specifically, the network parameters of the image segmentation model may be adjusted based on the model loss by adopting an optimization approach such as gradient descent. The specific adjustment process of the network parameters may refer to the technical details of optimization approaches such as gradient descent and is not described here again.
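As a rough illustration of this step, one gradient-descent iteration in PyTorch might look as follows; model, loader and the learning rate are assumptions of the sketch:

```python
import torch

# model: any image segmentation network producing per-pixel probabilities
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

for images, labels, weights in loader:       # one batch of sample medical images
    probs = model(images)                    # prediction markers (probabilities)
    loss = model_loss(probs, labels, weights)
    optimizer.zero_grad()
    loss.backward()                          # gradients of the model loss
    optimizer.step()                         # adjust the network parameters
```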
According to the above scheme, a sample medical image of the target organ is acquired, and the sample pixel points in the sample medical image are labeled with sample markers and sample weights, where the sample markers represent the target categories to which the sample pixel points belong and the sample weights are set based on the sample distances from the sample pixel points to the surface of the target organ. That is, sample pixel points of different categories have different sample weights, and sample pixel points of the same category are further distinguished according to their distances to the surface of the target organ, so that the sample pixel points can be differentiated through the sample weights. On this basis, the image segmentation model is used to perform target segmentation on the sample medical image to obtain the prediction markers of the sample pixel points, where a prediction marker is at least used to represent the possibility that a sample pixel point is predicted to belong to the target category; a difference measurement is performed on the sample markers and the prediction markers based on the sample weights of the sample pixel points to obtain the model loss, and the network parameters of the image segmentation model are adjusted based on the model loss. Since the sample pixel points are differentiated through the sample weights, the degree to which different sample pixel points bias the loss during the loss measurement is controlled by their sample weights, which helps the model learn both the parts of the target organ that are difficult to segment and the parts that are easy to segment, so that the segmentation precision of different parts of the target organ can be taken into account in the image segmentation process.
Referring to fig. 3, fig. 3 is a flowchart illustrating an embodiment of an image segmentation method according to the present application.
Specifically, the method may include the steps of:
step S31: a medical image of a target organ is acquired.
In particular, the medical image may include, but is not limited to, a CT image or the like, and furthermore the target organ may include, but is not limited to: trachea, blood vessels, etc., without limitation.
Step S32: and performing target segmentation on the medical image by using the image segmentation model to obtain the target category to which each pixel point in the medical image belongs.
The image segmentation model in the embodiment of the present disclosure is obtained by the steps in any of the foregoing embodiments of the training method of the image segmentation model. For the specific training process, reference may be made to the foregoing disclosed embodiments, which are not described here again. In addition, the target category may be one of a plurality of preset categories. For the specific meanings of the target organ, the preset categories and the target category, reference may also be made to the related description in the foregoing disclosed embodiments, which is not repeated here.
In an implementation scenario, as described in the foregoing disclosure, the medical image is a three-dimensional image composed of a plurality of two-dimensional images arranged in a stacked manner. In order to improve the use of context during target segmentation, each two-dimensional image may in turn be used as the current image, and the current image may be combined with a reference image of the current image to obtain a multi-channel image, where the number of image frames spaced between the reference image and the current image is less than a preset number of frames (e.g., 4 frames, 5 frames, 6 frames, etc.). On this basis, the multi-channel image can be processed by the image segmentation model to obtain the target categories to which the pixel points in the current image belong. The above steps are repeated until the target segmentation of each two-dimensional image is completed, and the target category of each pixel point in the medical image is obtained based on the target categories of the pixel points in the two-dimensional images.
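A possible reading of this combination step is sketched below; using exactly two symmetric reference slices is an assumption of the sketch, since the embodiment only constrains the spacing between the reference image and the current image:

```python
import numpy as np

def build_multichannel(volume, idx, offset=2):
    """Combine the current slice with reference slices `offset` frames away."""
    lo = max(idx - offset, 0)                    # reference slice below the current one
    hi = min(idx + offset, volume.shape[0] - 1)  # reference slice above the current one
    return np.stack([volume[lo], volume[idx], volume[hi]], axis=0)  # (3, H, W)
```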
In an implementation scenario, as described in the foregoing disclosure, directly performing target segmentation on the three-dimensional image may lead to mis-segmentation because, in some orientation planes (e.g., the coronal plane, sagittal plane or transverse plane), targets similar in shape to the target organ are observed. To further improve the segmentation accuracy, image segmentation models of different orientation planes may each be used to perform target segmentation on the medical image to obtain the target category to which each pixel point in the medical image belongs, and on this basis, the target category to which each pixel point finally belongs may be determined based on the target categories detected by the image segmentation models of the different orientation planes. The specific training manner of the image segmentation models of the different orientation planes may refer to the related description in the foregoing embodiments and is not described here again. It should be noted that the image segmentation models of different orientation planes may have the same network structure or different network structures. Illustratively, the image segmentation models of all orientation planes may be set to U-Net++; or the image segmentation model of the transverse plane may be set to U-Net++, that of the sagittal plane to U-Net, and that of the coronal plane to V-Net, which is not limited herein.
In a specific implementation scenario, in the process of performing target segmentation on the medical image with the image segmentation models of different orientation planes, the aforementioned manner of combining multi-channel images may also be adopted to improve the use of context. Specifically, for the image segmentation model of a certain orientation plane, the three-dimensional image may be sliced along that orientation plane to obtain a plurality of two-dimensional images arranged in a stacked manner, and then the above-mentioned step of using each two-dimensional image as the current image and the subsequent steps may be performed. Referring to fig. 4, fig. 4 is a schematic process diagram of an embodiment of the image segmentation method of the present application. As shown in fig. 4, an image segmentation model of the coronal plane, an image segmentation model of the sagittal plane and an image segmentation model of the transverse plane may be trained, and the three-dimensional image may be sliced along the coronal plane, the sagittal plane and the transverse plane, respectively, to obtain a plurality of two-dimensional images stacked in the direction perpendicular to the coronal plane, a plurality of two-dimensional images stacked in the direction perpendicular to the sagittal plane, and a plurality of two-dimensional images stacked in the direction perpendicular to the transverse plane. On this basis, the step of using each two-dimensional image as the current image and the subsequent steps are performed for each orientation plane, and the resulting multi-channel images are sent to the image segmentation model of the corresponding orientation plane for target segmentation.
In a specific implementation scenario, taking the case where the plurality of preset categories include the target organ and the image background as an example, after the target categories to which the pixel points in the medical image belong are obtained by the image segmentation models of the respective orientation planes, a category decision may be made for each pixel point: if the pixel point is detected as belonging to the target organ by the image segmentation model of any orientation plane, it may be determined that the pixel point belongs to the target organ; otherwise, if the pixel point is detected as belonging to the image background by the image segmentation models of all orientation planes, it may be determined that the pixel point belongs to the image background. In this way, the final target category of each pixel point is obtained. With continued reference to fig. 4, taking the target organ as the trachea as an example, the image segmentation models of the respective orientation planes each detect a trachea mask, and the union of these trachea masks may then be taken to obtain the final trachea mask. It should be noted that a pixel point belonging to the trachea in the final trachea mask indicates that the pixel point is finally determined as belonging to the trachea, whereas a pixel point not belonging to the trachea in the final trachea mask indicates that the pixel point is finally determined as not belonging to the trachea. Other cases can be deduced by analogy and are not enumerated here.
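The union of the per-plane masks can be taken with a simple element-wise OR; the mask names below are assumptions for the sketch:

```python
import numpy as np

# mask_transverse, mask_sagittal, mask_coronal: boolean arrays of the same shape,
# one per orientation plane; a voxel is kept if any model marks it as trachea
final_mask = mask_transverse | mask_sagittal | mask_coronal
```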
In an implementation scenario, as described in the foregoing disclosed embodiment, the image segmentation model may include an encoding network and a decoding network, where the encoding network includes several encoding layers connected in sequence and the decoding network includes several decoding layers connected in sequence. In order to reduce the possibility that information about the bronchioles is lost due to downsampling, the feature map encoded by the last encoding layer and the feature maps decoded by the decoding layers other than the last decoding layer may be used as reference feature maps. On this basis, decoding may be performed based on the reference feature maps to obtain a first decoding result, and the first decoding result and a second decoding result may be fused to obtain the final decoding result, where the second decoding result is output by the last decoding layer and the final decoding result includes the target category to which each pixel point belongs. The specific process may refer to the related description in the foregoing disclosed embodiment and is not described in detail here. In the case where the target organ includes a trunk section, such as a trachea or a blood vessel, and a plurality of branch sections extending from the trunk section, this dense connection of multi-level features can reduce the possibility that information about the bronchioles is lost due to downsampling, and can greatly increase the detected length of the branch sections.
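The patent does not fix the fusion operators, so the following is only one plausible realization of the dense multi-level connection, with 1x1 convolutions and bilinear upsampling chosen for the sketch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseDecodeHead(nn.Module):
    """Fuse the reference feature maps with the last decoding layer's output."""
    def __init__(self, ref_channels, num_classes):
        super().__init__()
        # project the concatenated reference feature maps to class logits
        self.project = nn.Conv2d(sum(ref_channels), num_classes, kernel_size=1)
        # merge the first decoding result with the second decoding result
        self.fuse = nn.Conv2d(2 * num_classes, num_classes, kernel_size=1)

    def forward(self, reference_maps, second_result):
        # second_result: output of the last decoding layer,
        # assumed here to have num_classes channels
        size = second_result.shape[2:]
        ups = [F.interpolate(m, size=size, mode="bilinear", align_corners=False)
               for m in reference_maps]
        first_result = self.project(torch.cat(ups, dim=1))  # first decoding result
        return self.fuse(torch.cat([first_result, second_result], dim=1))
```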
Step S33: and obtaining a segmentation result of the target organ based on the target category to which the pixel point belongs.
In an implementation scenario, a connected domain composed of pixel points belonging to a target organ can be directly obtained as a segmentation result of the target organ in a medical image.
In an implementation scenario, in order to further improve the segmentation accuracy, a plurality of connected domains formed by pixel points belonging to a target organ may be obtained first, and based on the maximum connected domain, a segmentation result of the target organ is obtained, for example, the maximum connected domain may be directly used as a segmentation result of the target organ in a medical image, so as to remove a false positive region that may exist.
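One common way to keep only the maximum connected domain is via scipy's labeling routines, as in the hedged sketch below:

```python
import numpy as np
from scipy import ndimage

def largest_connected_component(mask):
    """Keep only the largest connected domain of a binary organ mask."""
    labeled, num = ndimage.label(mask)        # label the connected domains
    if num == 0:
        return mask                           # no organ pixel points found
    sizes = ndimage.sum(mask, labeled, index=range(1, num + 1))
    return labeled == (int(np.argmax(sizes)) + 1)
```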
In an implementation scenario, taking the target organ as a trachea as an example, after obtaining the segmentation result of the trachea, the lung lobes and the lung segments in the medical image may be segmented further based on prior information of the segmentation result of the trachea, and the position of a lesion (e.g., a lung nodule, etc.) in the medical image may be detected to further assist a physician.
According to the above scheme, target segmentation is performed on the medical image of the target organ using the image segmentation model trained by the steps in the foregoing embodiments of the training method, so as to obtain the target category to which each pixel point in the medical image belongs, and the segmentation result of the target organ is obtained based on the target categories to which the pixel points belong, so that the segmentation precision of different parts of the target organ can be taken into account in the image segmentation process.
Referring to fig. 5, fig. 5 is a block diagram illustrating an embodiment of a training apparatus 50 for image segmentation models according to the present application. The training device 50 for the image segmentation model includes: the system comprises a sample acquisition module 51, a sample segmentation module 52, a loss measurement module 53 and a parameter adjustment module 54, wherein the sample acquisition module 51 is used for acquiring a sample medical image of a target organ; the method comprises the steps that sample pixel points in a sample medical image are marked with sample marks and sample weights, the sample marks represent target categories to which the sample pixel points belong, and the sample weights are set based on sample distances from the sample pixel points to the surface of a target organ; the sample segmentation module 52 is configured to perform target segmentation on the sample medical image by using the image segmentation model to obtain a prediction flag of a sample pixel point; the prediction mark is at least used for representing the possibility that the sample pixel point is predicted to belong to the target category; a loss measurement module 53, configured to perform difference measurement on the sample flag and the prediction flag based on the sample weight of the sample pixel point, so as to obtain a model loss; and a parameter adjusting module 54, configured to adjust a network parameter of the image segmentation model based on the model loss.
According to the above scheme, the sample pixel points are differentiated through the sample weights, and the degree to which different sample pixel points bias the loss is controlled by their sample weights, which helps the model learn both the parts of the target organ that are difficult to segment and the parts that are easy to segment, so that the segmentation precision of different parts of the target organ can be taken into account in the image segmentation process.
In some disclosed embodiments, the loss measurement module 53 includes a first measurement submodule configured to perform a region difference measurement on the sample markers and the prediction markers based on the sample weights to obtain a first loss, a second measurement submodule configured to perform a distribution difference measurement on the sample markers and the prediction markers based on the sample weights to obtain a second loss, and a loss obtaining submodule configured to obtain the model loss based on the first loss and/or the second loss, where the first loss is used to measure the region coincidence between the sample region and the prediction region of the target organ, the sample region being the actually labeled region of the target organ and the prediction region being the model-predicted region of the target organ, and the second loss is used to measure the data distribution difference between the sample markers and the prediction markers.
In some disclosed embodiments, the first measurement submodule includes a first weighting unit configured to, for each sample pixel point, obtain a first product of the marker value and the sample weight of the sample pixel point, obtain a second product of the probability value and the sample weight of the sample pixel point, and obtain a third product of the marker value, the probability value and the sample weight of the sample pixel point; a first summation unit configured to sum the first products corresponding to the sample pixel points to obtain a first sum, sum the second products to obtain a second sum, and sum the third products to obtain a third sum; and a ratio acquisition unit configured to obtain the first loss based on the ratio of twice the third sum to a reference sum, where the reference sum is the sum of the first sum and the second sum, and the ratio is negatively correlated with the first loss.
In some disclosed embodiments, the second measurement submodule includes a second weighting unit configured to, for each sample pixel point, obtain a logarithm of the probability value of the sample pixel point, obtain a fourth product of the marker value and the sample weight of the sample pixel point, and obtain a fifth product of the logarithm and the fourth product; and a second summation unit configured to obtain the second loss based on a fourth sum obtained by summing the fifth products corresponding to the sample pixel points, where the fourth sum is negatively correlated with the second loss.
In some disclosed embodiments, the sample weight of a sample pixel belonging to the target organ is higher than the sample weight of a sample pixel not belonging to the target organ, and the sample weight of a sample pixel belonging to the target organ is inversely related to the sample distance.
In some disclosed embodiments, the sample acquisition module 51 includes a normalization submodule configured to normalize the sample distance of each sample pixel point belonging to the target organ to obtain a normalized value, and a numerical operation submodule configured to use the sum of the difference between a first value and the normalized value and a second value as the sample weight of the sample pixel point belonging to the target organ, where the first value is not less than 1 and the sample weight of a sample pixel point not belonging to the target organ is the second value.
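Assuming the sample distance is taken as a Euclidean distance transform inside the organ mask, the weight setting could be sketched as follows; the concrete first and second values (2.0 and 1.0) are placeholders, the patent only requiring the first value to be not less than 1:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def sample_weights(organ_mask, first_value=2.0, second_value=1.0):
    """Distance-based sample weights for a boolean organ mask."""
    # distance of each organ pixel point to the nearest background pixel (the surface)
    dist = distance_transform_edt(organ_mask)
    norm = dist / (dist.max() + 1e-6)         # normalized value in [0, 1]
    weights = np.full(organ_mask.shape, second_value, dtype=np.float32)
    # organ pixel points: (first value - normalized value) + second value
    weights[organ_mask] = (first_value - norm[organ_mask]) + second_value
    return weights
```

Pixel points deep inside the organ thus receive lower weights than those near its surface, consistent with the negative correlation stated above.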
In some disclosed embodiments, the target organ includes a trunk section and several branch sections extending from the trunk section; in the process of training the image segmentation model, the frequency at which sample medical images located in the branch sections are selected is higher than a preset threshold, and/or the frequency at which sample medical images located in the trunk section are selected is lower than the preset threshold.
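One way to realize such selection frequencies is weighted sampling over the training set; the flags and the 3:1 ratio below are assumptions of the sketch:

```python
from torch.utils.data import WeightedRandomSampler

# sample_is_branch: one boolean flag per sample medical image, assumed known
# from the annotations; branch-section samples are drawn more often
draw_weights = [3.0 if flag else 1.0 for flag in sample_is_branch]
sampler = WeightedRandomSampler(draw_weights, num_samples=len(draw_weights),
                                replacement=True)
```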
In some disclosed embodiments, the image segmentation model comprises an encoding network and a decoding network, the encoding network comprising several encoding layers connected in sequence and the decoding network comprising several decoding layers connected in sequence. The sample segmentation module 52 includes a feature map acquisition submodule configured to use the feature map encoded by the last encoding layer and the feature maps decoded by the decoding layers other than the last decoding layer as reference feature maps; a decoding submodule configured to perform decoding based on the reference feature maps to obtain a first decoding result; and a fusion submodule configured to fuse the first decoding result and a second decoding result to obtain a sample decoding result, where the second decoding result is output by the last decoding layer and the sample decoding result includes the prediction marker of each sample pixel point.
In some disclosed embodiments, the target organ comprises at least one of a trachea, a blood vessel; and/or the sample distance is the closest distance from the sample pixel point to the surface.
Referring to fig. 6, fig. 6 is a schematic diagram of an embodiment of an image segmentation apparatus 60 according to the present application. The image segmentation apparatus 60 includes an image acquisition module 61, a target segmentation module 62 and a result acquisition module 63. The image acquisition module 61 is used for acquiring a medical image of a target organ; the target segmentation module 62 is used for performing target segmentation on the medical image with an image segmentation model to obtain the target category to which each pixel point in the medical image belongs, where the image segmentation model is obtained by the training apparatus of the image segmentation model in the third aspect; and the result acquisition module 63 is used for obtaining the segmentation result of the target organ based on the target categories to which the pixel points belong.
According to the above scheme, target segmentation is performed on the medical image of the target organ using the image segmentation model trained by the training apparatus in the foregoing embodiment, so as to obtain the target category to which each pixel point in the medical image belongs, and the segmentation result of the target organ is obtained based on the target categories to which the pixel points belong, so that the segmentation precision of different parts of the target organ can be taken into account in the image segmentation process.
In some disclosed embodiments, the medical image is a three-dimensional image, the three-dimensional image is composed of a plurality of two-dimensional images arranged in a stacked manner, and the target segmentation module 62 includes an image combination sub-module for respectively using each two-dimensional image as a current image and combining the current image with a reference image of the current image to obtain a multi-channel image; the number of image frames between the reference image and the current image is less than the preset number of frames; the target segmentation module 62 comprises an image processing submodule, and is configured to process the multi-channel image by using the image segmentation model to obtain a target category to which a pixel point in the current image belongs; the target segmentation module 62 includes a category determination submodule configured to obtain a target category to which each pixel point in the medical image belongs based on the target category to which the pixel point in each two-dimensional image belongs.
In some disclosed embodiments, the result obtaining module 63 includes a connected component obtaining submodule, configured to obtain a plurality of connected components formed by pixel points belonging to the target organ; the result obtaining module 63 includes a connected component screening submodule, configured to obtain a segmentation result of the target organ based on the largest connected component.
Referring to fig. 7, fig. 7 is a schematic diagram of a frame of an embodiment of an electronic device 70 according to the present application. The electronic device 70 comprises a memory 71 and a processor 72 coupled to each other, and the processor 72 is configured to execute program instructions stored in the memory 71 to implement the steps of any of the above-described embodiments of the image segmentation model training method, or to implement the steps of any of the above-described embodiments of the image segmentation method. In one particular implementation scenario, the electronic device 70 may include, but is not limited to: a microcomputer, a server, and the electronic device 70 may also include a mobile device such as a notebook computer, a tablet computer, and the like, which is not limited herein.
In particular, the processor 72 is configured to control itself and the memory 71 to implement the steps of any of the above-described embodiments of the training method of the image segmentation model, or to implement the steps of any of the above-described embodiments of the image segmentation method. The processor 72 may also be referred to as a CPU (Central Processing Unit). The processor 72 may be an integrated circuit chip having signal processing capabilities. The processor 72 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. Additionally, the processor 72 may be jointly implemented by a plurality of integrated circuit chips.
According to the above scheme, the sample pixel points are differentiated through the sample weights, and the degree to which different sample pixel points bias the loss is controlled by their sample weights, which helps the model learn both the parts of the target organ that are difficult to segment and the parts that are easy to segment, so that the segmentation precision of different parts of the target organ can be taken into account in the image segmentation process.
Referring to fig. 8, fig. 8 is a block diagram illustrating an embodiment of a computer readable storage medium 80 according to the present application. The computer readable storage medium 80 stores program instructions 801 that can be executed by the processor, the program instructions 801 being for implementing the steps of any of the above-described embodiments of the image segmentation model training method, or implementing the steps of any of the above-described embodiments of the image segmentation method.
According to the above scheme, the sample pixel points are differentiated through the sample weights, and the degree to which different sample pixel points bias the loss is controlled by their sample weights, which helps the model learn both the parts of the target organ that are difficult to segment and the parts that are easy to segment, so that the segmentation precision of different parts of the target organ can be taken into account in the image segmentation process.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
If the technical scheme of the application relates to personal information, a product applying the technical scheme of the application clearly informs personal information processing rules before processing the personal information, and obtains personal independent consent. If the technical scheme of the application relates to sensitive personal information, a product applying the technical scheme of the application obtains individual consent before processing the sensitive personal information, and simultaneously meets the requirement of 'express consent'. For example, at a personal information collection device such as a camera, a clear and significant identifier is set to inform that the personal information collection range is entered, the personal information is collected, and if the person voluntarily enters the collection range, the person is regarded as agreeing to collect the personal information; or on the device for processing the personal information, under the condition of informing the personal information processing rule by using obvious identification/information, obtaining personal authorization by modes of popping window information or asking a person to upload personal information of the person by himself, and the like; the personal information processing rule may include information such as a personal information processor, a personal information processing purpose, a processing method, and a type of personal information to be processed.

Claims (16)

1. A training method of an image segmentation model is characterized by comprising the following steps:
acquiring a sample medical image of a target organ; wherein sample pixel points in the sample medical image are labeled with sample labels and sample weights, the sample labels represent target categories to which the sample pixel points belong, and the sample weights are set based on sample distances from the sample pixel points to the surface of the target organ;
performing target segmentation on the sample medical image by using an image segmentation model to obtain a prediction mark of the sample pixel point; wherein the prediction flag is at least used to represent a likelihood that the sample pixel point is predicted to belong to the target class;
based on the sample weight of the sample pixel point, performing difference measurement on the sample mark and the prediction mark to obtain model loss;
adjusting network parameters of the image segmentation model based on the model loss.
2. The method of claim 1, wherein the performing a difference metric on the sample label and the prediction label based on the sample weights of the sample pixel points to obtain a model loss comprises:
based on the sample weight, performing regional difference measurement on the sample mark and the prediction mark to obtain a first loss, and/or based on the sample weight, performing distribution difference measurement on the sample mark and the prediction mark to obtain a second loss;
obtaining the model loss based on the first loss and/or the second loss;
wherein the first loss is used to measure a region coincidence between a sample region and a prediction region of the target organ, the sample region is a region actually labeled by the target organ, the prediction region is a region predicted by the model of the target organ, and the second loss is used to measure a data distribution difference between the sample markers and the prediction markers.
3. The method of claim 2, wherein the sample labels are represented by label values and the prediction labels comprise at least a probability value that the sample pixel is predicted to belong to the target class; performing a regional difference metric on the sample label and the prediction label based on the sample weight to obtain a first loss, including:
for each sample pixel point, obtaining a first product of the label value and the sample weight of the sample pixel point, obtaining a second product of the probability value and the sample weight of the sample pixel point, and obtaining a third product of the label value, the probability value and the sample weight of the sample pixel point;
summing first products corresponding to each sample pixel point to obtain a first sum value, summing second products corresponding to each sample pixel point to obtain a second sum value, and summing third products corresponding to each sample pixel point to obtain a third sum value;
deriving the first loss based on a ratio of twice the third sum to a reference sum; wherein the reference sum is a sum of the first sum and the second sum, and the ratio is inversely related to the first loss.
4. The method of claim 2, wherein the sample label is represented by a label value and the prediction label comprises at least a probability value that the sample pixel is predicted to belong to the target class; the performing a distribution difference metric on the sample label and the prediction label based on the sample weight to obtain a second loss includes:
for each sample pixel point, obtaining a logarithm value of a probability value of the sample pixel point, obtaining a fourth product of the label value of the sample pixel point and the sample weight, and obtaining a fifth product of the logarithm value and the fourth product;
obtaining the second loss based on a fourth sum value obtained by summing fifth products corresponding to the sample pixel points; wherein the fourth sum is inversely related to the second loss.
5. The method of any one of claims 1 to 4, wherein the sample weights of the sample pixels belonging to the target organ are higher than the sample weights of the sample pixels not belonging to the target organ, and wherein the sample weights of the sample pixels belonging to the target organ are inversely related to the sample distance.
6. The method of claim 5, wherein the step of setting the sample weights comprises:
normalizing the distance of each sample pixel point belonging to the target organ to obtain a normalized value, and taking the sum of the difference value of the first numerical value and the normalized value and the second numerical value as the sample weight of the sample pixel point belonging to the target organ;
and the first numerical value is not less than 1, and the sample weight of the sample pixel point which does not belong to the target organ is the second numerical value.
7. The method of any one of claims 1 to 6, wherein the target organ comprises a trunk section and a plurality of branch sections extending from the trunk section;
wherein, in the process of training the image segmentation model, the frequency of the sample medical image located in the branch segment is selected to be higher than a preset threshold, and/or the frequency of the sample medical image located in the trunk segment is selected to be lower than a preset threshold.
8. The method according to any one of claims 1 to 7, wherein the image segmentation model comprises an encoding network and a decoding network, the encoding network comprising a number of encoding layers connected in sequence, and the decoding network comprising a number of decoding layers connected in sequence; the target segmentation is performed on the sample medical image by using the image segmentation model to obtain the prediction mark of the sample pixel point, and the method comprises the following steps:
taking a feature map obtained by coding the last layer of coding layer and feature maps obtained by respectively decoding the decoding layers except the last layer of decoding layer as reference feature maps;
decoding is carried out on the basis of the reference characteristic graphs to obtain a first decoding result;
fusing the first decoding result and the second decoding result to obtain a sample decoding result; and outputting the second decoding result by the last decoding layer, wherein the sample decoding result comprises the prediction mark of each sample pixel.
9. The method of any one of claims 1 to 8, wherein the target organ comprises at least one of a trachea, a blood vessel;
and/or the sample distance is the closest distance from the sample pixel point to the surface.
10. An image segmentation method, comprising:
acquiring a medical image of a target organ;
performing target segmentation on the medical image by using an image segmentation model to obtain a target category to which each pixel point in the medical image belongs; wherein the image segmentation model is obtained by using the training method of the image segmentation model according to any one of claims 1 to 9;
and obtaining a segmentation result of the target organ based on the target category to which the pixel point belongs.
11. The method according to claim 10, wherein the medical image is a three-dimensional image, the three-dimensional image is composed of a plurality of two-dimensional images arranged in a stacked manner, and the target segmentation of the medical image by using an image segmentation model to obtain the target class to which each pixel point in the medical image belongs comprises:
respectively taking each two-dimensional image as a current image, and combining the current image with a reference image of the current image to obtain a multi-channel image; the number of image frames spaced between the reference image and the current image is less than a preset number of frames;
processing the multi-channel image by using the image segmentation model to obtain a target category to which the pixel point belongs in the current image;
and obtaining the target category of each pixel point in the medical image based on the target category of the pixel point in each two-dimensional image.
12. The method according to claim 10, wherein obtaining the segmentation result of the target organ based on the target category to which the pixel belongs comprises:
acquiring a plurality of connected domains consisting of pixel points belonging to the target organ;
and obtaining a segmentation result of the target organ based on the maximum connected domain.
13. An apparatus for training an image segmentation model, comprising:
a sample acquisition module for acquiring a sample medical image of a target organ; wherein sample pixel points in the sample medical image are labeled with sample labels and sample weights, the sample labels represent target categories to which the sample pixel points belong, and the sample weights are based on sample distances from the sample pixel points to the surface of the target organ;
the sample segmentation module is used for carrying out target segmentation on the sample medical image by utilizing an image segmentation model to obtain a prediction mark of the sample pixel point; wherein the prediction flag is at least used to represent a likelihood that the sample pixel point is predicted to belong to the target class;
the loss measurement module is used for carrying out difference measurement on the sample marks and the prediction marks based on the sample weight of the sample pixel points to obtain model loss;
and the parameter adjusting module is used for adjusting the network parameters of the image segmentation model based on the model loss.
14. An image segmentation apparatus, comprising:
an image acquisition module for acquiring a medical image of a target organ;
the target segmentation module is used for carrying out target segmentation on the medical image by utilizing an image segmentation model to obtain a target category to which each pixel point in the medical image belongs; wherein the image segmentation model is obtained by using the training device of the image segmentation model according to claim 13;
and the result acquisition module is used for acquiring the segmentation result of the target organ based on the target category to which the pixel point belongs.
15. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the method of training an image segmentation model according to any one of claims 1 to 9 or the method of image segmentation according to any one of claims 10 to 12.
16. A computer readable storage medium having stored thereon program instructions which, when executed by a processor, implement a method of training an image segmentation model according to any one of claims 1 to 9, or a method of image segmentation according to any one of claims 10 to 12.
CN202210103093.5A 2022-01-27 2022-01-27 Image segmentation method, model training method thereof, related device, equipment and medium Pending CN114445376A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210103093.5A CN114445376A (en) 2022-01-27 2022-01-27 Image segmentation method, model training method thereof, related device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210103093.5A CN114445376A (en) 2022-01-27 2022-01-27 Image segmentation method, model training method thereof, related device, equipment and medium

Publications (1)

Publication Number Publication Date
CN114445376A true CN114445376A (en) 2022-05-06

Family

ID=81368959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210103093.5A Pending CN114445376A (en) 2022-01-27 2022-01-27 Image segmentation method, model training method thereof, related device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114445376A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023071154A1 (en) * 2021-10-29 2023-05-04 上海商汤智能科技有限公司 Image segmentation method, training method and apparatus for related model, and device
WO2023230936A1 (en) * 2022-05-31 2023-12-07 北京小米移动软件有限公司 Image segmentation model training method and apparatus, and image segmentation method and apparatus



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination