CN117333874B - Image segmentation method, system, storage medium and device - Google Patents
Image segmentation method, system, storage medium and device Download PDFInfo
- Publication number
- CN117333874B CN117333874B CN202311405974.3A CN202311405974A CN117333874B CN 117333874 B CN117333874 B CN 117333874B CN 202311405974 A CN202311405974 A CN 202311405974A CN 117333874 B CN117333874 B CN 117333874B
- Authority
- CN
- China
- Prior art keywords
- model
- segmentation
- value
- teacher
- student
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 56
- 238000003709 image segmentation Methods 0.000 title claims abstract description 36
- 230000011218 segmentation Effects 0.000 claims abstract description 32
- 238000012549 training Methods 0.000 claims abstract description 31
- 238000012545 processing Methods 0.000 claims abstract description 15
- 230000006870 function Effects 0.000 claims description 39
- 238000004891 communication Methods 0.000 claims description 12
- 230000005484 gravity Effects 0.000 claims description 6
- 238000009499 grossing Methods 0.000 claims description 6
- 101150050759 outI gene Proteins 0.000 claims description 4
- 238000003745 diagnosis Methods 0.000 description 3
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000005192 partition Methods 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/70—Labelling scene content, e.g. deriving syntactic or semantic representations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/0895—Weakly supervised learning, e.g. semi-supervised or self-supervised learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/096—Transfer learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/7753—Incorporation of unlabelled data, e.g. multiple instance learning [MIL]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/033—Recognition of patterns in medical or anatomical images of skeletal patterns
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Multimedia (AREA)
- Databases & Information Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Medical Informatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to an image segmentation method, an image segmentation system, a storage medium and an image segmentation device. Mainly comprises the following steps: acquiring image data to be classified; processing the image data by adopting a segmentation model to obtain a classification result of the image data, wherein the segmentation model comprises a teacher model and a student model, and the segmentation model is trained by the following method: and taking an exponential moving average of the student model parameters in the training process to obtain a more robust teacher model, and supervising the learning of the student model by taking a prediction result of the teacher model as a pseudo tag. In the image segmentation method, the student model parameters in the training process are exponentially and movably averaged, so that a more robust teacher model is obtained, and the prediction result of the teacher model is used as a pseudo tag to supervise the learning of the student model. The accuracy of the image segmentation method is improved.
Description
Technical Field
The present invention relates to the field of image processing, and in particular, to an image segmentation method, system, storage medium, and apparatus.
Background
Image segmentation algorithms are currently in wide use in many fields. The image segmentation algorithm is mainly used for carrying out segmentation processing on the acquired image data so as to obtain a classification result of the image data. For example, the method is widely applied to the fields of automatic driving, medical image aided diagnosis, satellite remote sensing and the like. But the accuracy of conventional image segmentation algorithms needs to be further improved.
Disclosure of Invention
Based on this, an image segmentation method is provided. In the image segmentation method, the student model parameters in the training process are exponentially and movably averaged, so that a more robust teacher model is obtained, and the prediction result of the teacher model is used as a pseudo tag to supervise the learning of the student model. The accuracy of the image segmentation method is improved.
An image segmentation method, comprising:
Acquiring image data to be classified;
Processing the image data using a segmentation model to obtain a classification result of the image data, wherein the segmentation model comprises a teacher model and a student model,
The segmentation model is trained by the following method:
parameters of the student model Representing the parameters of the teacher modelThe student model uses an exponential moving average algorithm to update the parameters of the teacher model, specifically applying the following formula: wherein t represents the corresponding training step, ,For the corresponding smoothing coefficients to be used,For adjustingAndFor sample X, the student model predicts it asThe teacher model predicts the result as,AndThe method can be obtained by the following formula:
,
,
Wherein the student model and the teacher model adopt the same model structure and use Representation of whereinAndFor different Gaussian noise, the output of the teacher model is used as a pseudo tag to supervise the learning of the student model,
The final loss function is: where y represents the real label to which the sample x corresponds, In order to divide the loss function,As a function of the consistency loss of the data,For controllingAndIs a specific gravity of (c).
And training the segmentation model through a final loss function.
In one embodiment, use is made ofTo measure false labelsAnd the distance between the real labels y, thereby adaptively adjusting the weights。
In one of the embodiments of the present invention,
,
Wherein the method comprises the steps ofAndFor a particular weight value to be used,,As a result of the corresponding threshold value,
Is determined by the following wayIs the value of (1): during training, record the past k training stepsIs set as a vector K, and takes a value in the nth percentile as the valueIs used as a reference to the value of (a),WhereinRepresenting a percentile taking function.
In one of the embodiments of the present invention,
UsingTo representV th pixel of (a), its uncertaintyObtained by the following formula: Selecting the threshold value as Finally filter outIs only supervised with the remaining parts,
The final consistency loss function is:
Wherein the method comprises the steps of As a function of the corresponding indication,
Determination was made using the following formulaIs the value of (1):
,
Wherein u represents a pseudo tag The corresponding overall uncertainty map is then displayed,Represents a flattening operation, and the flattening device,Representing the probability corresponding to the ith training round,
Is determined by the following formulaIs of the size of (2):
Representing the initial percentile.
In one of the embodiments of the present invention,
The teacher model and the student model are semantic segmentation models.
An image segmentation system, comprising:
the data acquisition unit is used for acquiring image data to be classified;
The data processing unit is used for processing the image data and specifically comprises the following steps:
Processing the image data using a segmentation model to obtain a classification result of the image data, wherein the segmentation model comprises a teacher model and a student model,
The segmentation model is trained by the following method:
parameters of the student model Representing the parameters of the teacher modelThe student model uses an exponential moving average algorithm to update the parameters of the teacher model, specifically applying the following formula:,
Wherein t represents the corresponding training step, ,For the corresponding smoothing coefficients to be used,For adjustingH andFor sample X, the student model predicts it asThe teacher model predicts the result as,AndThe method can be obtained by the following formula:
,
,
Wherein the student model and the teacher model adopt the same model structure and use Representation of whereinAndFor different Gaussian noise, the output of the teacher model is used as a pseudo tag to supervise the learning of the student model,
The final loss function is:,
where y represents the real label to which sample x corresponds, In order to divide the loss function,As a function of the consistency loss of the data,For controllingAndIs a specific gravity of (c).
And training the segmentation model through a final loss function.
A computer storage medium having stored therein at least one executable instruction for causing a processor to perform operations corresponding to the image segmentation method.
A computer apparatus, comprising: the image segmentation method comprises the steps of a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface are communicated with each other through the communication bus, the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the image segmentation method.
The beneficial effects of the application are as follows:
According to the application, the index moving average (Exponential Moving Average, EMA) is taken for the student model parameters in the training process, so that a more robust teacher model is obtained, and the prediction result of the teacher model is used as a pseudo tag to supervise the learning of the student model. The application also considers that when excessive noise exists in the real label, the training of the model is supervised by using the pseudo label more, and the weights of the label supervision and the pseudo label supervision are dynamically distributed by measuring the similarity between the pseudo label and the real label. The application also considers that the pseudo tag generated by the teacher model can have pixel-level noise, so the application reduces the influence of noise in the pseudo tag on training based on uncertainty estimation of the pseudo tag. To further improve the accuracy of the image segmentation method of the present application.
In the field of medical image segmentation, particularly in the field of rib image segmentation, the method and the device can remarkably improve the Dice coefficient and rib recall value by using RibSeg data sets and performing a contrast test.
Drawings
Fig. 1 is a flowchart of an image segmentation algorithm according to an embodiment of the present application.
Detailed Description
In order that the above objects, features and advantages of the invention will be readily understood, a more particular description of the invention will be rendered by reference to the appended drawings.
As shown in fig. 1, an embodiment of the present application provides an image segmentation method, where the image segmentation algorithm specifically includes:
Acquiring image data to be classified;
Processing the image data using a segmentation model to obtain a classification result of the image data, wherein the segmentation model comprises a teacher model and a student model,
The segmentation model is trained by the following method:
parameters of the student model Representing the parameters of the teacher modelThe student model uses an exponential moving average algorithm to update the parameters of the teacher model, specifically applying the following formula:
,
Wherein t represents the corresponding training step, ,For the corresponding smoothing coefficients to be used,For adjustingAndIs a weight of (2).
For sample X from image data, the student model predicts it asThe teacher model predicts the result as,AndThe method can be obtained by the following formula:
,
,
Wherein the student model and the teacher model adopt the same model structure and use The representation, student model and teacher model may be commonly used semantic segmentation models. Wherein the method comprises the steps ofAndFor different Gaussian noise, under different random disturbance, the student model and the teacher model should have the same output for the same sample, and by using this, the output of the teacher model can be used as a pseudo tag to supervise the student model learning,
The final loss function is: Where y represents the true label (the true label is manually calibrated) to which the sample x corresponds, To partition the loss function, it may be implemented as a Dice loss function, a BCE loss function, etc.The prediction of the student model and the prediction of the teacher model are made to be as close as possible to each other as a consistency loss function, and the prediction can be generally a mean square error (Mean Squared Error, MSE) loss function and the like.For controllingAndIs a specific gravity of (c).
And training the segmentation model through a final loss function.
The image segmentation method of the application can be applied to a plurality of fields, such as the automatic driving field, the medical image auxiliary diagnosis, the satellite remote sensing field and the like. Specifically, in the above method of the present application, the image data to be classified may be an environmental image acquired in the autopilot field, an image captured by a medical instrument acquired in the medical image auxiliary diagnosis field, an image acquired in the satellite remote sensing field, or the like.
On the basis of the above, further, when the noise of the real tag is too high, the pseudo tag should be used moreSupervised network learning, the present application adaptively adjusts label supervision by evaluating similarity between pseudo and real labelsPseudo tag supervisionAnd a weight therebetween. Prediction for teacher modelIf the gap from the real tag y is too large, the tag may have noise, and more pseudo tags should be used for supervision at this time, in particularTo measure false labelsAnd the distance between the real labels y, thereby adaptively adjusting the weights。
On the basis of the above, the method, in particular,Can be obtained by the following method.
,
Wherein the method comprises the steps ofAndFor a particular weight value to be used,,As the network trains, the accuracy of the teacher model prediction becomes higher and higher for the corresponding threshold value,The whole becomes smaller, and thereforeAnd should also vary.
Based on the above thought, it is determined by the following wayIs the value of (1): during training, record the past k training stepsIs set as a vector K, and takes a value in the nth percentile as the valueIs used as a reference to the value of (a),
,
Wherein the method comprises the steps ofRepresenting a percentile taking function.
On the basis of the above, further, in order to avoid the noise in the model overfitting pseudo tag, the application adopts an uncertainty estimation mode and utilizes the pseudo tag generated by a teacher model to solve the problem that the pseudo tag also possibly contains noiseThe uncertainty of each pixel probability distribution is estimated by entropy of the pixel probability distribution, so that high-quality pseudo tags are filtered out to more accurately supervise the student model.
In particular, use is made ofTo representV th pixel of (a), its uncertaintyObtained by the following formula:。
The threshold is selected as Finally filter outIs only supervised with the remaining parts,
The final consistency loss function is: Wherein p v represents the v-th pixel in p, For the corresponding indication function, it is obvious that,The value of (2) is related to the uncertainty of the current label overall and is also related to the training round of the model, and the prediction result of the teacher model is more reliable along with the training of the model, so that the uncertainty is gradually reduced. In a specific implementation, the following formula is used for determinationIs the value of (1):,
Wherein u represents a pseudo tag The corresponding overall uncertainty map is then displayed,Represents a flattening operation, and the flattening device,Representing the probability corresponding to the ith training round,
Is determined by the following formulaIs of the size of (2):
,
Representing the initial percentile, total epoch is the total training round.
In one embodiment, the teacher model and the student model are both semantic segmentation models.
The application also provides an image segmentation system, which comprises:
the data acquisition unit is used for acquiring image data to be classified;
The data processing unit is used for processing the image data and specifically comprises the following steps:
Processing the image data using a segmentation model to obtain a classification result of the image data, wherein the segmentation model comprises a teacher model and a student model,
The segmentation model is trained by the following method:
parameters of the student model Representing the parameters of the teacher modelThe student model uses an exponential moving average algorithm to update the parameters of the teacher model, specifically applying the following formula: wherein t represents the corresponding training step, ,For the corresponding smoothing coefficients to be used,For adjustingAndIs used for the weight of the (c),
For sample X, the student model predicts it asThe teacher model predicts the result as,AndThe method can be obtained by the following formula:
,
,
Wherein the student model and the teacher model adopt the same model structure and use Representation of whereinAndFor different Gaussian noise, the output of the teacher model is used as a pseudo tag to supervise the learning of the student model,
The final loss function is:
,
where y represents the real label to which the sample x corresponds, In order to divide the loss function,As a function of the consistency loss of the data,For controllingAndIs a specific gravity of (c).
And training the segmentation model through a final loss function.
Embodiments of the present application also provide a computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the image segmentation method.
Embodiments of the present application also provide a computer apparatus, comprising: the image segmentation method comprises the steps of a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface are communicated with each other through the communication bus, the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the image segmentation method.
The above examples illustrate only a few embodiments of the invention, which are described in detail and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.
Claims (5)
1. An image segmentation method, comprising:
Acquiring image data to be classified;
Processing the image data using a segmentation model to obtain a classification result of the image data, wherein the segmentation model comprises a teacher model and a student model,
The segmentation model is trained by the following method:
parameters of the student model Representing the parameters of the teacher modelThe student model uses an exponential moving average algorithm to update the parameters of the teacher model, specifically applying the following formula:
,
Wherein t represents the corresponding training step, ,For the corresponding smoothing coefficients to be used,For adjustingAndIs used for the weight of the (c),
For the sampleThe student model predicts the result asThe teacher model predicts the result as,AndObtained by the following formula:
,
,
wherein the student model and the teacher model adopt the same model structure and use the representation, wherein AndFor different Gaussian noise, the output of the teacher model is used as a pseudo tag to supervise the learning of the student model,
The final loss function is:
,
where y represents the real label to which sample x corresponds, In order to divide the loss function,As a function of the consistency loss of the data,For controllingAndIs characterized by comprising a specific gravity of (2),
The segmentation model is trained by a final loss function,
UsingTo measure false labelsAnd the distance between the real labels y, thereby adaptively adjusting the weights,
,
Wherein the method comprises the steps ofAndFor a particular weight value to be used,,As a result of the corresponding threshold value,
Determined by the following wayValue: during the training process, record the past k training stepsThe value is set as a vector K, and the value in the nth percentile is taken asThe value of the sum of the values,
,
Wherein the method comprises the steps ofRepresenting a function that takes the percentile,
UsingTo representV th pixel of (a), its uncertaintyObtained by the following formula:
,
The threshold is selected as Finally filter outIs only supervised with the remaining parts,
The final consistency loss function is:
,
Wherein the method comprises the steps of For the corresponding indicator function, pv represents the v-th pixel in P,
Determination was made using the following formulaIs the value of (1):
,
Wherein u represents a pseudo tag The corresponding overall uncertainty map is then displayed,Represents a flattening operation, and the flattening device,Representing the probability corresponding to the ith training round,
Is determined by the following formulaIs of the size of (2):
,
Representing the initial percentile.
2. The image segmentation method as set forth in claim 1, wherein the teacher model and the student model are both semantic segmentation models.
3. An image segmentation system, comprising:
the data acquisition unit is used for acquiring image data to be classified;
The data processing unit is used for processing the image data and specifically comprises the following steps:
Processing the image data using a segmentation model to obtain a classification result of the image data, wherein the segmentation model comprises a teacher model and a student model,
The segmentation model is trained by the following method:
parameters of the student model Representing the parameters of the teacher modelThe student model uses an exponential moving average algorithm to update the parameters of the teacher model, specifically applying the following formula:
,
Wherein t represents the corresponding training step, ,For the corresponding smoothing coefficients to be used,For adjustingAndIs used for the weight of the (c),
For the sampleThe student model predicts the result asThe teacher model predicts the result as,AndObtained by the following formula:
,
,
wherein the student model and the teacher model adopt the same model structure and use the representation, wherein AndFor different Gaussian noise, the output of the teacher model is used as a pseudo tag to supervise the learning of the student model,
The final loss function is:
,
where y represents the real label to which sample x corresponds, In order to divide the loss function,As a function of the consistency loss of the data,For controllingAndIs characterized by comprising a specific gravity of (2),
The segmentation model is trained by a final loss function,
UsingTo measure false labelsAnd the distance between the real labels y, thereby adaptively adjusting the weights,
,
Wherein the method comprises the steps ofAndFor a particular weight value to be used,,As a result of the corresponding threshold value,
Is determined by the following wayIs the value of (1): during training, record the past k training stepsIs set as a vector K, and takes a value in the nth percentile as the valueIs used as a reference to the value of (a),
,
Wherein the method comprises the steps ofRepresenting a function that takes the percentile,
UsingTo representV th pixel of (a), its uncertaintyObtained by the following formula:
,
The threshold is selected as Finally filter outIs only supervised with the remaining parts,
The final consistency loss function is:
,
Wherein the method comprises the steps of For the corresponding indicator function, pv represents the v-th pixel in P,
Determination was made using the following formulaIs the value of (1):
,
Wherein u represents a pseudo tag The corresponding overall uncertainty map is then displayed,Represents a flattening operation, and the flattening device,Representing the probability corresponding to the ith training round,
Is determined by the following formulaIs of the size of (2):
,
Representing the initial percentile.
4. A computer storage medium having stored therein at least one executable instruction for causing a processor to perform operations corresponding to the image segmentation method according to any one of claims 1 to 2.
5. A computer apparatus, comprising: the image segmentation method according to any one of claims 1 to 2, wherein the processor, the memory, the communication interface and the communication bus are used for completing communication among each other through the communication bus, and the memory is used for storing at least one executable instruction which enables the processor to execute the operation corresponding to the image segmentation method according to any one of claims 1 to 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311405974.3A CN117333874B (en) | 2023-10-27 | 2023-10-27 | Image segmentation method, system, storage medium and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311405974.3A CN117333874B (en) | 2023-10-27 | 2023-10-27 | Image segmentation method, system, storage medium and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117333874A CN117333874A (en) | 2024-01-02 |
CN117333874B true CN117333874B (en) | 2024-07-30 |
Family
ID=89290247
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311405974.3A Active CN117333874B (en) | 2023-10-27 | 2023-10-27 | Image segmentation method, system, storage medium and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117333874B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115661459A (en) * | 2022-11-02 | 2023-01-31 | 安徽大学 | 2D mean teacher model using difference information |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112150478B (en) * | 2020-08-31 | 2021-06-22 | 温州医科大学 | Method and system for constructing semi-supervised image segmentation framework |
US11823381B2 (en) * | 2020-12-27 | 2023-11-21 | Ping An Technology (Shenzhen) Co., Ltd. | Knowledge distillation with adaptive asymmetric label sharpening for semi-supervised fracture detection in chest x-rays |
CN113256646B (en) * | 2021-04-13 | 2024-03-22 | 浙江工业大学 | Cerebrovascular image segmentation method based on semi-supervised learning |
CN114120319A (en) * | 2021-10-09 | 2022-03-01 | 苏州大学 | Continuous image semantic segmentation method based on multi-level knowledge distillation |
CN115131366A (en) * | 2021-11-25 | 2022-09-30 | 北京工商大学 | Multi-mode small target image full-automatic segmentation method and system based on generation type confrontation network and semi-supervision field self-adaptation |
CN115131565B (en) * | 2022-07-20 | 2023-05-02 | 天津大学 | Histological image segmentation model based on semi-supervised learning |
CN116228639A (en) * | 2022-12-12 | 2023-06-06 | 杭州电子科技大学 | Oral cavity full-scene caries segmentation method based on semi-supervised multistage uncertainty perception |
CN115797637A (en) * | 2022-12-29 | 2023-03-14 | 西北工业大学 | Semi-supervised segmentation model based on uncertainty between models and in models |
CN116543162B (en) * | 2023-05-09 | 2024-07-12 | 山东建筑大学 | Image segmentation method and system based on feature difference and context awareness consistency |
-
2023
- 2023-10-27 CN CN202311405974.3A patent/CN117333874B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115661459A (en) * | 2022-11-02 | 2023-01-31 | 安徽大学 | 2D mean teacher model using difference information |
Also Published As
Publication number | Publication date |
---|---|
CN117333874A (en) | 2024-01-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111860573B (en) | Model training method, image category detection method and device and electronic equipment | |
CN108229489B (en) | Key point prediction method, network training method, image processing method, device and electronic equipment | |
CN108154222B (en) | Deep neural network training method and system and electronic equipment | |
Deng et al. | Unsupervised segmentation of synthetic aperture radar sea ice imagery using a novel Markov random field model | |
CN109086811B (en) | Multi-label image classification method and device and electronic equipment | |
CN112580439A (en) | Method and system for detecting large-format remote sensing image ship target under small sample condition | |
CN108229675B (en) | Neural network training method, object detection method, device and electronic equipment | |
CN113221903B (en) | Cross-domain self-adaptive semantic segmentation method and system | |
CN108229522B (en) | Neural network training method, attribute detection device and electronic equipment | |
CN116596916B (en) | Training of defect detection model and defect detection method and device | |
CN113435587A (en) | Time-series-based task quantity prediction method and device, electronic equipment and medium | |
CN108154153B (en) | Scene analysis method and system and electronic equipment | |
CN113988357B (en) | Advanced learning-based high-rise building wind induced response prediction method and device | |
EP4170561A1 (en) | Method and device for improving performance of data processing model, storage medium and electronic device | |
CN116385879A (en) | Semi-supervised sea surface target detection method, system, equipment and storage medium | |
CN108399430A (en) | A kind of SAR image Ship Target Detection method based on super-pixel and random forest | |
CN112861940A (en) | Binocular disparity estimation method, model training method and related equipment | |
CN112884721A (en) | Anomaly detection method and system and computer readable storage medium | |
CN114596440A (en) | Semantic segmentation model generation method and device, electronic equipment and storage medium | |
CN113033356B (en) | Scale-adaptive long-term correlation target tracking method | |
CN111985439B (en) | Face detection method, device, equipment and storage medium | |
CN113822144A (en) | Target detection method and device, computer equipment and storage medium | |
CN117333874B (en) | Image segmentation method, system, storage medium and device | |
CN113313179A (en) | Noise image classification method based on l2p norm robust least square method | |
CN111814653A (en) | Method, device, equipment and storage medium for detecting abnormal behaviors in video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |