CN113378736A - Remote sensing image depth network semi-supervised semantic segmentation method based on transformation consistency regularization - Google Patents
Remote sensing image depth network semi-supervised semantic segmentation method based on transformation consistency regularization
- Publication number
- CN113378736A CN113378736A CN202110678330.6A CN202110678330A CN113378736A CN 113378736 A CN113378736 A CN 113378736A CN 202110678330 A CN202110678330 A CN 202110678330A CN 113378736 A CN113378736 A CN 113378736A
- Authority
- CN
- China
- Prior art keywords
- network
- transformation
- representing
- remote sensing
- samples
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 230000009466 transformation Effects 0.000 title claims abstract description 32
- 238000000034 method Methods 0.000 title claims abstract description 26
- 230000011218 segmentation Effects 0.000 title claims abstract description 19
- 238000010586 diagram Methods 0.000 claims description 10
- 238000000844 transformation Methods 0.000 claims description 9
- 238000012549 training Methods 0.000 claims description 6
- 238000012545 processing Methods 0.000 claims description 5
- 238000009499 grossing Methods 0.000 claims description 3
- 238000012360 testing method Methods 0.000 claims description 3
- 238000013519 translation Methods 0.000 claims description 3
- 238000003672 processing method Methods 0.000 abstract description 2
- 230000006870 function Effects 0.000 description 5
- 238000013527 convolutional neural network Methods 0.000 description 3
- 230000008901 benefit Effects 0.000 description 2
- 238000013135 deep learning Methods 0.000 description 2
- 238000013461 design Methods 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 230000018109 developmental process Effects 0.000 description 2
- 238000010801 machine learning Methods 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 230000004075 alteration Effects 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 239000002360 explosive Substances 0.000 description 1
- 230000007786 learning performance Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 238000012800 visualization Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Computing Systems (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a remote sensing image depth network semi-supervised semantic segmentation method and system based on transformation consistency regularization, belonging to the field of image data processing. In the proposed deep-network semantic segmentation framework, when labeled samples are limited, an output-consistency constraint under different random transformation perturbations is applied to a large number of unlabeled samples, so that the latent information provided by the unlabeled samples is fully exploited to improve the performance of the deep network. The network parameters are updated by optimizing a weighted sum of the supervised loss computed on labeled samples and the consistency regularization loss computed on unlabeled samples. The method can improve network performance by exploiting the information contained in a large number of unlabeled samples when labeled samples are limited, and is therefore well suited to practical scenes where labeled samples are scarce.
Description
Technical Field
The invention belongs to the field of image data processing methods, and particularly relates to a remote sensing image depth network semi-supervised semantic segmentation method based on transformation consistency regularization.
Background
Semantic segmentation is a high-level task in image processing whose goal is to assign a semantic label to each pixel. It is an important and challenging task in the fields of computer vision and remote sensing. In the field of remote sensing, classification of remote sensing images is one of the most fundamental research problems and is the basis of other remote sensing research and applications. In the past, conventional machine learning methods typically combined human prior knowledge and intuitive experience to design and select a handful of task-relevant features, which were then used for classification and recognition of remote sensing images. In recent years, benefiting from the development of deep learning techniques and hardware computing power, deep neural networks have become the mainstream approach, and Convolutional Neural Networks (CNNs) have enjoyed great success in image processing. Given large data sets, CNN models can be trained end to end to learn more powerful feature representations and achieve impressive performance on a variety of data sets.
However, most networks today are data-driven and trained in a supervised manner. Network performance depends heavily on large numbers of labeled samples, which means that ever larger data sets need to be created. But collecting large amounts of accurate annotation data, especially accurate pixel-level annotations, is very time consuming and laborious. Annotating data also requires a certain amount of expert knowledge, and data can be difficult to obtain for security or privacy reasons. In the field of remote sensing, large numbers of labeled samples are typically unavailable, even though recent developments in sensors and earth observation technology have led to explosive growth in remote sensing data. For example, high-precision, high-quality surface-coverage data are difficult to obtain and must be collected and annotated by remote sensing experts. Thus, for many practical problems and applications, the lack of sufficiently large labeled data sets limits the widespread use of deep learning techniques in remote sensing. In this situation, how to fully utilize unlabeled data to improve the performance of existing models is a major challenge.
Disclosure of Invention
Semi-supervised learning is a machine learning technique intermediate between supervised and unsupervised learning; it typically trains a model with a small amount of labeled data and a large amount of unlabeled data, the latter being relatively easy to obtain. It has been found that combining large amounts of unlabeled data with a small amount of labeled data can significantly improve learning performance. For supervised learning, obtaining annotations is expensive and time consuming, and large amounts of labeled data are hard to come by, whereas unlabeled data are comparatively cheap to acquire. Semi-supervised learning therefore has broad applicability and promising prospects in practical applications.
Therefore, the invention provides a remote sensing image depth network semi-supervised semantic segmentation method based on transformation consistency regularization, which solves the problem of further improving model performance by using unlabeled samples when labeled remote sensing samples are scarce.
The technical scheme adopted by the invention is as follows: the remote sensing image depth network semi-supervised semantic segmentation method based on transformation consistency regularization comprises the following steps:
Step 1: Firstly, divide the remote sensing image data set into labeled samples (x_L, y_L) and unlabeled samples x_U, wherein x_L represents a labeled image, y_L represents the label corresponding to the labeled image, N_L represents the number of labeled images, x_U represents an unlabeled image, and N_U represents the number of unlabeled images.
Step 2: In the training phase, construct a student network S and a teacher network T, whose parameters are represented by θ_s and θ_t respectively.
Step 3: Randomly select m samples each from the labeled samples and the unlabeled samples.
Step 4: Input the selected labeled samples into the student network and compute the supervised loss L_sup.
Step 5: Back-propagate and update the student network parameters θ_s by a gradient descent algorithm.
Step 8: Input the perturbed unlabeled samples into the student network to obtain an output feature map f(T(x_U); θ_s).
Step 9: Input the original unlabeled samples into the teacher network to obtain an output feature map f(x_U; θ_t); then apply the same random transformation T to this feature map to obtain another feature map T(f(x_U; θ_t)).
Step 11: Back-propagate and update the student network parameters θ_s by a gradient descent algorithm.
Step 12: Update the parameters of the teacher network using the parameters of the student network.
Step 13: Return to step 3 and iterate until training is finished.
Step 14: In the testing stage, slide a window over the image, input the image patch of each window into the network to obtain the prediction result of each window, and finally obtain the segmentation result of the whole remote sensing image.
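The steps above can be condensed into a toy training skeleton. The scalar "networks" y = w·x, the sign-flip transform, and all hyperparameter values below are illustrative stand-ins chosen for this sketch, not values taken from the patent:

```python
import numpy as np

# Skeleton of one training pass over steps 3-13, shrunk to scalar
# "networks" y = w*x so every update rule fits in a few lines.
rng = np.random.default_rng(0)
theta_s, theta_t = 0.5, 0.5          # student / teacher parameters
lr, alpha_ema, lam = 0.1, 0.99, 1.0  # step size, EMA smoothing, loss weight

for _ in range(200):
    x_l, y_l = 1.0, 2.0              # one labeled sample (toy values)
    x_u = rng.uniform(0.5, 1.0)      # one unlabeled sample
    t = lambda x: -x                 # random transform (sign flip)
    # step 4: supervised loss (theta_s*x_l - y_l)^2 and its gradient
    grad_sup = 2.0 * (theta_s * x_l - y_l) * x_l
    # step 10: consistency loss (theta_s*t(x_u) - t(theta_t*x_u))^2
    grad_con = 2.0 * (theta_s * t(x_u) - t(theta_t * x_u)) * t(x_u)
    # steps 5/11: gradient descent on the weighted sum of both losses
    theta_s -= lr * (grad_sup + lam * grad_con)
    # step 12: EMA teacher update using the student parameters
    theta_t = alpha_ema * theta_t + (1.0 - alpha_ema) * theta_s
```

The student is pulled toward the labeled target by the supervised term while the consistency term keeps it close to the slowly moving teacher, which is the qualitative behavior the framework relies on.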
Preferably, the supervised loss L_sup in step 4 can be defined as the pixel-by-pixel cross entropy:

L_sup = (1/(h·w)) Σ_{i=1}^{h·w} ℓ_ce( f(x_L; θ_s)_i , (y_L)_i )    (1)

wherein ℓ_ce represents the cross entropy loss function, h and w represent the height and width of the image, c represents the number of classes, x_L represents a labeled sample, f(x_L; θ_s) represents the feature map obtained by inputting the labeled sample into the student network, and y_L represents the corresponding ground-truth label of the sample.
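As a concrete illustration, a minimal NumPy sketch of the pixel-by-pixel cross entropy follows. It assumes, beyond what the text specifies, that the network output is already softmax-normalized per pixel and that the label map holds integer class indices:

```python
import numpy as np

def pixelwise_cross_entropy(probs, labels):
    """probs: (h, w, c) softmax probabilities; labels: (h, w) class indices."""
    h, w, _ = probs.shape
    # probability assigned to the true class at every pixel
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    # average the per-pixel cross entropy over all h*w pixels
    return float(-np.log(np.clip(p_true, 1e-12, None)).mean())

# uniform predictions over 3 classes give a loss of log(3) at every pixel
probs = np.full((2, 2, 3), 1.0 / 3.0)
labels = np.zeros((2, 2), dtype=int)
loss = pixelwise_cross_entropy(probs, labels)
```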
Preferably, the random transformation T in step 6 may use affine transformations, the grid shuffle transformation, and the cutmix transformation. The affine transformation applies random translation within [-0.2, 0.2] of the image length and width, random scaling within the range [0.5, 1.5], and random rotation within [-180°, 180°]. The grid shuffle transformation uses a 3 × 3 grid.
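The grid shuffle perturbation can be sketched as follows. This is an illustrative implementation rather than the patent's code, and it assumes the image height and width are divisible by the grid size:

```python
import numpy as np

def grid_shuffle(img, grid=3, seed=0):
    """Randomly permute the grid x grid tiles of an image."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    th, tw = h // grid, w // grid
    # cut the image into grid*grid tiles, row-major
    tiles = [img[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
             for r in range(grid) for c in range(grid)]
    out = np.empty_like(img)
    # paste the tiles back in a random order
    for k, idx in enumerate(rng.permutation(len(tiles))):
        r, c = divmod(k, grid)
        out[r * th:(r + 1) * th, c * tw:(c + 1) * tw] = tiles[idx]
    return out

img = np.arange(36).reshape(6, 6)
shuffled = grid_shuffle(img, grid=3)
# the multiset of pixel values is preserved; only tile positions change
```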
Preferably, the consistency regularization loss L_con of the unsupervised part in step 10 can take the mean square error as the loss:

L_con = d( f(T(x_U); θ_s), T(f(x_U; θ_t)) )    (2)

wherein x_U represents an unlabeled sample, T represents a random transformation, θ_s and θ_t represent the parameters of the student network and the teacher network respectively, f(·; θ_s) and f(·; θ_t) represent the student network model and the teacher network model respectively, and d(·,·) represents the mean square error function.
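The structure of this loss can be shown with a toy sketch: the student sees the transformed image, while the teacher's output receives the same transformation afterwards. The "networks" and transform below are stand-ins for illustration only:

```python
import numpy as np

def consistency_loss(x_u, student, teacher, transform):
    """MSE between f(T(x_U); theta_s) and T(f(x_U; theta_t))."""
    s_out = student(transform(x_u))   # student on the perturbed image
    t_out = transform(teacher(x_u))   # teacher output, then same transform
    return float(((s_out - t_out) ** 2).mean())

flip = lambda x: x[:, ::-1]           # stand-in random transform T
net = lambda x: 2.0 * x               # stand-in pixel-wise network
x_u = np.arange(12.0).reshape(3, 4)
# an identical student/teacher pair commutes with the flip, so the loss is 0
loss = consistency_loss(x_u, net, net, flip)
```

When the student and teacher disagree, or the network is not equivariant to the transform, the loss becomes positive, which is the signal that drives the unsupervised part of training.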
Preferably, in step 12, the parameters of the teacher network may be updated by an exponential moving average method:

θ′_t = α_EMA · θ_t + (1 − α_EMA) · θ_s    (3)

wherein α_EMA represents the smoothing coefficient of the exponential moving average method, and θ′_t represents the updated parameters of the teacher network.
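Applied parameter by parameter, Eq. (3) can be sketched as below; a dict of scalars stands in for real network weights, and the α_EMA value is illustrative:

```python
def ema_update(theta_t, theta_s, alpha_ema=0.99):
    """Exponential moving average teacher update, per parameter (Eq. 3)."""
    return {k: alpha_ema * theta_t[k] + (1.0 - alpha_ema) * theta_s[k]
            for k in theta_t}

teacher = {"w": 1.0, "b": 0.0}
student = {"w": 0.0, "b": 1.0}
teacher = ema_update(teacher, student, alpha_ema=0.9)
# teacher drifts slowly toward the student: w moves 1.0 -> 0.9, b moves 0.0 -> 0.1
```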
Compared with the prior art, the invention has the following advantages and beneficial effects: the remote sensing image depth network semi-supervised semantic segmentation method based on transformation consistency regularization exploits the latent information in unlabeled sample data, and further improves the recognition accuracy of the model when labeled samples are limited but unlabeled samples are abundant, making the method well suited to practical scenes where labeled remote sensing images are scarce.
Drawings
FIG. 1: the semi-supervised framework designed by the invention;
FIG. 2: the three different random transformations used in the invention;
FIG. 3: visualizations of some results obtained with the method of the invention.
Detailed Description
To help those of ordinary skill in the art understand and implement the present invention, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described herein are merely illustrative and explanatory of the invention and do not limit it.
Referring to a flow chart of fig. 1, the invention provides a remote sensing image depth network semi-supervised semantic segmentation method based on transformation consistency regularization, which comprises the following steps:
Step 1: Firstly, divide the remote sensing image data set into labeled samples (x_L, y_L) and unlabeled samples x_U, wherein x_L represents a labeled image, y_L represents the label corresponding to the labeled image, N_L represents the number of labeled images, x_U represents an unlabeled image, and N_U represents the number of unlabeled images.
Step 2: In the training phase, construct a student network S and a teacher network T, whose parameters are represented by θ_s and θ_t respectively.
Step 3: Randomly select m samples each from the labeled samples and the unlabeled samples.
Step 4: Input the selected labeled samples into the student network and calculate the supervised loss L_sup, which can be defined as the pixel-by-pixel cross entropy:

L_sup = (1/(h·w)) Σ_{i=1}^{h·w} ℓ_ce( f(x_L; θ_s)_i , (y_L)_i )    (1)

wherein ℓ_ce represents the cross entropy loss function, h and w represent the height and width of the image, c represents the number of classes, x_L represents a labeled sample, f(x_L; θ_s) represents the feature map obtained by inputting the labeled sample into the student network, and y_L represents the corresponding ground-truth label of the sample.
Step 5: Back-propagate and update the student network parameters θ_s by a gradient descent algorithm.
Step 6: Select a random transformation T as the perturbation; affine transformations, the grid shuffle transformation, and the cutmix transformation may be used. The affine transformation applies random translation within [-0.2, 0.2] of the image length and width, random scaling within the range [0.5, 1.5], and random rotation within [-180°, 180°]. The grid shuffle transformation uses a 3 × 3 grid.
Step 8: Input the perturbed unlabeled samples into the student network to obtain an output feature map f(T(x_U); θ_s).
Step 9: Input the original unlabeled samples into the teacher network to obtain an output feature map f(x_U; θ_t); then apply the same random transformation T to this feature map to obtain another feature map T(f(x_U; θ_t)).
Step 10: From the two output feature maps f(T(x_U); θ_s) and T(f(x_U; θ_t)), compute the consistency regularization loss L_con of the unsupervised part, taking the mean square error as the loss:

L_con = d( f(T(x_U); θ_s), T(f(x_U; θ_t)) )    (2)

wherein x_U represents an unlabeled sample, T represents a random transformation, θ_s and θ_t represent the parameters of the student network and the teacher network respectively, f(·; θ_s) and f(·; θ_t) represent the student network model and the teacher network model respectively, and d(·,·) represents the mean square error function.
Step 11: Back-propagate and update the student network parameters θ_s by a gradient descent algorithm.
Step 12: Update the parameters of the teacher network using the parameters of the student network; an exponential moving average method can be adopted:

θ′_t = α_EMA · θ_t + (1 − α_EMA) · θ_s    (3)

wherein α_EMA represents the smoothing coefficient of the exponential moving average method, and θ′_t represents the updated parameters of the teacher network.
Step 13: Return to step 3 and iterate until training is finished.
Step 14: In the testing stage, slide a window over the image, input the image patch of each window into the network to obtain the prediction result of each window, and finally obtain the segmentation result of the whole remote sensing image.
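The sliding-window inference of step 14 can be sketched as follows. Overlapping window scores are averaged before the per-pixel argmax; the `predict` function is a placeholder for the trained segmentation network, and the window/stride values are illustrative:

```python
import numpy as np

def sliding_window_predict(image, window, stride, predict):
    """Slide a window over the image, predict each patch, and fuse the
    per-window class scores into one full-size segmentation map."""
    h, w = image.shape[:2]
    scores, counts = None, np.zeros((h, w, 1))
    for top in range(0, h - window + 1, stride):
        for left in range(0, w - window + 1, stride):
            patch = image[top:top + window, left:left + window]
            p = predict(patch)                      # (window, window, c) scores
            if scores is None:
                scores = np.zeros((h, w, p.shape[-1]))
            scores[top:top + window, left:left + window] += p
            counts[top:top + window, left:left + window] += 1
    # per-pixel class decision from the averaged scores
    return np.argmax(scores / np.maximum(counts, 1), axis=-1)

# toy "network" that always scores class 1 highest
toy_net = lambda p: np.stack([np.zeros_like(p), np.ones_like(p)], axis=-1)
pred = sliding_window_predict(np.zeros((8, 8)), window=4, stride=4,
                              predict=toy_net)
```

With a stride smaller than the window, each pixel is predicted several times and the averaging smooths seams between adjacent windows.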
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (5)
1. A remote sensing image depth network semi-supervised semantic segmentation method based on transformation consistency regularization is characterized by comprising the following steps:
step 1: firstly, dividing a remote sensing image data set into labeled samples (x_L, y_L) and unlabeled samples x_U, wherein x_L represents a labeled image, y_L represents the label corresponding to the labeled image, N_L represents the number of labeled images, x_U represents an unlabeled image, and N_U represents the number of unlabeled images;
step 2: in the training phase, constructing a student network S and a teacher network T, whose parameters are represented by θ_s and θ_t respectively;
step 3: randomly selecting m samples each from the labeled samples and the unlabeled samples;
step 4: inputting the selected labeled samples into the student network, and calculating the supervised loss L_sup;
step 5: back-propagating and updating the student network parameters θ_s by a gradient descent algorithm;
step 8: inputting the perturbed unlabeled samples into the student network to obtain an output feature map f(T(x_U); θ_s);
step 9: inputting the original unlabeled samples into the teacher network to obtain an output feature map f(x_U; θ_t), and then applying the same random transformation T to this feature map to obtain another feature map T(f(x_U; θ_t));
step 11: back-propagating and updating the student network parameters θ_s by a gradient descent algorithm;
step 12: updating the parameters of the teacher network by using the parameters of the student network;
step 13: returning to step 3 and iterating until training is finished;
step 14: in the testing stage, sliding a window over the image, inputting the image patch of each window into the network to obtain the prediction result of each window, and finally obtaining the segmentation result of the remote sensing image.
2. The remote sensing image depth network semi-supervised semantic segmentation method based on transformation consistency regularization as recited in claim 1, characterized in that: the supervised loss L_sup in step 4 is defined as the pixel-by-pixel cross entropy:

L_sup = (1/(h·w)) Σ_{i=1}^{h·w} ℓ_ce( f(x_L; θ_s)_i , (y_L)_i )    (1)

wherein ℓ_ce represents the cross entropy loss function, h and w represent the height and width of the image, c represents the number of classes, x_L represents a labeled sample, f(x_L; θ_s) represents the feature map obtained by inputting the labeled sample into the student network, and y_L represents the corresponding ground-truth label of the sample.
3. The remote sensing image depth network semi-supervised semantic segmentation method based on transformation consistency regularization as recited in claim 1, characterized in that: the random transformation T in step 6 uses an affine transformation, a grid shuffle transformation, or a cutmix transformation; wherein the affine transformation applies random translation within [-0.2, 0.2] of the image length and width, random scaling within the range [0.5, 1.5], and random rotation within [-180°, 180°], and the grid shuffle transformation uses a 3 × 3 grid.
4. The remote sensing image depth network semi-supervised semantic segmentation method based on transformation consistency regularization as recited in claim 1, characterized in that: the consistency regularization loss L_con of the unsupervised part in step 10 takes the mean square error as the loss:

L_con = d( f(T(x_U); θ_s), T(f(x_U; θ_t)) )    (2)

wherein x_U represents an unlabeled sample, T represents a random transformation, θ_s and θ_t represent the parameters of the student network and the teacher network respectively, f(·; θ_s) and f(·; θ_t) represent the student network model and the teacher network model respectively, and d(·,·) represents the mean square error function.
5. The remote sensing image depth network semi-supervised semantic segmentation method based on transformation consistency regularization as recited in claim 1, characterized in that: in step 12, the parameters of the teacher network are updated by an exponential moving average method:

θ′_t = α_EMA · θ_t + (1 − α_EMA) · θ_s    (3)

wherein α_EMA represents the smoothing coefficient of the exponential moving average method, and θ′_t represents the updated parameters of the teacher network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110678330.6A CN113378736B (en) | 2021-06-18 | 2021-06-18 | Remote sensing image semi-supervised semantic segmentation method based on transformation consistency regularization |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110678330.6A CN113378736B (en) | 2021-06-18 | 2021-06-18 | Remote sensing image semi-supervised semantic segmentation method based on transformation consistency regularization |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113378736A true CN113378736A (en) | 2021-09-10 |
CN113378736B CN113378736B (en) | 2022-08-05 |
Family
ID=77577716
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110678330.6A Active CN113378736B (en) | 2021-06-18 | 2021-06-18 | Remote sensing image semi-supervised semantic segmentation method based on transformation consistency regularization |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113378736B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114189416A (en) * | 2021-12-02 | 2022-03-15 | 电子科技大学 | Digital modulation signal identification method based on consistency regularization |
CN114332135A (en) * | 2022-03-10 | 2022-04-12 | 之江实验室 | Semi-supervised medical image segmentation method and device based on dual-model interactive learning |
CN114792349A (en) * | 2022-06-27 | 2022-07-26 | 中国人民解放军国防科技大学 | Remote sensing image conversion map migration method based on semi-supervised generation confrontation network |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112036335A (en) * | 2020-09-03 | 2020-12-04 | 南京农业大学 | Deconvolution-guided semi-supervised plant leaf disease identification and segmentation method |
CN112347930A (en) * | 2020-11-06 | 2021-02-09 | 天津市勘察设计院集团有限公司 | High-resolution image scene classification method based on self-learning semi-supervised deep neural network |
WO2021062536A1 (en) * | 2019-09-30 | 2021-04-08 | Musashi Auto Parts Canada Inc. | System and method for ai visual inspection |
CN112885468A (en) * | 2021-01-26 | 2021-06-01 | 深圳大学 | Teacher consensus aggregation learning method based on random response differential privacy technology |
-
2021
- 2021-06-18 CN CN202110678330.6A patent/CN113378736B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021062536A1 (en) * | 2019-09-30 | 2021-04-08 | Musashi Auto Parts Canada Inc. | System and method for ai visual inspection |
CN112036335A (en) * | 2020-09-03 | 2020-12-04 | 南京农业大学 | Deconvolution-guided semi-supervised plant leaf disease identification and segmentation method |
CN112347930A (en) * | 2020-11-06 | 2021-02-09 | 天津市勘察设计院集团有限公司 | High-resolution image scene classification method based on self-learning semi-supervised deep neural network |
CN112885468A (en) * | 2021-01-26 | 2021-06-01 | 深圳大学 | Teacher consensus aggregation learning method based on random response differential privacy technology |
Non-Patent Citations (2)
Title |
---|
BIN ZHANG et al.: "Semi-Supervised Semantic Segmentation Network via Learning Consistency for Remote Sensing Land-Cover Classification", Remote Sensing and Spatial Information Sciences * |
CHEN Minglin: "Deep-Learning-Based CT Image Analysis of Intracranial Hemorrhage", Master's Thesis Database * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114189416A (en) * | 2021-12-02 | 2022-03-15 | 电子科技大学 | Digital modulation signal identification method based on consistency regularization |
CN114189416B (en) * | 2021-12-02 | 2023-01-10 | 电子科技大学 | Digital modulation signal identification method based on consistency regularization |
CN114332135A (en) * | 2022-03-10 | 2022-04-12 | 之江实验室 | Semi-supervised medical image segmentation method and device based on dual-model interactive learning |
CN114332135B (en) * | 2022-03-10 | 2022-06-10 | 之江实验室 | Semi-supervised medical image segmentation method and device based on dual-model interactive learning |
CN114792349A (en) * | 2022-06-27 | 2022-07-26 | 中国人民解放军国防科技大学 | Remote sensing image conversion map migration method based on semi-supervised generation confrontation network |
CN114792349B (en) * | 2022-06-27 | 2022-09-06 | 中国人民解放军国防科技大学 | Remote sensing image conversion map migration method based on semi-supervised generation countermeasure network |
Also Published As
Publication number | Publication date |
---|---|
CN113378736B (en) | 2022-08-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113378736B (en) | Remote sensing image semi-supervised semantic segmentation method based on transformation consistency regularization | |
CN105893968B (en) | The unrelated person's handwriting recognition methods end to end of text based on deep learning | |
CN110472737B (en) | Training method and device for neural network model and medical image processing system | |
CN114511728B (en) | Method for establishing intelligent detection model of esophageal lesion of electronic endoscope | |
CN108805102A (en) | A kind of video caption detection and recognition methods and system based on deep learning | |
CN111401156B (en) | Image identification method based on Gabor convolution neural network | |
CN115410059B (en) | Remote sensing image part supervision change detection method and device based on contrast loss | |
CN113988147A (en) | Multi-label classification method and device for remote sensing image scene based on graph network, and multi-label retrieval method and device | |
CN115760869A (en) | Attention-guided non-linear disturbance consistency semi-supervised medical image segmentation method | |
CN115482387A (en) | Weak supervision image semantic segmentation method and system based on multi-scale class prototype | |
Shi et al. | Improved metric learning with the CNN for very-high-resolution remote sensing image classification | |
Dovbysh et al. | Decision-making support system for diagnosis of oncopathologies by histological images | |
CN107292268A (en) | The SAR image semantic segmentation method of quick ridge ripple deconvolution Structure learning model | |
CN116486273B (en) | Method for extracting water body information of small sample remote sensing image | |
CN117611901A (en) | Small sample image classification method based on global and local contrast learning | |
CN105809200A (en) | Biologically-inspired image meaning information autonomous extraction method and device | |
CN117151162A (en) | Cross-anatomical-area organ incremental segmentation method based on self-supervision and specialized control | |
Lodhi et al. | Deep Neural Network for Recognition of Enlarged Mathematical Corpus | |
CN111897988A (en) | Hyperspectral remote sensing image classification method and system | |
CN115131610B (en) | Robust semi-supervised image classification method based on data mining | |
CN118154555B (en) | Plant multi-organ CT image phenotype analysis method based on label efficient learning | |
CN109711456A (en) | A kind of semi-supervised image clustering method having robustness | |
CN111046869B (en) | Salient region extraction method and system based on deep learning | |
CN116226623B (en) | Mark layer division method and device based on SegNet segmentation model and computer equipment | |
CN115082770B (en) | Image center line structure extraction method based on machine learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |