CN113111716B - Remote sensing image semiautomatic labeling method and device based on deep learning - Google Patents


Info

Publication number
CN113111716B
CN113111716B (application CN202110275234.7A)
Authority
CN
China
Prior art keywords
remote sensing
sensing image
labeling
neural network
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110275234.7A
Other languages
Chinese (zh)
Other versions
CN113111716A (en)
Inventor
赵江华
惠健
王学志
周园春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Computer Network Information Center of CAS
Original Assignee
Peking University
Computer Network Information Center of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University, Computer Network Information Center of CAS filed Critical Peking University
Priority to CN202110275234.7A priority Critical patent/CN113111716B/en
Publication of CN113111716A publication Critical patent/CN113111716A/en
Application granted granted Critical
Publication of CN113111716B publication Critical patent/CN113111716B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The invention relates to a deep-learning-based semi-automatic labeling method and device for remote sensing images. The method comprises the following steps: pre-training a fully convolutional neural network with a cross entropy loss function on a public remote sensing dataset; predicting the remote sensing image to be labeled with the pre-trained network and outputting class attribute probabilities; computing an uncertainty measure for each pixel from the class attribute probabilities and setting a threshold to extract uncertain pixels; screening the superpixels obtained by segmenting the image according to a minimum percentage of uncertain pixels per superpixel, keeping the qualifying superpixels as recommended labeling areas, and labeling those areas manually; and merging the manual labels of the recommended areas with the network predictions for the remaining area to obtain the final labeling result. The invention spares manual annotators the heavy burden of drawing accurate boundaries, improves annotation efficiency, and reduces both the workload and the subjective guesswork of manual annotation.

Description

Remote sensing image semiautomatic labeling method and device based on deep learning
Technical Field
The invention belongs to the field of remote sensing image labeling, relates to an efficient semi-automatic labeling and recommended-labeling strategy, and is mainly applied to remote sensing image information extraction, pixel-level labeled data acquisition, and similar applications.
Background
In recent years, automatic information extraction from remote sensing images has advanced greatly, yet remote sensing imagery is too diverse for a single machine learning model to cover completely. That is, a deep learning model that performs well on some datasets may perform poorly on others. Farmland, for example, looks entirely different across regions, terrains and seasons, so significant challenges remain.
Information extraction from remote sensing images relies mainly on the color and shape of image targets. For large-scale, high-precision imagery, collecting training data is laborious; semantic segmentation in particular needs large amounts of pixel-level labeled data. Given the demand for high-quality, efficient annotation in real applications, semi-automatic annotation with human-computer interaction is more practical than purely manual annotation, and for complex boundaries that are difficult to trace by hand, automating part of the task can reduce the cost of pixel-level annotation.
Human-computer interactive annotation shortens annotation time by providing functions such as automatic panning, zooming and superpixel segmentation. Superpixel segmentation groups the pixels of an image into perceptually meaningful regions: a superpixel consists of multiple contiguous pixels with similar color or brightness. Superpixel-based methods incorporate spatial information into perceptually relevant regions, which not only effectively suppresses the salt-and-pepper artifacts common in remote sensing imagery but also, because the number of image elements is greatly reduced, markedly improves computational efficiency. Superpixel annotation lets a worker label a set of visually related pixels at once, which shortens annotation time for background areas and for objects with complex boundaries. To further ease the burden of manual annotation, there is also a family of weakly supervised segmentation algorithms, which replace pixel-level annotations with weaker labels such as bounding boxes, scribbles and image-level class labels. The main challenge in training a weakly supervised model is generating a pixel-level label map from this incomplete information by self-supervision. The most popular approach uses discriminative objectives to identify the local image regions associated with each semantic category; while this strategy roughly locates objects, it typically attends only to their most discriminative parts and fails to cover the whole object region, which yields poor segmentation compared with fully supervised methods.
Active learning, which lets the learning model select its own training data, offers one way to address this need. Pool-based sampling is the most common setting for semi-supervised recommendation labeling built on active learning: the algorithm selects samples from a pool of unlabeled data for expert annotation, and the newly labeled data are added to the training set. The process is iterative, so the classifier is retrained each time new annotated samples are obtained. However, the deep learning models used for remote sensing tasks have huge numbers of parameters, long training times and difficult convergence, which makes active-learning-based semi-supervised recommendation labeling hard to apply directly.
Disclosure of Invention
Aiming at the high cost of acquiring image annotations in remote sensing image information extraction, the invention provides a semi-automatic labeling method for remote sensing images that, based on the idea of uncertainty sampling, combines a deep neural network probability density function with superpixel segmentation.
The technical scheme adopted by the invention is as follows:
a remote sensing image semi-automatic labeling method based on deep learning comprises the following steps:
based on the public remote sensing data set, pre-training the full convolutional neural network by using a cross entropy loss function;
predicting the remote sensing image to be marked by adopting a pre-trained full convolution neural network, and outputting the category attribute probability of the remote sensing image;
calculating an uncertainty measurement value of a remote sensing image pixel according to the category attribute probability of the remote sensing image, determining a threshold value of the uncertainty measurement value based on the uncertainty measurement value distribution, and extracting the uncertainty pixel according to the threshold value;
performing super-pixel segmentation on the remote sensing image, screening the super-pixels to be used as recommended labeling areas according to the minimum percentage of uncertain pixels in the super-pixels, and performing manual labeling on the recommended labeling areas;
and combining the manual labeling of the recommended labeling area and the full convolution neural network prediction result of the residual area of the remote sensing image to obtain a final labeling result.
Further, the pre-training of the full convolutional neural network with the cross entropy loss function comprises:
and calculating the cross entropy between the ground truth value label t and the output s of the full convolution neural network, wherein the ground truth value label t is a single-heat coding vector with a positive class and c-1 negative classes, and then adjusting network parameters to reduce the cross entropy through back propagation.
Further, the predicting the remote sensing image to be marked by adopting the pre-trained full convolution neural network and outputting the category attribute probability of the remote sensing image comprises the following steps:
class score s output by the last layer of the full convolution neural network is converted into probability at a pixel level through a softmax layer, class attribute probability is generated, namely each pixel sample outputs a K1 vector through the softmax layer, the K values are all between 0 and 1, and the sum of the K values is 1.
Further, the calculating the uncertainty metric value of the remote sensing image pixel according to the category attribute probability of the remote sensing image includes:
and calculating the difference between the maximum value and the second maximum value of the class attribute probability as an uncertainty measurement value of the remote sensing image pixel.
Further, the uncertainty metric values are ordered sequentially, and a percentile value is adopted as a threshold value of the uncertainty metric values.
Further, the full convolution neural network prediction result of the residual region of the remote sensing image is obtained by adopting the following steps:
and for the residual region of the remote sensing image, reading the class score vector output by the full convolution neural network by taking the pixel as a unit, wherein the class corresponding to the maximum numerical value is the class to which the full convolution neural network predicts the pixel.
A deep-learning-based semi-automatic labeling device for remote sensing images using the above method comprises the following modules:
the pre-training module is used for pre-training the full convolution neural network by using the cross entropy loss function based on the public remote sensing data set;
the category attribute probability calculation module is used for predicting the remote sensing image to be marked by adopting a pre-trained full convolution neural network and outputting the category attribute probability of the remote sensing image;
the uncertain pixel extraction module is used for calculating an uncertainty measurement value of a remote sensing image pixel according to the category attribute probability of the remote sensing image, determining a threshold value of the uncertainty measurement value based on the uncertainty measurement value distribution, and extracting the uncertain pixel according to the threshold value;
the recommended labeling area labeling module is used for carrying out super-pixel segmentation on the remote sensing image, screening the super-pixels to be used as recommended labeling areas according to the minimum percentage of uncertain pixels in the super-pixels, and carrying out manual labeling on the recommended labeling areas;
and the merging module is used for merging the manual annotation of the recommended annotation region and the full convolution neural network prediction result of the residual region of the remote sensing image to obtain a final annotation result.
Based on the idea of uncertainty sampling, the invention provides a semi-automatic remote sensing image labeling method combining a deep neural network probability density function with superpixel segmentation. The method and device spare manual annotators the heavy burden of drawing accurate boundaries and improve annotation efficiency, while also reducing the workload and subjective guesswork of manual annotation.
Drawings
Fig. 1 is a schematic flow chart of the semi-automatic remote sensing image labeling method, wherein: (1) pre-train the fully convolutional neural network; (2) compute class attribute probabilities; (3) compute uncertainty measures; (4) segment the image into superpixels and screen the uncertain ones; (5) merge the manual annotation with the deep network's predictions.
Detailed Description
The invention will be further described by way of specific embodiments with reference to the accompanying drawings.
1) Semantic segmentation network model training
The semantic segmentation network is a fully convolutional neural network; the whole network represents a single differentiable score function, mapping the raw image pixels at one end to per-pixel class scores at the other. For each pixel it typically outputs a class score vector s arranged along the depth dimension at that pixel's location, and these scores (probabilities) provide a highly reliable basis for measuring the uncertainty of semantic recognition.
The network is optimized with a cross entropy loss function: the cross entropy between the ground-truth label t and the network output s is computed, where the ground-truth label (ground truth) t is a one-hot encoded vector with one positive class and c-1 negative classes (c being the number of classes), and the network parameters are then adjusted through back-propagation to reduce the cross entropy.
The cross entropy function is defined as follows:

CE = -Σ_{i=1}^{K} t_i · log( e^{s_i} / Σ_{j=1}^{K} e^{s_j} )

where K is the number of classes, t_i is an element of the one-hot encoding vector of the ground-truth label, and s_i is the score of the corresponding class.
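The loss above can be made concrete with a minimal NumPy sketch. This is an illustration only, not the patent's implementation; the function names and toy scores are invented for the example.

```python
import numpy as np

def softmax(s):
    """Numerically stable softmax: raw class scores -> probabilities summing to 1."""
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(t, s):
    """Cross entropy between a one-hot ground-truth vector t (one positive
    class, K-1 negative classes) and a raw class-score vector s."""
    return -np.sum(t * np.log(softmax(s)))

# Toy example: K = 3 classes, the true class is index 1.
t = np.array([0.0, 1.0, 0.0])
s = np.array([1.0, 3.0, 0.5])
loss = cross_entropy(t, s)   # reduces to -log(softmax(s)[1]) for one-hot t
```

Because t is one-hot, the sum collapses to the negative log-probability of the true class, which back-propagation then drives toward zero.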
In principle, any semantic segmentation network can be used. Here a Unet based on an encoder-decoder architecture is chosen: it combines the decoder's high-level semantic feature maps with the encoder's low-level detail feature maps through skip connections, is widely used in image segmentation, and achieves good segmentation quality and robustness even when data is scarce.
2) Calculating class attribute probabilities
After the trained network is obtained, the remote sensing image is predicted: the class scores s output by the last layer are converted pixel by pixel into probabilities by a Softmax layer, generating the class attribute probabilities p. Each pixel sample is mapped by the Softmax layer to a K×1 vector whose K values all lie between 0 and 1 and sum to 1; this vector can therefore be read as a probability distribution, i.e. the probability that the pixel sample belongs to each class — the class attribute probability. The Softmax formula is as follows:
Softmax(s_i) = e^{s_i} / Σ_{j=1}^{K} e^{s_j}, i = 1, …, K

where Softmax(s_i) is the i-th component of the Softmax layer's output and s = (s_1, …, s_K) ∈ R^K is the score vector.
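The per-pixel conversion described above can be sketched in NumPy as follows — an illustrative sketch with a random toy score map standing in for real network output, not the patent's code:

```python
import numpy as np

def class_probabilities(scores):
    """Map an (H, W, K) array of class scores to per-pixel probabilities:
    each pixel's K-vector lies in (0, 1) and sums to 1 (per-pixel softmax)."""
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable exponentials
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 4, 6))   # toy 4x4 "image", K = 6 classes
p = class_probabilities(scores)       # every pixel's 6 values sum to 1
```

Subtracting the per-pixel maximum before exponentiating changes nothing mathematically but avoids overflow for large scores.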
3) Calculating uncertainty metric values
The difference BvSB (Best versus Second Best) between the two largest class attribute probabilities is used as the uncertainty measure. Suppose the estimated probability distribution for a pixel is denoted by p, where p_i is the attribute probability of class i, and suppose p attains its maximum for class m while class n has the second-highest value; the invention then uses the difference p_m - p_n as the measure of the current fully convolutional network's uncertainty about that pixel's class. In other words, the uncertainty score is the gap between the highest and second-highest class attribute probabilities, and a small gap means high uncertainty.
A percentile value serves as the threshold for uncertain pixels. For example, the BvSB probability difference at the 15th percentile can be chosen as the threshold: the per-pixel BvSB differences are sorted in ascending order, the value at the 15th percentile is taken as the threshold, and pixels whose difference falls at or below it are extracted as uncertain pixels.
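Steps (3) above — the BvSB margin and the percentile threshold — can be sketched together. The helper names and the Dirichlet toy distributions below are assumptions made for illustration, not the patent's implementation:

```python
import numpy as np

def bvsb(prob):
    """Best-versus-Second-Best margin per pixel: largest class probability
    minus the second largest. A small margin means high uncertainty."""
    srt = np.sort(prob, axis=-1)          # ascending; top two values are last
    return srt[..., -1] - srt[..., -2]

def uncertain_mask(prob, percentile=15):
    """Flag pixels whose BvSB margin is at or below the given percentile."""
    margin = bvsb(prob)
    return margin <= np.percentile(margin, percentile)

rng = np.random.default_rng(1)
prob = rng.dirichlet(np.ones(6), size=(32, 32))   # toy (32, 32, 6) probability maps
mask = uncertain_mask(prob, percentile=15)        # ~15% of pixels flagged
```

By construction the percentile rule flags roughly that fraction of all pixels, so the amount of work routed to the human annotator is controlled directly by the chosen percentile.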
4) Superpixel-based uncertainty region propagation
The remote sensing image is segmented into superpixels and the number of uncertain pixels inside each superpixel is counted. Superpixels containing only a few uncertain pixels are discarded as noise, and superpixels in which at least n% of the pixels are uncertain (the Min-Ratio-n rule) are selected as recommended labeling areas; with n = 30, for example, a superpixel is recommended when at least 30% of its pixels are uncertain.
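The Min-Ratio-n screening rule can be sketched with a toy example — the two-superpixel layout and function name below are invented for illustration and are not the patent's code:

```python
import numpy as np

def recommend_superpixels(segments, uncertain, min_ratio=0.30):
    """Return labels of superpixels in which at least `min_ratio` of the
    pixels are flagged uncertain (the Min-Ratio-n rule, here n = 30)."""
    recommended = []
    for label in np.unique(segments):
        inside = segments == label
        if uncertain[inside].mean() >= min_ratio:  # fraction of uncertain pixels
            recommended.append(label)
    return recommended

# Toy example: two superpixels; half of the right one is uncertain.
segments = np.zeros((4, 8), dtype=int)
segments[:, 4:] = 1                      # superpixel 1 = right half
uncertain = np.zeros((4, 8), dtype=bool)
uncertain[:, 6:] = True                  # 50% of superpixel 1 is uncertain
recs = recommend_superpixels(segments, uncertain, min_ratio=0.30)
# recs == [1]: only the right-half superpixel meets the 30% ratio
```

Superpixel 0 contains no uncertain pixels and is dropped as noise; superpixel 1's 50% ratio clears the 30% bar, so only it is recommended for manual labeling.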
5) Human-machine result superposition
Finally, the recommended labeling areas are annotated manually, superpixel by superpixel. For the remaining, certain area, the class score vector s output by the fully convolutional network is read pixel by pixel, and the class with the largest value is the class the network predicts for that pixel.
The manual labeling result and the network's predictions are then merged to obtain labeled data for the whole scene.
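This human-machine overlay admits a one-line NumPy formulation. The sketch below uses illustrative names and toy inputs (not the patent's implementation) to overlay manual labels on the network's argmax prediction:

```python
import numpy as np

def merge_labels(scores, manual, recommended_mask):
    """Overlay manual labels of the recommended (hand-annotated) regions on
    the network's argmax prediction for every other pixel.

    scores: (H, W, K) class scores; manual: (H, W) manual class labels;
    recommended_mask: (H, W) True where a human provided the label."""
    predicted = scores.argmax(axis=-1)    # class with the largest score
    return np.where(recommended_mask, manual, predicted)

# Toy inputs: the network predicts class 2 everywhere; a human labeled
# two pixels as class 0.
scores = np.zeros((2, 2, 3))
scores[..., 2] = 1.0
manual = np.zeros((2, 2), dtype=int)
mask = np.array([[True, False], [False, True]])
final = merge_labels(scores, manual, mask)   # [[0, 2], [2, 0]]
```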
Experimental data: the ISPRS Potsdam dataset is a public dataset containing 38 scenes of 6000 x 6000 remote sensing imagery at 5 cm resolution with corresponding pixel-level annotation data; the annotation categories are ground, house, vehicle, tree, grassland and others. The invention uses this dataset as the experimental data, slicing it into 512 x 512 patches with a 50-pixel overlap between adjacent slices, finally obtaining 6422 slices of size 512 x 512. The dataset is randomly divided into a training set, a validation set and a test set at a ratio of 8:2:1.
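The slicing scheme just described (512 x 512 tiles, 50-pixel overlap) can be reproduced with a short sketch. Note that placing the final row and column of tiles flush with the image border yields 13 x 13 = 169 tiles per 6000 x 6000 scene, i.e. 169 × 38 = 6422 slices, matching the count above; whether the patent handles borders exactly this way is an assumption, as is the function name:

```python
import numpy as np

def slice_image(image, tile=512, overlap=50):
    """Cut an image into tile x tile patches; adjacent patches share
    `overlap` pixels, and the last row/column of patches is placed flush
    with the border (so its overlap may exceed `overlap`)."""
    stride = tile - overlap
    h, w = image.shape[:2]
    ys = list(range(0, h - tile, stride)) + [h - tile]
    xs = list(range(0, w - tile, stride)) + [w - tile]
    return [image[y:y + tile, x:x + tile] for y in ys for x in xs]

scene = np.zeros((6000, 6000), dtype=np.uint8)   # one Potsdam-sized band
tiles = slice_image(scene)                       # 13 x 13 = 169 tiles
```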
The invention selects a Unet full convolution neural network, and the network training parameters are shown in table 1.
TABLE 1 Full convolutional neural network training parameters

Hyperparameter     Value
Learning rate      0.01
Batch size         4
Weight decay       0.0005
Optimizer          AdamW
Number of filters  64
Scheduler          poly
Loss type          Cross entropy
Training runs for 200 iterations on a Tesla V100 SXM2 32GB GPU, reaching a final overall accuracy of 0.875. The BvSB probability difference at the 20th percentile is chosen as the threshold for extracting uncertain pixels. Each 512 x 512 slice is segmented into 400 superpixels with the ERGC superpixel segmentation method, and superpixels whose fraction of uncertain pixels exceeds 30% are screened as recommended labeling areas.
Assuming each recommended superpixel is annotated with the class occupying the largest area inside it, and labeling the remaining area with the predictions of the fully convolutional network, pixel-level labels for the whole scene are obtained; comparing them with the ground truth and computing the overall accuracy gives the experimental results in Table 2.
TABLE 2 Comparison experiment of the recommended labeling algorithm

Recommended labeling strategy    Superpixels labeled    Overall accuracy
Randomly screened superpixels    113                    0.902
Method of the invention          113                    0.937
The results show that labeling 113 superpixels with the proposed method yields annotation data with 93.7% accuracy, 3.5 percentage points higher than labeling 113 randomly screened superpixels.
Based on the same inventive concept, another embodiment of the present invention provides a remote sensing image semiautomatic labeling device based on deep learning using the method of the present invention, which includes:
the pre-training module is used for pre-training the full convolution neural network by using the cross entropy loss function based on the public remote sensing data set;
the category attribute probability calculation module is used for predicting the remote sensing image to be marked by adopting a pre-trained full convolution neural network and outputting the category attribute probability of the remote sensing image;
the uncertain pixel extraction module is used for calculating an uncertainty measurement value of a remote sensing image pixel according to the category attribute probability of the remote sensing image, determining a threshold value of the uncertainty measurement value based on the uncertainty measurement value distribution, and extracting the uncertain pixel according to the threshold value;
the recommended labeling area labeling module is used for carrying out super-pixel segmentation on the remote sensing image, screening the super-pixels to be used as recommended labeling areas according to the minimum percentage of uncertain pixels in the super-pixels, and carrying out manual labeling on the recommended labeling areas;
and the merging module is used for merging the manual annotation of the recommended annotation region and the full convolution neural network prediction result of the residual region of the remote sensing image to obtain a final annotation result.
Based on the same inventive concept, another embodiment of the present invention provides an electronic device (computer, server, smart phone, etc.) comprising a memory and a processor, the memory storing a computer program configured to be executed by the processor, the computer program comprising instructions for performing the steps of the method of the invention.
Based on the same inventive concept, another embodiment of the present invention provides a computer readable storage medium (e.g., ROM/RAM, magnetic disk, optical disk) storing a computer program which, when executed by a computer, implements the steps of the inventive method.
The embodiments disclosed above are intended to help understand and practice the invention; those of ordinary skill in the art will appreciate that various substitutions, variations and modifications are possible without departing from its spirit and scope. The invention is not limited to what has been disclosed in the embodiments of the specification but is defined by the scope of the claims.

Claims (8)

1. A deep-learning-based remote sensing image semi-automatic labeling method, characterized in that it adopts a semi-automatic labeling approach combining a deep neural network probability density function with superpixel segmentation, and comprises the following steps:
based on the public remote sensing data set, pre-training the full convolutional neural network by using a cross entropy loss function;
predicting the remote sensing image to be marked by adopting a pre-trained full convolution neural network, and outputting the category attribute probability of the remote sensing image;
calculating an uncertainty measurement value of a remote sensing image pixel according to the category attribute probability of the remote sensing image, determining a threshold value of the uncertainty measurement value based on the uncertainty measurement value distribution, and extracting the uncertainty pixel according to the threshold value; the calculating the uncertainty measurement value of the remote sensing image pixel according to the category attribute probability of the remote sensing image comprises the following steps: calculating the difference between the maximum value and the second maximum value of the class attribute probability to be used as an uncertainty measurement value of the remote sensing image pixel; sequencing the uncertainty measurement values in sequence, and taking a percentile value as a threshold value of the uncertainty measurement values;
performing super-pixel segmentation on the remote sensing image, screening the super-pixels to be used as recommended labeling areas according to the minimum percentage of uncertain pixels in the super-pixels, and performing manual labeling on the recommended labeling areas;
and combining the manual labeling of the recommended labeling area and the full convolution neural network prediction result of the residual area of the remote sensing image to obtain a final labeling result.
2. The method of claim 1, wherein the pre-training the full convolutional neural network with a cross entropy loss function comprises:
and calculating the cross entropy between the ground truth value label t and the output s of the full convolution neural network, wherein the ground truth value label t is a single-heat coding vector with a positive class and c-1 negative classes, and then adjusting network parameters to reduce the cross entropy through back propagation.
3. The method of claim 2, wherein the cross entropy loss function is defined as follows:
CE = -Σ_{i=1}^{K} t_i · log( e^{s_i} / Σ_{j=1}^{K} e^{s_j} )

where K is the number of categories, t_i is an element of the one-hot encoding vector of the ground-truth label, and s_i is the score of the corresponding category.
4. The method according to claim 1, wherein predicting the remote sensing image to be annotated using the pre-trained full convolutional neural network, outputting a category attribute probability of the remote sensing image, comprises:
class score s output by the last layer of the full convolution neural network is converted into probability at a pixel level through a softmax layer, class attribute probability is generated, namely each pixel sample outputs a K1 vector through the softmax layer, the K values are all between 0 and 1, and the sum of the K values is 1.
5. The method of claim 1, wherein the step of obtaining a full convolutional neural network prediction of the remote sensing image residual region comprises:
and for the residual region of the remote sensing image, reading the class score vector output by the full convolution neural network by taking the pixel as a unit, wherein the class corresponding to the maximum numerical value is the class to which the full convolution neural network predicts the pixel.
6. A deep learning-based remote sensing image semiautomatic labeling device adopting the method of any one of claims 1 to 5, wherein the device adopts a remote sensing image semiautomatic labeling mode combining a deep neural network probability density function and super pixel segmentation for labeling, and the device comprises:
the pre-training module is used for pre-training the full convolution neural network by using the cross entropy loss function based on the public remote sensing data set;
the category attribute probability calculation module is used for predicting the remote sensing image to be marked by adopting a pre-trained full convolution neural network and outputting the category attribute probability of the remote sensing image;
the uncertain pixel extraction module is used for calculating an uncertainty measurement value of a remote sensing image pixel according to the category attribute probability of the remote sensing image, determining a threshold value of the uncertainty measurement value based on the uncertainty measurement value distribution, and extracting the uncertain pixel according to the threshold value; the calculating the uncertainty measurement value of the remote sensing image pixel according to the category attribute probability of the remote sensing image comprises the following steps: calculating the difference between the maximum value and the second maximum value of the class attribute probability to be used as an uncertainty measurement value of the remote sensing image pixel; sequencing the uncertainty measurement values in sequence, and taking a percentile value as a threshold value of the uncertainty measurement values;
the recommended labeling area labeling module is used for carrying out super-pixel segmentation on the remote sensing image, screening the super-pixels to be used as recommended labeling areas according to the minimum percentage of uncertain pixels in the super-pixels, and carrying out manual labeling on the recommended labeling areas;
and the merging module is used for merging the manual annotation of the recommended annotation region and the full convolution neural network prediction result of the residual region of the remote sensing image to obtain a final annotation result.
7. An electronic device comprising a memory and a processor, the memory storing a computer program configured to be executed by the processor, the computer program comprising instructions for performing the method of any of claims 1-5.
8. A computer readable storage medium storing a computer program which, when executed by a computer, implements the method of any one of claims 1 to 5.
CN202110275234.7A 2021-03-15 2021-03-15 Remote sensing image semiautomatic labeling method and device based on deep learning Active CN113111716B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110275234.7A CN113111716B (en) 2021-03-15 2021-03-15 Remote sensing image semiautomatic labeling method and device based on deep learning

Publications (2)

Publication Number Publication Date
CN113111716A CN113111716A (en) 2021-07-13
CN113111716B true CN113111716B (en) 2023-06-23

Family

ID=76711426

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110275234.7A Active CN113111716B (en) 2021-03-15 2021-03-15 Remote sensing image semiautomatic labeling method and device based on deep learning

Country Status (1)

Country Link
CN (1) CN113111716B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113487617A (en) * 2021-07-26 2021-10-08 推想医疗科技股份有限公司 Data processing method, data processing device, electronic equipment and storage medium
CN114299290B (en) * 2021-12-24 2023-04-07 腾晖科技建筑智能(深圳)有限公司 Bare soil identification method, device, equipment and computer readable storage medium
CN114648683B (en) * 2022-05-23 2022-09-13 天津所托瑞安汽车科技有限公司 Neural network performance improving method and device based on uncertainty analysis

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107153822A (en) * 2017-05-19 2017-09-12 北京航空航天大学 A kind of smart mask method of the semi-automatic image based on deep learning
CN109446369A (en) * 2018-09-28 2019-03-08 武汉中海庭数据技术有限公司 The exchange method and system of the semi-automatic mark of image
CN109670060A (en) * 2018-12-10 2019-04-23 北京航天泰坦科技股份有限公司 A kind of remote sensing image semi-automation mask method based on deep learning
CN110211087A (en) * 2019-01-28 2019-09-06 南通大学 The semi-automatic diabetic eyeground pathological changes mask method that can share
CN110533086A (en) * 2019-08-13 2019-12-03 天津大学 The semi-automatic mask method of image data
CN110689026A (en) * 2019-09-27 2020-01-14 联想(北京)有限公司 Method and device for labeling object in image and electronic equipment
CN110826555A (en) * 2019-10-12 2020-02-21 天津大学 Man-machine cooperative image target detection data semi-automatic labeling method
CN110910401A (en) * 2019-10-31 2020-03-24 五邑大学 Semi-automatic image segmentation data annotation method, electronic device and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11195313B2 (en) * 2016-10-14 2021-12-07 International Business Machines Corporation Cross-modality neural network transform for semi-automatic medical image annotation
CN110245716B (en) * 2019-06-20 2021-05-14 杭州睿琪软件有限公司 Sample labeling auditing method and device

Also Published As

Publication number Publication date
CN113111716A (en) 2021-07-13

Similar Documents

Publication Publication Date Title
CN108256562B (en) Salient target detection method and system based on weak supervision time-space cascade neural network
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
CN113111716B (en) Remote sensing image semiautomatic labeling method and device based on deep learning
CN110929622B (en) Video classification method, model training method, device, equipment and storage medium
CN111369581A (en) Image processing method, device, equipment and storage medium
CN110728295B (en) Semi-supervised landform classification model training and landform graph construction method
CN113298815A (en) Semi-supervised remote sensing image semantic segmentation method and device and computer equipment
CN112347995B (en) Unsupervised pedestrian re-identification method based on fusion of pixel and feature transfer
CN112966691A (en) Multi-scale text detection method and device based on semantic segmentation and electronic equipment
CN113780149A (en) Method for efficiently extracting building target of remote sensing image based on attention mechanism
CN109271957B (en) Face gender identification method and device
CN112836625A (en) Face living body detection method and device and electronic equipment
Guo et al. Using multi-scale and hierarchical deep convolutional features for 3D semantic classification of TLS point clouds
CN112990331A (en) Image processing method, electronic device, and storage medium
Zhou et al. Attention transfer network for nature image matting
CN116994140A (en) Cultivated land extraction method, device, equipment and medium based on remote sensing image
CN113920148B (en) Building boundary extraction method and equipment based on polygon and storage medium
CN112070181B (en) Image stream-based cooperative detection method and device and storage medium
CN114283285A (en) Cross consistency self-training remote sensing image semantic segmentation network training method and device
CN111126155B (en) Pedestrian re-identification method for generating countermeasure network based on semantic constraint
CN113920147B (en) Remote sensing image building extraction method and device based on deep learning
CN116246161A (en) Method and device for identifying target fine type of remote sensing image under guidance of domain knowledge
CN115410081A (en) Multi-scale aggregated cloud and cloud shadow identification method, system, equipment and storage medium
CN114743109A (en) Multi-model collaborative optimization high-resolution remote sensing image semi-supervised change detection method and system
CN114202694A (en) Small sample remote sensing scene image classification method based on manifold mixed interpolation and contrast learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant