CN113111716A - Remote sensing image semi-automatic labeling method and device based on deep learning - Google Patents


Info

Publication number
CN113111716A
Authority
CN
China
Prior art keywords
remote sensing
sensing image
labeling
neural network
convolution neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110275234.7A
Other languages
Chinese (zh)
Other versions
CN113111716B (en)
Inventor
赵江华
惠健
王学志
周园春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Computer Network Information Center of CAS
Original Assignee
Peking University
Computer Network Information Center of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University and Computer Network Information Center of CAS
Priority to CN202110275234.7A
Publication of CN113111716A
Application granted
Publication of CN113111716B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The invention relates to a semi-automatic labeling method and device for remote sensing images based on deep learning. The method comprises the following steps: pre-training a full convolution neural network with a cross-entropy loss function on a public remote sensing dataset; predicting the remote sensing image to be labeled with the pre-trained full convolution neural network and outputting class attribute probabilities; computing an uncertainty metric for each pixel of the remote sensing image from the class attribute probabilities and extracting uncertain pixels with a threshold; screening the superpixels of the segmented remote sensing image by a minimum percentage of uncertain pixels to obtain recommended labeling regions, and labeling the recommended regions manually; and merging the manual labels of the recommended regions with the full convolution neural network predictions for the remaining regions to obtain the final labeling result. The invention spares the annotator the heavy burden of manually tracing accurate boundaries, improves manual labeling efficiency, and reduces both the labeling workload and the subjectivity of manual labeling.

Description

Remote sensing image semi-automatic labeling method and device based on deep learning
Technical Field
The invention belongs to the field of remote sensing image labeling, relates to an efficient semi-automatic labeling and recommended-labeling strategy, and is mainly applied to remote sensing image information extraction, pixel-level labeling data acquisition, and similar applications.
Background
In recent years, automatic information extraction from remote sensing images has improved greatly, yet remote sensing imagery is too diverse to be covered by a single machine learning model. That is, deep learning models that perform well on some datasets may not perform well on others. For example, different regions, seasons, and landforms produce completely different image appearances, so the task still faces great challenges.
Information extraction from remote sensing images relies mainly on the color and shape of image targets, and collecting the required training data, especially the large volume of pixel-level annotation needed for semantic segmentation of large-scale, high-resolution imagery, involves a heavy workload. Considering the real-world demand for high-quality, efficient annotation, semi-automatic annotation with human-computer interaction is a more practical scheme than purely manual annotation; for complex boundaries that are difficult to trace by hand, automating part of the task can reduce the cost of pixel-level annotation.
Human-computer interactive annotation shortens annotation time by providing functions such as automatic panning, zooming, and superpixel segmentation. Superpixel segmentation groups the pixels of an image into perceptually meaningful regions: a superpixel consists of connected pixels that are similar in color or brightness. Superpixel-based methods incorporate spatial information into perceptually relevant regions; they effectively suppress the salt-and-pepper noise common in remote sensing images and, because the number of image elements is greatly reduced, markedly improve computational efficiency. Superpixel annotation lets a worker label a set of visually related pixels at once, reducing annotation time for background regions and objects with complex borders. To further ease the burden of manual annotation, a series of weakly supervised segmentation algorithms use weaker annotations, such as bounding boxes, scribbles, and image-level class labels, in place of pixel-level annotations. The main challenge in training a weakly supervised model is generating a pixel-level label map from incomplete information through self-supervision. The most popular approaches use discriminative objectives to identify the local image regions associated with each semantic category; while this strategy is useful for roughly locating objects, it usually attends only to the most discriminative part of an object rather than covering the entire object region, which results in poorer segmentation than fully supervised methods.
Active learning, which lets the learning model select its own training data, offers a solution to this need. Semi-supervised recommendation labeling algorithms built on active learning follow the pool-based paradigm common in active learning: the algorithm selects samples from an unlabeled pool for an expert to annotate, and the newly labeled data are added to the training set. The process is iterative, so the classifier is retrained each time new labeled samples are obtained. However, the deep learning models used for remote sensing image processing have an enormous number of parameters, train slowly, and converge with difficulty, which makes active-learning-based semi-supervised recommendation labeling hard to apply directly.
Disclosure of Invention
Aiming at the high cost of acquiring image annotations in remote sensing information extraction, the invention provides a semi-automatic remote sensing image labeling method that combines the probability output of a deep neural network with superpixel segmentation, based on the idea of uncertainty sampling.
The technical scheme adopted by the invention is as follows:
a remote sensing image semi-automatic labeling method based on deep learning comprises the following steps:
pre-training a full convolution neural network by utilizing a cross entropy loss function based on a public remote sensing data set;
predicting the remote sensing image to be labeled by adopting a pre-trained full convolution neural network, and outputting the class attribute probability of the remote sensing image;
calculating an uncertainty metric value of a pixel of the remote sensing image according to the category attribute probability of the remote sensing image, determining a threshold value of the uncertainty metric value based on the uncertainty metric value distribution, and extracting an uncertain pixel according to the threshold value;
performing superpixel segmentation on the remote sensing image, screening superpixels according to the minimum percentage of uncertain pixels in the superpixels to serve as recommended labeling areas, and performing manual labeling on the recommended labeling areas;
and merging the manual labeling of the recommended labeling area and the full convolution neural network prediction result of the residual area of the remote sensing image to obtain a final labeling result.
Further, the pre-training of the full convolution neural network using the cross-entropy loss function includes:
and calculating the cross entropy between a ground truth label t and the output s of the full convolution neural network, wherein the ground truth label t is a one-hot coded vector with a positive class and c-1 negative classes, and then adjusting network parameters to reduce the cross entropy through back propagation.
Further, the predicting the remote sensing image to be labeled by adopting the pre-trained full convolution neural network and outputting the category attribute probability of the remote sensing image comprises the following steps:
and (3) converting the class score s output by the last layer of the full convolution neural network into probability at the pixel level through a softmax layer, and generating class attribute probability, namely outputting a vector of K x 1 by each pixel sample through the softmax layer, wherein the K values are all between 0 and 1, and the sum of the K values is 1.
Further, the calculating the uncertainty metric of the remote sensing image pixel according to the class attribute probability of the remote sensing image includes:
and calculating the difference between the maximum value of the category attribute probability and the second maximum value as the uncertainty metric value of the remote sensing image pixel.
Further, the uncertainty metric values are sorted in ascending order, and a percentile value is used as the threshold of the uncertainty metric.
Further, obtaining a full convolution neural network prediction result of the remote sensing image residual region by adopting the following steps:
and for the residual area of the remote sensing image, reading the class score vector output by the full convolution neural network by taking the pixel as a unit, wherein the class corresponding to the maximum numerical value is the class of the pixel prediction of the full convolution neural network.
The remote sensing image semi-automatic labeling device based on deep learning, which adopts the method, comprises the following steps:
the pre-training module is used for pre-training the full convolution neural network by utilizing a cross entropy loss function based on the public remote sensing data set;
the category attribute probability calculation module is used for predicting the remote sensing image to be labeled by adopting a pre-trained full convolution neural network and outputting the category attribute probability of the remote sensing image;
the uncertain pixel extraction module is used for calculating an uncertainty metric value of a remote sensing image pixel according to the category attribute probability of the remote sensing image, determining a threshold value of the uncertainty metric value based on the uncertainty metric value distribution, and extracting the uncertain pixel according to the threshold value;
the recommendation labeling area labeling module is used for carrying out superpixel segmentation on the remote sensing image, screening superpixels according to the minimum percentage of uncertain pixels in the superpixels to serve as recommendation labeling areas, and carrying out manual labeling on the recommendation labeling areas;
and the merging module is used for merging the artificial labeling of the recommended labeling area and the full convolution neural network prediction result of the residual area of the remote sensing image to obtain a final labeling result.
The invention provides a semi-automatic remote sensing image labeling method based on uncertainty sampling, in which the probability output of a deep neural network is combined with superpixel segmentation. The invention spares the annotator the heavy burden of manually tracing accurate boundaries, improves manual labeling efficiency, and reduces both the labeling workload and the subjectivity of manual labeling.
Drawings
FIG. 1 is a schematic flow chart of the semi-automatic remote sensing image labeling method of the present invention, wherein: (1) pre-training a full convolution neural network; (2) calculating class attribute probabilities; (3) calculating uncertainty metric values; (4) performing superpixel segmentation on the image and screening uncertain superpixels; (5) merging the manual labeling and the deep network prediction results.
Detailed Description
The present invention will be further described with reference to the following detailed description and accompanying drawings.
1) Semantic segmentation network model training
The semantic segmentation network is a full convolution neural network. The whole network represents a differentiable score function: from raw image pixels at one end to class scores at the other. For each pixel it outputs a class score vector s (scores) arranged along the depth dimension at that pixel location; after normalization, this score (probability) provides a reliable measure of uncertainty for semantic recognition.
The neural network is optimized with a cross-entropy loss function: the cross entropy between a ground truth label t and the network output s is computed, where the ground truth label t is a one-hot encoded vector with one positive class and c-1 negative classes (c is the number of classes); network parameters are then adjusted through back propagation to reduce the cross entropy.
The cross entropy function is defined as follows:
CE = -\sum_{i=1}^{K} t_i \log(s_i)

where K is the number of classes, t_i is an element of the one-hot encoding vector of the ground truth label, and s_i is the score of the corresponding class.
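The loss described above can be sketched as follows; this is a minimal NumPy illustration under the assumption that s has already been softmax-normalized, and the function name is an invention of this sketch, not the patent's implementation:

```python
import numpy as np

def cross_entropy(t, s):
    """Cross entropy between a one-hot ground-truth vector t and
    softmax-normalized class scores s (both length K)."""
    # Only the positive class contributes, since t is one-hot.
    return -np.sum(t * np.log(s))
```

For a 3-class pixel with ground truth class 1 and predicted probabilities (0.1, 0.8, 0.1), the loss is -log(0.8) ≈ 0.223; during pre-training this quantity is averaged over all pixels and reduced by back propagation.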
In theory, any semantic segmentation network may be used. Here a Unet network based on an encoder-decoder architecture is selected: it combines the high-level semantic feature maps of the decoder with the low-level detail feature maps of the encoder through skip connections, is widely used in image segmentation, and achieves good segmentation quality and robustness even when the amount of data is small.
2) Computing class attribute probabilities
After the trained network is obtained, the remote sensing image is predicted: the class scores s output by the last layer are converted into pixel-level probabilities by a Softmax layer, producing the class attribute probability p. Each pixel sample thus outputs a K × 1 vector whose K values all lie between 0 and 1 and sum to 1, so it can be regarded as a probability distribution: the Softmax output vector gives the probability that the pixel sample belongs to each class, i.e. the class attribute probability. The Softmax formula is as follows:
\mathrm{Softmax}(s)_i = \frac{e^{s_i}}{\sum_{j=1}^{K} e^{s_j}}

where Softmax(s)_i is the i-th element of the Softmax layer output vector, i = 1, …, K, and s = (s_1, …, s_K) ∈ R^K.
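A minimal NumPy sketch of this conversion is given below; the max-shift is a standard numerical-stability trick and is not stated in the patent, and the function name is illustrative:

```python
import numpy as np

def softmax(s):
    """Convert a score vector (last axis = K classes) into class
    attribute probabilities that are positive and sum to 1."""
    e = np.exp(s - s.max(axis=-1, keepdims=True))  # shift for stability
    return e / e.sum(axis=-1, keepdims=True)
```

Applied to an H × W × K score map from the network's last layer, this yields the per-pixel class attribute probability p used in the following steps.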
3) Calculating an uncertainty metric
The difference BvSB (Best versus Second Best) between the two largest class attribute probabilities is used as the measure of uncertainty. Let p denote the estimated probability distribution of a pixel, with p_i the attribute probability of class i in p; if p attains its maximum at class m and its second-largest value at class n, the invention uses the difference p_m - p_n as the uncertainty measure of the current full convolution neural network for that pixel's class. That is, the uncertainty score is the difference between the highest and second-highest class attribute probabilities, and a small difference indicates high uncertainty.
A percentile value is used as the uncertain-pixel threshold. For example, with the 15th percentile of the BvSB probability difference selected as the threshold: the BvSB differences of all pixels are sorted in ascending order, the value at the 15th percentile is taken as the uncertain-pixel threshold, and pixels whose difference falls below this threshold are extracted as uncertain pixels.
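The BvSB margin and the percentile thresholding above can be sketched as follows; function names, array shapes, and the use of `np.percentile` are assumptions of this sketch rather than the patent's code:

```python
import numpy as np

def bvsb(prob):
    """Per-pixel BvSB margin: best minus second-best class probability.
    prob: (H, W, K) class attribute probabilities."""
    top2 = np.sort(prob, axis=-1)[..., -2:]   # two largest values per pixel
    return top2[..., 1] - top2[..., 0]

def uncertain_mask(prob, percentile=15):
    """Mark as uncertain the pixels whose BvSB margin falls at or below
    the given percentile of all margins in the image."""
    margin = bvsb(prob)
    thresh = np.percentile(margin, percentile)
    return margin <= thresh
```

A pixel with probabilities (0.4, 0.35, 0.25) has margin 0.05 and is far more likely to be flagged than one with (0.9, 0.05, 0.05), whose margin is 0.85.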
4) Superpixel based propagation of uncertain regions
The remote sensing image is segmented into superpixels, and for each superpixel the ratio of uncertain pixels it contains is computed. Superpixels with a particularly low ratio are treated as noise and discarded; only superpixels whose uncertain-pixel ratio is at least n% (Min-Ratio = n) are screened out as recommended labeling regions. For example, with n = 30, a superpixel is recommended for labeling only if at least 30% of its pixels are uncertain.
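The screening step can be sketched as below. This assumes a superpixel label map is already available (e.g. from any segmentation method); the function and parameter names are illustrative:

```python
import numpy as np

def recommended_superpixels(seg, uncertain, min_ratio=0.30):
    """Return the superpixel labels whose uncertain-pixel ratio
    is at least min_ratio (the Min-Ratio threshold).
    seg: (H, W) integer superpixel label map
    uncertain: (H, W) boolean mask of uncertain pixels"""
    recommended = []
    for lab in np.unique(seg):
        member = (seg == lab)
        ratio = uncertain[member].mean()   # fraction of uncertain pixels
        if ratio >= min_ratio:
            recommended.append(lab)
    return recommended
```

Superpixels below the threshold are implicitly discarded as noise; those returned form the recommended labeling region handed to the human annotator.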
5) Human-machine result superposition
Finally, the recommended labeling regions are labeled manually, superpixel by superpixel. For the remaining (certain) regions, the class score vector s output by the full convolution neural network is read pixel by pixel, and the class with the largest value is the class predicted by the network for that pixel.
The manual labeling result and the full convolution neural network prediction are merged to obtain the labeling data of the whole scene image.
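The merge can be sketched as an argmax prediction overwritten by the manual labels of the recommended superpixels; the data structures (a dict of manual class assignments per superpixel) are assumptions of this sketch:

```python
import numpy as np

def merge_labels(scores, seg, recommended, manual):
    """Merge manual superpixel labels with network predictions.
    scores: (H, W, K) class score map from the network
    seg: (H, W) superpixel label map
    recommended: superpixel labels that were manually annotated
    manual: dict mapping superpixel label -> manually assigned class"""
    out = scores.argmax(axis=-1)           # network prediction everywhere
    for lab in recommended:
        out[seg == lab] = manual[lab]      # overwrite with manual labels
    return out
```

The result is a full pixel-level label map for the scene: network predictions in the certain regions, human labels in the recommended ones.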
Experimental data: the ISPRS Potsdam dataset is a public dataset comprising 38 scenes 6000 x 6000 of 5cm resolution remote sensing images and corresponding pixel level labeling data, and the labeling categories include ground, house, car, tree, grassland and others. The invention adopts the data set as an experimental data set, slices are divided according to 512 × 512 sizes, the overlapping area between the slices is 50 pixels, and finally 6422 512 × 512 slices are obtained. According to the following steps of 8: 2: a ratio of 1 randomly divides the data set into a training set, a validation set, and a test set.
The invention selects a Unet full convolution neural network, and the network training parameters are shown in Table 1.
TABLE 1 Full convolution neural network training parameters

Hyperparameter      Value
Learning rate       0.01
Batch size          4
Weight decay        0.0005
Optimizer           AdamW
Number of filters   64
Scheduler           poly
Loss type           Cross entropy
Training ran on a Tesla V100 SXM2 32 GB GPU for 200 iterations, finally reaching an overall accuracy of 0.875. The 20th-percentile BvSB probability difference is selected as the threshold for extracting uncertain pixels. The ERGC superpixel segmentation method divides each 512 × 512 remote sensing slice into 400 superpixels, and superpixels whose uncertain-pixel percentage exceeds 30% are screened as recommended labeling regions.
Assuming each recommended region is labeled with the class occupying the largest area within its superpixel, and labeling the remaining regions with the full convolution neural network predictions, pixel-level labeling data for the whole scene is obtained; comparing it against the ground truth and computing overall accuracy gives the experimental results shown in Table 2.
TABLE 2 Comparison of recommended labeling results

Recommended labeling regions     Labeled superpixels   Overall accuracy
Randomly screened superpixels    113                   0.902
Method of the invention          113                   0.937
The experimental results show that with the proposed method, labeling 113 superpixels yields labeling data at 93.7% accuracy, 3.5 percentage points higher than labeling 113 randomly screened superpixels.
Based on the same inventive concept, another embodiment of the present invention provides a remote sensing image semi-automatic labeling device based on deep learning, which comprises:
the pre-training module is used for pre-training the full convolution neural network by utilizing a cross entropy loss function based on the public remote sensing data set;
the category attribute probability calculation module is used for predicting the remote sensing image to be labeled by adopting a pre-trained full convolution neural network and outputting the category attribute probability of the remote sensing image;
the uncertain pixel extraction module is used for calculating an uncertainty metric value of a remote sensing image pixel according to the category attribute probability of the remote sensing image, determining a threshold value of the uncertainty metric value based on the uncertainty metric value distribution, and extracting the uncertain pixel according to the threshold value;
the recommendation labeling area labeling module is used for carrying out superpixel segmentation on the remote sensing image, screening superpixels according to the minimum percentage of uncertain pixels in the superpixels to serve as recommendation labeling areas, and carrying out manual labeling on the recommendation labeling areas;
and the merging module is used for merging the artificial labeling of the recommended labeling area and the full convolution neural network prediction result of the residual area of the remote sensing image to obtain a final labeling result.
Based on the same inventive concept, another embodiment of the present invention provides an electronic device (computer, server, smartphone, etc.) comprising a memory and a processor, the memory storing a computer program configured to be executed by the processor, the computer program comprising instructions for performing the steps of the inventive method.
Based on the same inventive concept, another embodiment of the present invention provides a computer-readable storage medium (e.g., ROM/RAM, magnetic disk, optical disk) storing a computer program, which when executed by a computer, performs the steps of the inventive method.
The particular embodiments of the present invention disclosed above are illustrative only and are not intended to be limiting, since various alternatives, modifications, and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The invention should not be limited to the disclosure of the embodiments in the present specification, but the scope of the invention is defined by the appended claims.

Claims (10)

1. A remote sensing image semi-automatic labeling method based on deep learning is characterized by comprising the following steps:
pre-training a full convolution neural network by utilizing a cross entropy loss function based on a public remote sensing data set;
predicting the remote sensing image to be labeled by adopting a pre-trained full convolution neural network, and outputting the class attribute probability of the remote sensing image;
calculating an uncertainty metric value of a pixel of the remote sensing image according to the category attribute probability of the remote sensing image, determining a threshold value of the uncertainty metric value based on the uncertainty metric value distribution, and extracting an uncertain pixel according to the threshold value;
performing superpixel segmentation on the remote sensing image, screening superpixels according to the minimum percentage of uncertain pixels in the superpixels to serve as recommended labeling areas, and performing manual labeling on the recommended labeling areas;
and merging the manual labeling of the recommended labeling area and the full convolution neural network prediction result of the residual area of the remote sensing image to obtain a final labeling result.
2. The method of claim 1, wherein the pre-training of the full convolution neural network with the cross-entropy loss function comprises:
and calculating the cross entropy between a ground truth label t and the output s of the full convolution neural network, wherein the ground truth label t is a one-hot coded vector with a positive class and c-1 negative classes, and then adjusting network parameters to reduce the cross entropy through back propagation.
3. The method of claim 2, wherein the cross-entropy loss function is defined as follows:
CE = -\sum_{i=1}^{K} t_i \log(s_i)

where K is the number of classes, t_i is an element of the one-hot encoding vector of the ground truth label, and s_i is the score of the corresponding class.
4. The method according to claim 1, wherein the predicting the remote sensing image to be labeled by adopting the pre-trained full convolution neural network and outputting the class attribute probability of the remote sensing image comprises:
and (3) converting the class score s output by the last layer of the full convolution neural network into probability at the pixel level through a softmax layer, and generating class attribute probability, namely outputting a vector of K x 1 by each pixel sample through the softmax layer, wherein the K values are all between 0 and 1, and the sum of the K values is 1.
5. The method of claim 1, wherein calculating the uncertainty metric for the remote sensing image pixels based on the class attribute probabilities of the remote sensing image comprises:
and calculating the difference between the maximum value of the category attribute probability and the second maximum value as the uncertainty metric value of the remote sensing image pixel.
6. The method of claim 1, wherein the uncertainty metric values are ordered in order, with percentile values being used as thresholds for the uncertainty metric values.
7. The method according to claim 1, wherein the full convolution neural network prediction of the residual region of the remote sensing image is obtained by the following steps:
and for the residual area of the remote sensing image, reading the class score vector output by the full convolution neural network by taking the pixel as a unit, wherein the class corresponding to the maximum numerical value is the class of the pixel prediction of the full convolution neural network.
8. The remote sensing image semi-automatic labeling device based on deep learning, which adopts the method of any one of claims 1 to 7, is characterized by comprising the following steps:
the pre-training module is used for pre-training the full convolution neural network by utilizing a cross entropy loss function based on the public remote sensing data set;
the category attribute probability calculation module is used for predicting the remote sensing image to be labeled by adopting a pre-trained full convolution neural network and outputting the category attribute probability of the remote sensing image;
the uncertain pixel extraction module is used for calculating an uncertainty metric value of a remote sensing image pixel according to the category attribute probability of the remote sensing image, determining a threshold value of the uncertainty metric value based on the uncertainty metric value distribution, and extracting the uncertain pixel according to the threshold value;
the recommendation labeling area labeling module is used for carrying out superpixel segmentation on the remote sensing image, screening superpixels according to the minimum percentage of uncertain pixels in the superpixels to serve as recommendation labeling areas, and carrying out manual labeling on the recommendation labeling areas;
and the merging module is used for merging the artificial labeling of the recommended labeling area and the full convolution neural network prediction result of the residual area of the remote sensing image to obtain a final labeling result.
9. An electronic apparatus, comprising a memory and a processor, the memory storing a computer program configured to be executed by the processor, the computer program comprising instructions for performing the method of any of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a computer, implements the method of any one of claims 1 to 7.
CN202110275234.7A 2021-03-15 2021-03-15 Remote sensing image semiautomatic labeling method and device based on deep learning Active CN113111716B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110275234.7A CN113111716B (en) 2021-03-15 2021-03-15 Remote sensing image semiautomatic labeling method and device based on deep learning

Publications (2)

Publication Number Publication Date
CN113111716A true CN113111716A (en) 2021-07-13
CN113111716B CN113111716B (en) 2023-06-23

Family

ID=76711426

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110275234.7A Active CN113111716B (en) 2021-03-15 2021-03-15 Remote sensing image semiautomatic labeling method and device based on deep learning

Country Status (1)

Country Link
CN (1) CN113111716B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113487617A (en) * 2021-07-26 2021-10-08 推想医疗科技股份有限公司 Data processing method, data processing device, electronic equipment and storage medium
CN114299290A (en) * 2021-12-24 2022-04-08 腾晖科技建筑智能(深圳)有限公司 Bare soil identification method, device, equipment and computer readable storage medium
CN114648683A (en) * 2022-05-23 2022-06-21 天津所托瑞安汽车科技有限公司 Neural network performance improving method and device based on uncertainty analysis

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107153822A (en) * 2017-05-19 2017-09-12 北京航空航天大学 A kind of smart mask method of the semi-automatic image based on deep learning
US20180108124A1 (en) * 2016-10-14 2018-04-19 International Business Machines Corporation Cross-modality neural network transform for semi-automatic medical image annotation
CN109446369A (en) * 2018-09-28 2019-03-08 武汉中海庭数据技术有限公司 The exchange method and system of the semi-automatic mark of image
CN109670060A (en) * 2018-12-10 2019-04-23 北京航天泰坦科技股份有限公司 A kind of remote sensing image semi-automation mask method based on deep learning
CN110211087A (en) * 2019-01-28 2019-09-06 南通大学 The semi-automatic diabetic eyeground pathological changes mask method that can share
CN110245716A (en) * 2019-06-20 2019-09-17 杭州睿琪软件有限公司 Sample labeling auditing method and device
CN110533086A (en) * 2019-08-13 2019-12-03 天津大学 The semi-automatic mask method of image data
CN110689026A (en) * 2019-09-27 2020-01-14 联想(北京)有限公司 Method and device for labeling object in image and electronic equipment
CN110826555A (en) * 2019-10-12 2020-02-21 天津大学 Man-machine cooperative image target detection data semi-automatic labeling method
CN110910401A (en) * 2019-10-31 2020-03-24 五邑大学 Semi-automatic image segmentation data annotation method, electronic device and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Lü Bo: "Research on a Semi-automatic Annotation Method for Machine Learning Datasets", Information and Communications Technology and Policy *
Chen Zhe et al.: "A Web Application-based Semi-automatic Medical Image Annotation System", Computer Applications and Software *


Also Published As

Publication number Publication date
CN113111716B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN107766894B (en) Remote sensing image natural language generation method based on attention mechanism and deep learning
CN110119728B (en) Remote sensing image cloud detection method based on multi-scale fusion semantic segmentation network
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
CN110428428B (en) Image semantic segmentation method, electronic equipment and readable storage medium
CN112734775B (en) Image labeling, image semantic segmentation and model training methods and devices
CN113111716B (en) Remote sensing image semiautomatic labeling method and device based on deep learning
CN110929622A (en) Video classification method, model training method, device, equipment and storage medium
CN114092832B (en) High-resolution remote sensing image classification method based on parallel hybrid convolutional network
CN111369581A (en) Image processing method, device, equipment and storage medium
CN113298815A (en) Semi-supervised remote sensing image semantic segmentation method and device and computer equipment
CN107506792B (en) Semi-supervised salient object detection method
CN112017192B (en) Glandular cell image segmentation method and glandular cell image segmentation system based on improved U-Net network
WO2020077940A1 (en) Method and device for automatic identification of labels of image
Guo et al. Using multi-scale and hierarchical deep convolutional features for 3D semantic classification of TLS point clouds
CN110827312A (en) Learning method based on cooperative visual attention neural network
CN113269224A (en) Scene image classification method, system and storage medium
CN115410081A (en) Multi-scale aggregated cloud and cloud shadow identification method, system, equipment and storage medium
CN111126155B (en) Pedestrian re-identification method for generating countermeasure network based on semantic constraint
CN114283285A (en) Cross consistency self-training remote sensing image semantic segmentation network training method and device
Zhou et al. Attention transfer network for nature image matting
CN114419406A (en) Image change detection method, training method, device and computer equipment
CN116994140A (en) Cultivated land extraction method, device, equipment and medium based on remote sensing image
CN114387270B (en) Image processing method, image processing device, computer equipment and storage medium
CN111008570B (en) Video understanding method based on compression-excitation pseudo-three-dimensional network
CN112070181B (en) Image stream-based cooperative detection method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant