CN113409322A - Deep learning training sample enhancement method for semantic segmentation of remote sensing image - Google Patents


Info

Publication number
CN113409322A
CN113409322A
Authority
CN
China
Prior art keywords
cutting
image
window
sliding
vertical direction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110676098.2A
Other languages
Chinese (zh)
Other versions
CN113409322B (en)
Inventor
曾喆
吕波涛
谭文霞
刘善伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Petroleum East China
Original Assignee
China University of Petroleum East China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Petroleum East China filed Critical China University of Petroleum East China
Priority to CN202110676098.2A priority Critical patent/CN113409322B/en
Publication of CN113409322A publication Critical patent/CN113409322A/en
Application granted granted Critical
Publication of CN113409322B publication Critical patent/CN113409322B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/10 — Segmentation; Edge detection
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10032 — Satellite or aerial image; Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a deep learning training sample enhancement method for semantic segmentation of remote sensing images, relating to the technical field of remote sensing image semantic segmentation. Three cropping modes are designed for remote sensing images so that the original data and the labelled data are used to the greatest extent and waste of the original and labelled images during cropping is avoided: the original image and the labelled image are slide-cropped in the horizontal direction, in the vertical direction, and in both directions simultaneously, each with the optimal degree of overlap, and the resulting slices are then mirrored, flipped and rotated. The sample data are thus fully used, data loss during sliding cropping is reduced, and a large amount of training data is finally obtained, which benefits subsequent deep learning training for pixel-level semantic segmentation and provides it with a large data base.

Description

Deep learning training sample enhancement method for semantic segmentation of remote sensing image
Technical Field
The invention relates to the technical field of semantic segmentation of remote sensing images, in particular to a data enhancement processing method of deep learning training samples for semantic segmentation of remote sensing images.
Background
Semantic segmentation has long been an important topic in remote sensing and an important means of understanding remote sensing images. With the development of deep learning, the combination of semantic segmentation with deep neural networks has achieved remarkable results. However, training a deep network usually requires a large number of training samples to meet the model's demand for feature representation capability. When the training set is small, the trained network model cannot fit the abstract features of the training set well and therefore performs poorly. Data enhancement of training samples expands the training set by some method while keeping the original characteristics of the samples, improving generalization and strengthening the contextual relations of the remote sensing image. Even with little training data, data enhancement can increase the amount of training data or enrich some of its features, making the network model more robust. The key question is therefore how to enhance a given set of training samples.
Existing remote sensing image data enhancement methods include flipping, rotation, zooming and random cropping. In the semi-supervised fully convolutional network method for semantic segmentation of high-resolution remote sensing images of Gunn Lei et al., data are enhanced by rotation, left-right flipping and up-down flipping. In the neural-network-based remote sensing image semantic segmentation method of Wandne et al., sliding cropping is performed at a fixed step, followed by flipping and rotation to enhance the data. The U-Net-based high-resolution remote sensing image semantic segmentation method of Sujian et al. cuts each original image and label in the training set into 5 sub-images, then flips the image blocks (horizontally, vertically and along the diagonal), adjusts colour (brightness, contrast and saturation) and adds noise to enhance the data. However, the enhancement effect of such rotation and flipping operations is limited: in terms of magnitude, the expansion obtained by rotating and flipping the training samples is not significant, and more base images are still needed when a large number of training samples is required. Moreover, the training samples lack correlation, so the contextual relations among them cannot be expressed accurately, mixed pixels cannot be classified accurately during semantic segmentation, and the trained network cannot fit the abstract features of the training set well.
Disclosure of Invention
In order to solve the problems that, when the training set is small, the training samples lack contextual connection and the trained network model cannot fit the abstract features of the training set well, the invention provides a deep learning training sample enhancement method for pixel-level semantic segmentation.
The invention adopts the following technical scheme:
a deep learning training sample enhancement method for remote sensing image semantic segmentation comprises the following steps:
step 1: acquiring a high-resolution satellite remote sensing image of the target area and preprocessing it to obtain a preprocessed image, the preprocessing comprising radiometric calibration and atmospheric correction; manually labelling the preprocessed image by visual interpretation and converting the labels into a raster image, which serves as the labelled image;
step 2: selecting the dimension of the input layer of the training model as the cropping window, calculating the optimal overlap of the cropping window in the horizontal direction, and then slide-cropping the preprocessed image and the labelled image in the horizontal direction, with the cropping window overlapping at the optimal degree horizontally and not overlapping vertically, to obtain a plurality of first slices;
step 3: calculating the optimal overlap of the cropping window in the vertical direction, and then slide-cropping the preprocessed image and the labelled image in the vertical direction, with the cropping window overlapping at the optimal degree vertically and not overlapping horizontally, to obtain a plurality of second slices;
step 4: slide-cropping the preprocessed image and the labelled image with the optimal horizontal overlap obtained in step 2 and the optimal vertical overlap obtained in step 3 applied simultaneously, to obtain a plurality of third slices;
step 5: flipping, rotating and mirroring the first, second and third slices, and gathering the first, second and third slices together with their flipped, rotated and mirrored copies as the final training samples.
Preferably, step 2 specifically comprises:
step 2.1: randomly selecting a sliding overlap C within the reference overlap range; the offset of each slide of the cropping window is then
N_K = W − [W × C]
where N_K is the offset of each slide of the cropping window and W is the dimension of the cropping window;
step 2.2: judging whether the K-th slide of the cropping window in each row exceeds the image boundary, ensuring that the K-th crop satisfies
K′ = (K − 1) × N_K + W ≤ X
where K′ is the pixel position of the right edge of the cropping window at the K-th slide and X is the number of pixels of the image in the horizontal direction;
three cases arise:
① W < X − K′: performing the next sliding crop of the row, and judging the (K + 1)-th slide in the same way;
② K′ = X: the optimal overlap of the cropping window in the horizontal direction is C′_K = C;
③ 0 < X − K′ < W: reallocating the remaining pixels of the row to the horizontal offset of the cropping window, obtaining a new horizontal offset N′_K such that C′_K remains within the reference overlap range of the cropping window:
(equation rendered as an image in the original — Figure BDA0003120638130000021 — defining N′_K)
C′_K = (W − N′_K)/W ∈ α
where α is the reference overlap range; when the condition rendered as an image in the original (Figure BDA0003120638130000031) holds, C′_K is the optimal overlap of the cropping window in the horizontal direction;
step 2.3: slide-cropping the preprocessed image and the labelled image in the horizontal direction with the optimal overlap C′_K, with the cropping window not overlapping in the vertical direction, to obtain a plurality of first slices.
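The horizontal sliding logic of steps 2.1 and 2.2 can be sketched as follows. This is a minimal reconstruction under assumptions: the helper name `slide_positions` is hypothetical, and since the reallocation formula for N′_K is rendered only as an image in the original, case ③ is realized here by the common practice of shifting one final window back so that it ends exactly at the image border.

```python
import math

def slide_positions(extent, window, c):
    """Left-edge pixel positions of the cropping window along one axis.

    extent : number of pixels along the axis (X for horizontal sliding)
    window : dimension W of the cropping window
    c      : sliding overlap C chosen from the reference overlap range
    """
    n = window - math.floor(window * c)          # offset N_K = W - [W x C]
    positions = list(range(0, extent - window + 1, n))
    # case 3: 0 < X - K' < W -- leftover pixels remain at the border, so add
    # one extra window ending exactly at the boundary (assumed realization of
    # the patent's pixel reallocation, whose exact formula is an image)
    if positions[-1] + window < extent:
        positions.append(extent - window)
    return positions

# e.g. X = 1000, W = 256, C = 0.25 -> offset of 192 pixels per slide
print(slide_positions(1000, 256, 0.25))
```

With X = 1000, W = 256 and C = 0.25 the window slides by 192 pixels and the last window is clamped to end at pixel 1000, so no image data at the right border is wasted.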
Preferably, step 3 specifically comprises:
step 3.1: randomly selecting a sliding overlap C within the reference overlap range; the offset of each slide of the cropping window is then
N_L = W − [W × C]
where N_L is the offset of each slide of the cropping window;
step 3.2: judging whether the L-th slide of the cropping window in each column exceeds the image boundary, ensuring that the L-th crop satisfies
L′ = (L − 1) × N_L + W ≤ Y
where L′ is the pixel position of the lower edge of the cropping window at the L-th slide and Y is the number of pixels of the image in the vertical direction;
three cases arise:
① W < Y − L′: performing the next sliding crop of the column, and judging the (L + 1)-th slide in the same way;
② L′ = Y: the optimal overlap of the cropping window in the vertical direction is C′_L = C;
③ 0 < Y − L′ < W: reallocating the remaining pixels of the column to the vertical offset of the cropping window, obtaining a new vertical offset N′_L such that C′_L remains within the reference overlap range of the cropping window:
(equation rendered as an image in the original — Figure BDA0003120638130000032 — defining N′_L)
C′_L = (W − N′_L)/W ∈ α
when the condition rendered as an image in the original (Figure BDA0003120638130000033) holds, C′_L is the optimal overlap of the cropping window in the vertical direction;
step 3.3: slide-cropping the preprocessed image and the labelled image in the vertical direction with the optimal overlap C′_L, with the cropping window not overlapping in the horizontal direction, to obtain a plurality of second slices.
The invention has the following beneficial effects:
The invention provides a training sample enhancement method for pixel-level semantic segmentation based on satellite images. It defines an optimal overlap range between adjacent cropping windows and designs three cropping modes, so that the original data and the labelled data are used to the greatest extent and waste of the original image and the labelled data during cropping is avoided. The original image and the labelled image are slide-cropped in the horizontal direction, in the vertical direction, and in both directions simultaneously, each with the optimal overlap, and the resulting slices are then mirrored, flipped and rotated. The sample data are thus fully used, data loss during sliding cropping is reduced, and a large amount of training data is finally obtained, which benefits subsequent deep learning training for pixel-level semantic segmentation and provides it with a large data base.
Drawings
FIG. 1 is a flow chart of the steps performed in the present invention.
FIG. 2 is a schematic diagram of horizontal and vertical sliding cropping and their main parameters in the present invention.
Fig. 3 is a schematic view of slices with sliding overlap in the horizontal direction.
Fig. 4 is a schematic view of slices with sliding overlap in the vertical direction.
Fig. 5 is a schematic view of slices with sliding overlap in both the horizontal and vertical directions.
Detailed Description
The following description of the embodiments of the present invention will be made with reference to the accompanying drawings:
With reference to FIGS. 1 to 5, a deep learning training sample enhancement method for semantic segmentation of remote sensing images comprises the following steps:
Step 1: acquire a high-resolution satellite remote sensing image of the target area and preprocess it to obtain a preprocessed image, the preprocessing comprising radiometric calibration and atmospheric correction; manually label the preprocessed image by visual interpretation and convert the labels into a raster image, which serves as the labelled image.
Step 2: select the dimension of the input layer of the training model as the cropping window, calculate the optimal overlap of the cropping window in the horizontal direction, and then slide-crop the preprocessed image and the labelled image in the horizontal direction, with the cropping window overlapping at the optimal degree horizontally and not overlapping vertically, to obtain a plurality of first slices.
Step 2 specifically comprises the following steps:
Step 2.1: randomly select a sliding overlap C within the reference overlap range; the offset of each slide of the cropping window is then
N_K = W − [W × C]
where N_K is the offset of each slide of the cropping window and W is the dimension of the cropping window.
Step 2.2: judge whether the K-th slide of the cropping window in each row exceeds the image boundary, ensuring that the K-th crop satisfies
K′ = (K − 1) × N_K + W ≤ X
where K′ is the pixel position of the right edge of the cropping window at the K-th slide and X is the number of pixels of the image in the horizontal direction.
Three cases arise:
① W < X − K′: perform the next sliding crop of the row, and judge the (K + 1)-th slide in the same way;
② K′ = X: the optimal overlap of the cropping window in the horizontal direction is C′_K = C;
③ 0 < X − K′ < W: reallocate the remaining pixels of the row to the horizontal offset of the cropping window, obtaining a new horizontal offset N′_K such that C′_K remains within the reference overlap range of the cropping window:
(equation rendered as an image in the original — Figure BDA0003120638130000051 — defining N′_K)
C′_K = (W − N′_K)/W ∈ α
where α is the reference overlap range; when the condition rendered as an image in the original (Figure BDA0003120638130000052) holds, C′_K is the optimal overlap of the cropping window in the horizontal direction.
Step 2.3: slide-crop the preprocessed image and the labelled image in the horizontal direction with the optimal overlap C′_K, with the cropping window not overlapping in the vertical direction, to obtain a plurality of first slices, as shown in fig. 3.
Step 3: calculate the optimal overlap of the cropping window in the vertical direction, and then slide-crop the preprocessed image and the labelled image in the vertical direction, with the cropping window overlapping at the optimal degree vertically and not overlapping horizontally, to obtain a plurality of second slices.
Step 3 specifically comprises the following steps:
Step 3.1: randomly select a sliding overlap C within the reference overlap range; the offset of each slide of the cropping window is then
N_L = W − [W × C]
where N_L is the offset of each slide of the cropping window.
Step 3.2: judge whether the L-th slide of the cropping window in each column exceeds the image boundary, ensuring that the L-th crop satisfies
L′ = (L − 1) × N_L + W ≤ Y
where L′ is the pixel position of the lower edge of the cropping window at the L-th slide and Y is the number of pixels of the image in the vertical direction.
Three cases arise:
① W < Y − L′: perform the next sliding crop of the column, and judge the (L + 1)-th slide in the same way;
② L′ = Y: the optimal overlap of the cropping window in the vertical direction is C′_L = C;
③ 0 < Y − L′ < W: reallocate the remaining pixels of the column to the vertical offset of the cropping window, obtaining a new vertical offset N′_L such that C′_L remains within the reference overlap range of the cropping window:
(equation rendered as an image in the original — Figure BDA0003120638130000053 — defining N′_L)
C′_L = (W − N′_L)/W ∈ α
when the condition rendered as an image in the original (Figure BDA0003120638130000054) holds, C′_L is the optimal overlap of the cropping window in the vertical direction.
Step 3.3: slide-crop the preprocessed image and the labelled image in the vertical direction with the optimal overlap C′_L, with the cropping window not overlapping in the horizontal direction, to obtain a plurality of second slices, as shown in fig. 4.
Step 4: slide-crop the preprocessed image and the labelled image with the optimal horizontal overlap of the cropping window obtained in step 2 and the optimal vertical overlap obtained in step 3 applied simultaneously, to obtain a plurality of third slices, as shown in fig. 5.
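Step 4 can be sketched as below. The function name is hypothetical, `c_h` and `c_v` stand for the optimal overlaps found in steps 2 and 3, and the clamping of the last window is an assumed realization of the pixel reallocation, whose exact formula is only an image in the original. The image and its raster label are cropped at identical positions:

```python
import math
import numpy as np

def third_slices(image, label, window, c_h, c_v):
    """Slide the cropping window with horizontal overlap c_h AND vertical
    overlap c_v, cropping image and label at identical positions."""
    def positions(extent, c):
        n = window - math.floor(window * c)   # per-slide offset N = W - [W x C]
        p = list(range(0, extent - window + 1, n))
        if p[-1] + window < extent:           # leftover pixels at the border:
            p.append(extent - window)         # clamp the last window (assumed)
        return p
    ys = positions(image.shape[0], c_v)       # top edges (vertical axis)
    xs = positions(image.shape[1], c_h)       # left edges (horizontal axis)
    return [(image[y:y + window, x:x + window],
             label[y:y + window, x:x + window]) for y in ys for x in xs]

img = np.zeros((600, 600, 3), dtype=np.uint8)  # toy image and raster label
lab = np.zeros((600, 600), dtype=np.uint8)
slices = third_slices(img, lab, 256, 0.25, 0.25)
```

Because the window overlaps in both directions, neighbouring third slices share border pixels, which preserves contextual relations across slices rather than discarding them at tile boundaries.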
Step 5: flip, rotate and mirror the first, second and third slices, and gather the first, second and third slices together with their flipped, rotated and mirrored copies as the final training samples.
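The expansion of step 5 can be sketched as below; this is a minimal illustration in which `augment` is a hypothetical name and the exact transform set — up-down flip, left-right mirror, and the three 90° rotations — is one plausible reading of "flip, rotate and mirror". Applying the identical transform to a slice and its label keeps the pixel-level annotation aligned:

```python
import numpy as np

def augment(slices):
    """Expand each (image, label) pair with flipped, rotated and mirrored
    copies, applying the same transform to both so labels stay aligned."""
    out = []
    for img, lab in slices:
        out.append((img, lab))                           # original slice
        out.append((np.flipud(img), np.flipud(lab)))     # up-down flip
        out.append((np.fliplr(img), np.fliplr(lab)))     # left-right mirror
        for k in (1, 2, 3):                              # 90/180/270 deg rotations
            out.append((np.rot90(img, k), np.rot90(lab, k)))
    return out

# one 256x256 slice becomes six training samples
pair = (np.ones((256, 256, 3), np.uint8), np.ones((256, 256), np.uint8))
samples = augment([pair])
```

Combined with the three overlapping cropping modes, this sixfold expansion is what yields the large training set the method targets.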
It is to be understood that the above description is not intended to limit the present invention, and the present invention is not limited to the above examples, and those skilled in the art may make modifications, alterations, additions or substitutions within the spirit and scope of the present invention.

Claims (3)

1. A deep learning training sample enhancement method for semantic segmentation of remote sensing images, characterized by comprising the following steps:
step 1: acquiring a high-resolution satellite remote sensing image of the target area and preprocessing it to obtain a preprocessed image, the preprocessing comprising radiometric calibration and atmospheric correction; manually labelling the preprocessed image by visual interpretation and converting the labels into a raster image, which serves as the labelled image;
step 2: selecting the dimension of the input layer of the training model as the cropping window, calculating the optimal overlap of the cropping window in the horizontal direction, and then slide-cropping the preprocessed image and the labelled image in the horizontal direction, with the cropping window overlapping at the optimal degree horizontally and not overlapping vertically, to obtain a plurality of first slices;
step 3: calculating the optimal overlap of the cropping window in the vertical direction, and then slide-cropping the preprocessed image and the labelled image in the vertical direction, with the cropping window overlapping at the optimal degree vertically and not overlapping horizontally, to obtain a plurality of second slices;
step 4: slide-cropping the preprocessed image and the labelled image with the optimal horizontal overlap obtained in step 2 and the optimal vertical overlap obtained in step 3 applied simultaneously, to obtain a plurality of third slices;
step 5: flipping, rotating and mirroring the first, second and third slices, and gathering the first, second and third slices together with their flipped, rotated and mirrored copies as the final training samples.
2. The deep learning training sample enhancement method for semantic segmentation of remote sensing images according to claim 1, characterized in that step 2 specifically comprises:
step 2.1: randomly selecting a sliding overlap C within the reference overlap range; the offset of each slide of the cropping window is then
N_K = W − [W × C]
where N_K is the offset of each slide of the cropping window and W is the dimension of the cropping window;
step 2.2: judging whether the K-th slide of the cropping window in each row exceeds the image boundary, ensuring that the K-th crop satisfies
K′ = (K − 1) × N_K + W ≤ X
where K′ is the pixel position of the right edge of the cropping window at the K-th slide and X is the number of pixels of the image in the horizontal direction;
three cases arise:
① W < X − K′: performing the next sliding crop of the row, and judging the (K + 1)-th slide in the same way;
② K′ = X: the optimal overlap of the cropping window in the horizontal direction is C′_K = C;
③ 0 < X − K′ < W: reallocating the remaining pixels of the row to the horizontal offset of the cropping window, obtaining a new horizontal offset N′_K such that C′_K remains within the reference overlap range of the cropping window:
(equation rendered as an image in the original — Figure FDA0003120638120000021 — defining N′_K)
C′_K = (W − N′_K)/W ∈ α
where α is the reference overlap range; when the condition rendered as an image in the original (Figure FDA0003120638120000022) holds, C′_K is the optimal overlap of the cropping window in the horizontal direction;
step 2.3: slide-cropping the preprocessed image and the labelled image in the horizontal direction with the optimal overlap C′_K, with the cropping window not overlapping in the vertical direction, to obtain a plurality of first slices.
3. The deep learning training sample enhancement method for semantic segmentation of remote sensing images according to claim 1, characterized in that step 3 specifically comprises:
step 3.1: randomly selecting a sliding overlap C within the reference overlap range; the offset of each slide of the cropping window is then
N_L = W − [W × C]
where N_L is the offset of each slide of the cropping window;
step 3.2: judging whether the L-th slide of the cropping window in each column exceeds the image boundary, ensuring that the L-th crop satisfies
L′ = (L − 1) × N_L + W ≤ Y
where L′ is the pixel position of the lower edge of the cropping window at the L-th slide and Y is the number of pixels of the image in the vertical direction;
three cases arise:
① W < Y − L′: performing the next sliding crop of the column, and judging the (L + 1)-th slide in the same way;
② L′ = Y: the optimal overlap of the cropping window in the vertical direction is C′_L = C;
③ 0 < Y − L′ < W: reallocating the remaining pixels of the column to the vertical offset of the cropping window, obtaining a new vertical offset N′_L such that C′_L remains within the reference overlap range of the cropping window:
(equation rendered as an image in the original — Figure FDA0003120638120000023 — defining N′_L)
C′_L = (W − N′_L)/W ∈ α
when the condition rendered as an image in the original (Figure FDA0003120638120000024) holds, C′_L is the optimal overlap of the cropping window in the vertical direction;
step 3.3: slide-cropping the preprocessed image and the labelled image in the vertical direction with the optimal overlap C′_L, with the cropping window not overlapping in the horizontal direction, to obtain a plurality of second slices.
CN202110676098.2A 2021-06-18 2021-06-18 Deep learning training sample enhancement method for semantic segmentation of remote sensing image Active CN113409322B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110676098.2A CN113409322B (en) 2021-06-18 2021-06-18 Deep learning training sample enhancement method for semantic segmentation of remote sensing image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110676098.2A CN113409322B (en) 2021-06-18 2021-06-18 Deep learning training sample enhancement method for semantic segmentation of remote sensing image

Publications (2)

Publication Number Publication Date
CN113409322A true CN113409322A (en) 2021-09-17
CN113409322B CN113409322B (en) 2022-03-08

Family

ID=77685104

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110676098.2A Active CN113409322B (en) 2021-06-18 2021-06-18 Deep learning training sample enhancement method for semantic segmentation of remote sensing image

Country Status (1)

Country Link
CN (1) CN113409322B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107610141A (en) * 2017-09-05 2018-01-19 华南理工大学 A kind of remote sensing images semantic segmentation method based on deep learning
CN110287932A (en) * 2019-07-02 2019-09-27 中国科学院遥感与数字地球研究所 Route denial information extraction based on the segmentation of deep learning image, semantic
CN110942439A (en) * 2019-12-05 2020-03-31 北京华恒盛世科技有限公司 Image restoration and enhancement method based on satellite picture defects
WO2020091096A1 (en) * 2018-10-30 2020-05-07 Samsung Electronics Co., Ltd. Methods for determining a planes, methods for displaying augmented reality display information and corresponding devices
CN111210435A (en) * 2019-12-24 2020-05-29 重庆邮电大学 Image semantic segmentation method based on local and global feature enhancement module
CN112183360A (en) * 2020-09-29 2021-01-05 上海交通大学 Lightweight semantic segmentation method for high-resolution remote sensing image
CN112508977A (en) * 2020-12-29 2021-03-16 天津科技大学 Deep learning-based semantic segmentation method for automatic driving scene
CN112507338A (en) * 2020-12-21 2021-03-16 华南理工大学 Improved system based on deep learning semantic segmentation algorithm
CN112580645A (en) * 2020-12-08 2021-03-30 江苏海洋大学 Unet semantic segmentation method based on convolutional sparse coding
CN112766155A (en) * 2021-01-19 2021-05-07 山东华宇航天空间技术有限公司 Deep learning-based mariculture area extraction method


Also Published As

Publication number Publication date
CN113409322B (en) 2022-03-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant