CN113158855A - Remote sensing image auxiliary processing method and device based on online learning - Google Patents


Info

Publication number
CN113158855A
CN113158855A (application CN202110378930.0A)
Authority
CN
China
Prior art keywords
data set
auxiliary
image
network model
labeling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110378930.0A
Other languages
Chinese (zh)
Other versions
CN113158855B (en)
Inventor
熊文轩
王磊
周海军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Guoxing Aerospace Technology Co ltd
Original Assignee
Chengdu Guoxing Aerospace Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Guoxing Aerospace Technology Co ltd filed Critical Chengdu Guoxing Aerospace Technology Co ltd
Priority claimed: CN202110378930.0A
Publication of CN113158855A
Application granted
Publication of CN113158855B
Current legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/13: Satellite images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806: Task transfer initiation or dispatching
    • G06F9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/176: Urban or other man-made structures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the application disclose a remote sensing image auxiliary processing method and device based on online learning. The method comprises the following steps: acquiring a remote sensing image data set to be labeled, wherein the data set comprises a plurality of sub data sets and each sub data set comprises a plurality of tile images; determining an auxiliary labeling algorithm network model and a training algorithm according to the current labeling task type and the remote sensing image data set, and initializing the model; labeling each sub data set with the model, updating the model parameters with the training algorithm after each sub data set is labeled, and processing the remaining unlabeled sub data sets with the iteratively updated model; and collecting the labeling results of all sub data sets as the image labeling result of the remote sensing image data set. The scheme of the embodiments efficiently performs auxiliary labeling on multiple types of remote sensing images, reducing labor and time costs.

Description

Remote sensing image auxiliary processing method and device based on online learning
Technical Field
The present disclosure relates to an online remote sensing image annotation processing technology, and more particularly, to a remote sensing image auxiliary processing method and apparatus based on online learning.
Background
In recent years, with the development of remote sensing technology, both the quantity and quality of remote sensing images have improved remarkably. Such data have great application value in smart city construction, road planning, disaster prediction and other fields, and generally need to be labeled before they can be fully exploited.
At present, the mainstream auxiliary labeling approach for remote sensing images combines offline model prediction with manual correction: a model is first trained on labeled samples using machine learning, the trained offline model is then applied to the images to be labeled, and the result is finally obtained through manual correction.
This approach has the following problems:
1. Remote sensors differ in their imaging, so data from different sensors follow different distributions. The data used to train a model usually come from a few fixed sensors, and the resulting network model performs well only on images that are independent and identically distributed with the training data, which greatly limits the algorithm's auxiliary labeling effect on large batches of multi-type remote sensing data.
2. The model is updated slowly: a large amount of early-stage labeled data must be supplied before an effective model is obtained, and a long time elapses from manual data labeling to the model training stage.
Disclosure of Invention
The embodiments of the application provide a remote sensing image auxiliary processing method and device based on online learning, which can efficiently perform auxiliary labeling on multiple types of remote sensing images, thereby reducing labor and time costs.
The embodiment of the application provides a remote sensing image auxiliary processing method based on online learning, which can comprise the following steps:
acquiring a remote sensing image data set to be marked, wherein the remote sensing image data set comprises a plurality of subdata sets, and each subdata set comprises a plurality of tile images;
acquiring the labeling task type of the current labeling task, determining an auxiliary labeling algorithm network model and a preset training algorithm according to the labeling task type and the remote sensing image data set to be labeled, and initializing the auxiliary labeling algorithm network model;
processing each sub data set through the auxiliary labeling algorithm network model to output a labeling result, performing parameter updating on the auxiliary labeling algorithm network model by using the training algorithm after each sub data set is processed so as to realize iterative parameter updating of the auxiliary labeling algorithm network model, and processing the unmarked sub data set by using the auxiliary labeling algorithm network model after the iterative parameter updating;
and acquiring the labeling result corresponding to each subdata set as the image labeling result of the remote sensing image data set.
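The claimed steps can be illustrated with a minimal sketch of the processing loop; the function names and the toy model in the usage example below are illustrative assumptions, not components fixed by the disclosure.

```python
def assisted_processing(sub_datasets, init_model, label_fn, train_fn):
    """Sketch of the claimed method: initialize the model, label each sub
    data set in turn, update the model parameters after each one, and
    collect every labeling result."""
    model = init_model()
    results = []
    for sub in sub_datasets:
        labels = [label_fn(model, tile) for tile in sub]  # auxiliary labeling
        results.append(labels)
        model = train_fn(model, sub, labels)  # online parameter update
    return results
```

With a toy numeric "model" (a single weight), later sub data sets are labeled by a model already updated on earlier ones, which is the core of the online-learning scheme.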
In an exemplary embodiment of the present application, the set of remotely sensed image data may include: a first task data set; the sub data set may include: a primary subtask data set;
the acquiring the remote sensing image data set to be labeled may include:
the method comprises the steps of obtaining a first task data set to be marked, dividing the first task data set to be marked into a plurality of first-level subtask data sets according to a first preset rule, wherein the first task data set to be marked comprises a plurality of tile images.
In an exemplary embodiment of the present application, the obtaining of the first task data set to be annotated may include:
the method comprises the steps of obtaining an image data set P to be marked, splitting the image data set P to be marked into a plurality of first task data sets according to data information of tile images in the image data set P, wherein the data information comprises the type of equipment for collecting images.
In an exemplary embodiment of the present application, the determining an auxiliary annotation algorithm network model and a preset training algorithm according to the annotation task type and the remote sensing image data set to be annotated may include:
and determining an auxiliary labeling algorithm network model and a preset training algorithm according to the labeling task type and the first task data set to be labeled.
In an exemplary embodiment of the present application, the sub data set may include: a secondary subtask data set; after the dividing the first task data set to be labeled into a plurality of primary subtask data sets, the method may further include:
and dividing each primary subtask data into a plurality of secondary subtask data sets according to a second preset rule, and taking each secondary subtask data set as a sub data set input into the auxiliary labeling algorithm network model.
In an exemplary embodiment of the present application, processing each sub data set through the auxiliary labeling algorithm network model to output a labeling result, and updating parameters of the auxiliary labeling algorithm network model by using the training algorithm after each sub data set is processed may include:
inputting any subdata set into an initialized auxiliary labeling algorithm network model to obtain a labeled subdata set, and updating parameters of the auxiliary labeling algorithm network model by using the training algorithm according to a tile image in the labeled subdata set;
for any subdata set which is not input into the auxiliary labeling algorithm network model in the plurality of subdata sets, executing the following operations until the plurality of subdata sets are all input into the auxiliary labeling algorithm network model:
inputting the updated auxiliary labeling algorithm network model to obtain a labeled subdata set; and updating parameters of the auxiliary labeling algorithm network model by utilizing the training algorithm according to the tile image in the labeled subdata set.
In an exemplary embodiment of the present application, the method may further include: and acquiring an image annotation result output after each subdata set is input into the auxiliary annotation algorithm network model, correcting annotation information in the image annotation result, and taking the corrected image annotation result as a final image annotation result corresponding to the subdata set.
In an exemplary embodiment of the present application, the performing, according to the tile image in the labeled subdata set, parameter updating on the network model of the auxiliary labeling algorithm by using the training algorithm may include:
acquiring a training sample pair set according to the sub data set and the final image labeling result corresponding to the sub data set;
inputting the training sample pair set into the auxiliary labeling algorithm network model, and updating parameters of the auxiliary labeling algorithm network model by adopting the training algorithm.
In an exemplary embodiment of the present application, the obtaining a training sample pair set according to the sub data set and the final image labeling result corresponding to the sub data set may include:
determining the position of a seed point on each tile image in the subdata set according to the predetermined distance and number of the seed points;
setting seed points at those positions, and setting the attribute of each seed point according to the final image labeling result corresponding to the sub data set; the attributes include a correct attribute and an error attribute; the correct attribute means that the label at the seed point's pixel position, in the image output after its tile image was input into the auxiliary labeling algorithm network model, was not revised during manual correction; the error attribute means that the label at that pixel position was revised;
setting, according to the attribute of each seed point, a parameter corresponding to the probability that the seed point is subsequently selected;
determining the seed points to be collected according to the preset number of sample pairs to be sampled and the selection-probability parameter of each seed point;
determining the geographic coordinates of the seed points to be acquired according to the pixel positions of the tile images of the seed points to be acquired, the geographic space range included by the tile images of the seed points to be acquired, and the size of the tile images of the seed points to be acquired;
and determining, according to the geographic coordinates of the seed points to be acquired, the tile image size, the remote sensing image data set to be labeled and all the obtained final image labeling results, a target tile image containing each seed point to be acquired together with its correspondingly labeled image as a group of training sample pairs; all such groups together form the training sample pair set.
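The seed-point steps above can be sketched as follows; the bounding-box convention and the sampling weights are assumptions for illustration, not values fixed by the disclosure.

```python
import random

def pixel_to_geo(px, py, tile_bounds, tile_size):
    """Map a seed point's pixel position within a tile to geographic
    coordinates, given the tile's geospatial extent (xmin, ymin, xmax,
    ymax) and its size in pixels (width, height); pixel rows are assumed
    to grow downward from the tile's top edge."""
    xmin, ymin, xmax, ymax = tile_bounds
    width, height = tile_size
    gx = xmin + (xmax - xmin) * px / width
    gy = ymax - (ymax - ymin) * py / height
    return gx, gy

def sample_seed_points(seeds, k, w_correct=1.0, w_error=3.0):
    """Weighted sampling of seed points: points whose label was manually
    revised (error attribute) get a higher selection probability, so the
    next training round concentrates on the model's mistakes. The weight
    values here are illustrative."""
    weights = [w_error if s["revised"] else w_correct for s in seeds]
    return random.choices(seeds, weights=weights, k=k)
```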
In an exemplary embodiment of the present application, the initializing the network model of the auxiliary annotation algorithm may include:
and initializing the parameters of the network model of the auxiliary labeling algorithm by adopting random parameters or previously accumulated model parameters.
The embodiment of the application further provides a remote sensing image auxiliary processing device based on online learning, which may include a processor and a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are executed by the processor, the remote sensing image auxiliary processing method based on online learning described in any one of the above is implemented.
Compared with the related art, the embodiments of the application may comprise: acquiring a remote sensing image data set to be labeled, wherein the data set comprises a plurality of sub data sets and each sub data set comprises a plurality of tile images; acquiring the labeling task type of the current labeling task, determining an auxiliary labeling algorithm network model and a preset training algorithm according to the task type and the data set, and initializing the model; labeling each sub data set through the model, updating the model parameters with the training algorithm after each sub data set is processed, and processing the unlabeled sub data sets with the iteratively updated model; and acquiring the labeling result of each sub data set as the image labeling result of the remote sensing image data set. Through this scheme, auxiliary labeling of multi-type remote sensing images is realized efficiently, reducing labor and time costs.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. Other advantages of the present application may be realized and attained by the instrumentalities and combinations particularly pointed out in the specification and the drawings.
Drawings
The accompanying drawings are included to provide an understanding of the present disclosure and are incorporated in and constitute a part of this specification; they illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure, not to limit it.
FIG. 1 is a flowchart of a remote sensing image auxiliary processing method based on online learning according to an embodiment of the present application;
FIG. 2 is a diagram illustrating a secondary set of subtasks according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a comparison between an image to be annotated and an annotated image in a secondary subtask set according to an embodiment of the present application;
FIG. 4 is a flowchart of the online learning-based remote sensing image auxiliary processing method applied to secondary subtask data sets according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating an image comparison between labeled tile images and manually corrected tile images of a secondary subtask set according to an embodiment of the present application;
FIG. 6 is a flowchart of a method for updating parameters of a network model of an auxiliary labeling algorithm according to an input last secondary subtask data set and a training algorithm according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a seed point distribution according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a seed point attribute according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a sample pair according to an embodiment of the present application;
fig. 10 is a block diagram of a remote sensing image auxiliary processing device based on online learning according to an embodiment of the present application.
Detailed Description
The present application describes embodiments, but the description is illustrative rather than limiting and it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible within the scope of the embodiments described herein. Although many possible combinations of features are shown in the drawings and discussed in the detailed description, many other combinations of the disclosed features are possible. Any feature or element of any embodiment may be used in combination with or instead of any other feature or element in any other embodiment, unless expressly limited otherwise.
The present application includes and contemplates combinations of features and elements known to those of ordinary skill in the art. The embodiments, features and elements disclosed in this application may also be combined with any conventional features or elements to form a unique inventive concept as defined by the claims. Any feature or element of any embodiment may also be combined with features or elements from other inventive aspects to form yet another unique inventive aspect, as defined by the claims. Thus, it should be understood that any of the features shown and/or discussed in this application may be implemented alone or in any suitable combination. Accordingly, the embodiments are not limited except as by the appended claims and their equivalents. Furthermore, various modifications and changes may be made within the scope of the appended claims.
Further, in describing representative embodiments, the specification may have presented the method and/or process as a particular sequence of steps. However, to the extent that the method or process does not rely on the particular order of steps set forth herein, the method or process should not be limited to the particular sequence of steps described. Other orders of steps are possible as will be understood by those of ordinary skill in the art. Therefore, the particular order of the steps set forth in the specification should not be construed as limitations on the claims. Further, the claims directed to the method and/or process should not be limited to the performance of their steps in the order written, and one skilled in the art can readily appreciate that the sequences may be varied and still remain within the spirit and scope of the embodiments of the present application.
The embodiment of the application provides a remote sensing image auxiliary processing method based on online learning, as shown in fig. 1, the method can include steps S101-S104:
s101, obtaining a remote sensing image data set to be marked, wherein the remote sensing image data set comprises a plurality of sub data sets, and each sub data set comprises a plurality of tile images;
s102, acquiring an annotation task type of a current annotation task, determining an auxiliary annotation algorithm network model and a preset training algorithm according to the annotation task type and the remote sensing image data set to be annotated, and initializing the auxiliary annotation algorithm network model;
s103, processing each sub data set through the auxiliary labeling algorithm network model to output a labeling result, performing parameter updating on the auxiliary labeling algorithm network model by using the training algorithm after each sub data set is processed so as to realize iterative parameter updating on the auxiliary labeling algorithm network model, and processing the unmarked sub data set by using the auxiliary labeling algorithm network model after the iterative parameter updating;
s104, obtaining the labeling result corresponding to each subdata set as the image labeling result of the remote sensing image data set.
The exemplary embodiments of the application provide a method and device for auxiliary processing of remote sensing images based on online learning, which can efficiently assist in labeling multiple types of remote sensing images, thereby supporting manual annotation of remote sensing image data and improving labeling efficiency.
In an exemplary embodiment of the present application, the set of remotely sensed image data may include: a first task data set; the sub data set may include: a primary subtask data set;
the acquiring the remote sensing image data set to be labeled may include:
the method comprises the steps of obtaining a first task data set to be marked, dividing the first task data set to be marked into a plurality of first-level subtask data sets according to a first preset rule, wherein the first task data set to be marked comprises a plurality of tile images.
In an exemplary embodiment of the present application, each of the plurality of primary subtask data sets may be used as a sub data set and input into the auxiliary labeling algorithm network model in turn to obtain an image labeling result; after the labeling result of the most recently processed sub data set is obtained, the model parameters are updated once according to that result, so that the model parameters are updated iteratively. Dividing the first task data set into a plurality of primary subtask data sets increases the number of parameter updates, and each update uses the labeling result of the most recently processed sub data set. This improves the image recognition accuracy of the model, reduces the time cost of subsequent manual correction, and improves labeling efficiency.
In an exemplary embodiment of the present application, the obtaining of the first task data set to be annotated may include:
the method comprises the steps of obtaining an image data set P to be marked, splitting the image data set P to be marked into a plurality of first task data sets according to data information of tile images in the image data set P, wherein the data information comprises the type of equipment for collecting images.
In an exemplary embodiment of the present application, a remote sensing data set P to be labeled may first be obtained and split into a plurality of first task data sets, e.g. A (a first task data set), B (a second task data set), and so on, according to the type of device that acquired the images; one of these, for example A, is then selected for labeling. Splitting the data set P by acquisition device type makes it easier to choose a more suitable auxiliary labeling algorithm network model for each split first task data set, which improves the accuracy of the labeled images and reduces the manual correction cost.
In an exemplary embodiment of the present application, the determining an auxiliary annotation algorithm network model and a preset training algorithm according to the annotation task type and the remote sensing image data set to be annotated may include:
and determining an auxiliary labeling algorithm network model and a preset training algorithm according to the labeling task type and the first task data set to be labeled.
In the exemplary embodiment of the application, the remote sensing data set P can be split into a plurality of first task data sets, and the auxiliary labeling algorithm network model and the preset training algorithm are determined according to the labeling task type and the first task data set. This makes it convenient to select the most suitable model, improves the accuracy of the labeling results the model outputs, reduces manual correction, and improves labeling efficiency.
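Model and training-algorithm selection can be sketched as a registry lookup keyed by the labeling task type and the characteristics of the first task data set; every key and name in the table below is a hypothetical example, since the disclosure does not enumerate concrete models.

```python
# Hypothetical registry: (labeling task type, device type) -> (model, trainer).
# All entries are illustrative assumptions, not part of the disclosure.
MODEL_REGISTRY = {
    ("building_segmentation", "optical"): ("unet_buildings", "sgd_finetune"),
    ("road_extraction", "optical"): ("dlinknet_roads", "sgd_finetune"),
    ("water_segmentation", "sar"): ("unet_water_sar", "adam_finetune"),
}

def select_model(task_type, device_type):
    """Choose an auxiliary labeling network model and a preset training
    algorithm for the current task; fall back to a generic pair when the
    combination is not registered."""
    return MODEL_REGISTRY.get((task_type, device_type),
                              ("generic_segmenter", "sgd_finetune"))
```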
In an exemplary embodiment of the present application, the sub data set may include: a secondary subtask data set; after the dividing the first task data set to be labeled into a plurality of primary subtask data sets, the method may further include:
and dividing each primary subtask data into a plurality of secondary subtask data sets according to a second preset rule, and taking each secondary subtask data set as a sub data set input into the auxiliary labeling algorithm network model.
In an exemplary embodiment of the present application, each primary subtask data set may be further divided into a plurality of secondary subtask data sets, and each secondary subtask data set is used as a sub data set, and the sub data sets are sequentially input to the auxiliary annotation algorithm network model to obtain an image annotation result, and after an image annotation result corresponding to a last processed sub data set is obtained, the auxiliary annotation algorithm network model is subjected to parameter updating once according to the image annotation result corresponding to the sub data set, so as to implement parameter updating of the auxiliary annotation algorithm network model in an iterative manner. After the tile images in all the secondary subtask data in the current primary subtask data set are processed by using the auxiliary labeling algorithm network model, the updated auxiliary labeling algorithm network model is used for processing the secondary subtask data in the next primary subtask data set to be processed.
In the exemplary embodiment of the application, a secondary subtask data set may be further split into tertiary subtask data sets, and those in turn into quaternary, quinary, and further levels; the splitting depth can be chosen according to different accuracy requirements. Alternatively, a first task data set such as A may be divided only into primary subtask data sets, with the parameters of the auxiliary labeling algorithm network model C updated according to the images in those primary subtask data sets, so that the images in all primary subtask data sets of one first task data set receive auxiliary labeling.
In the exemplary embodiment of the present application, the following may illustrate a case where the sub data set includes a secondary sub task data set.
In exemplary embodiments of the present application, each first task data set (e.g., A) may be partitioned into different primary subtask data sets (or primary subtask sets) A1, A2, ..., An, i.e. A = {A1, A2, ..., An}, where each primary subtask set represents an area to be labeled.
In an exemplary embodiment of the present application, after one primary subtask set A1 has been executed, the parameter-updated auxiliary labeling algorithm network model C executes the next primary subtask set A2, so that the labeling task continues with a model already refined on A1 and the subsequent manual correction cost is reduced.
In an exemplary embodiment of the present application, a primary subtask set to be labeled, e.g. A1, is selected and split further: according to the initialization parameters, the primary set to be labeled is divided into non-overlapping image regions serving as secondary subtask data sets (or secondary subtask sets), as shown in FIG. 2, obtaining A1 = {a1, a2, ..., an}.
In an exemplary embodiment of the present application, the number of tile images contained in each secondary subtask set, e.g. a1, can be preset or user-defined. The smaller this number, the more often the auxiliary labeling algorithm network model C is updated, so the higher its image recognition accuracy and the higher the accuracy of the output auxiliary labeling information.
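The division of a primary subtask set into fixed-size secondary subtask sets can be sketched as follows; the chunking rule (consecutive, non-overlapping slices) is one simple way to realize the non-overlapping image regions described above.

```python
def split_into_secondary_sets(primary_tiles, tiles_per_set):
    """Divide a primary subtask set into non-overlapping secondary subtask
    sets of at most tiles_per_set tiles; a smaller set size means more
    frequent model updates at the cost of more training rounds."""
    return [primary_tiles[i:i + tiles_per_set]
            for i in range(0, len(primary_tiles), tiles_per_set)]
```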
In an exemplary embodiment of the present application, the auxiliary annotation algorithm network model C may be used to sequentially process the image areas of a1 to obtain the labeled image of a1 (as shown in FIG. 3, the upper part is a1 to be labeled and the lower part is the labeled a1, where the white building areas in a1 are recognized by the auxiliary annotation algorithm network model C through image recognition).
In an exemplary embodiment of the present application, as shown in FIG. 4, the online-learning-based remote sensing image auxiliary processing method applied to the secondary subtask data sets may include steps S201 to S206:
S201, obtaining an image data set P to be labeled, and splitting the image data set P into a plurality of first task data sets according to the type of image acquisition equipment used for the tile images in the image data set P; dividing a first task data set to be labeled into multi-level subtask data sets, wherein the multi-level subtask data sets include: a primary subtask data set, and a plurality of secondary subtask data sets divided from each primary subtask data set;
S202, determining an auxiliary annotation algorithm network model and a preset training algorithm according to the first task data set to be annotated and the annotation task type, and initializing the auxiliary annotation algorithm network model;
S203, randomly selecting a primary subtask data set to be labeled from the first task data set, sequentially inputting each secondary subtask data set corresponding to the selected primary subtask data set into the initialized auxiliary labeling algorithm network model for processing, and outputting a labeled image corresponding to each secondary subtask data set; iteratively updating the parameters of the auxiliary labeling algorithm network model according to the labeled images of the previously processed secondary subtask data sets, and processing the next unlabeled secondary subtask data set with the auxiliary labeling algorithm network model after the iterative parameter update;
S204, after the processing of the previous primary subtask data set is finished, sequentially processing each secondary subtask data set in the next unlabeled primary subtask data set of the first task data set with the correspondingly updated auxiliary labeling algorithm network model;
S205, repeating steps S203 to S204 until the secondary subtask data sets in all the primary subtask data sets of the first task data set are processed;
S206, obtaining the output result produced after each secondary subtask data set is input into the auxiliary annotation algorithm network model, and taking the output results as the image auxiliary annotation result corresponding to the first task data set.
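For illustration only, the control flow of steps S201 to S206 may be sketched as follows. The `Model` class, its `annotate` and `update` methods, and the `annotate_first_task` function are hypothetical stand-ins for the auxiliary labeling algorithm network model C and the preset training algorithm; a real implementation would use a segmentation network (e.g., UNet++) and an actual training step.

```python
class Model:
    """Hypothetical stand-in for the auxiliary labeling algorithm network model C."""

    def __init__(self):
        self.version = 0  # counts iterative parameter updates

    def annotate(self, tiles):
        # Placeholder: a real model would predict a mask per tile image.
        return [f"mask(v{self.version})" for _ in tiles]

    def update(self, sample_pairs):
        # Placeholder for one iterative parameter update (the training algorithm).
        self.version += 1


def annotate_first_task(primary_sets, model):
    """primary_sets: a first task data set, given as a list of primary subtask
    sets, each a list of secondary subtask sets, each a list of tile images."""
    results = {}
    for p_idx, primary in enumerate(primary_sets):       # S203-S205: each primary set
        for s_idx, secondary in enumerate(primary):      # each secondary set in order
            masks = model.annotate(secondary)            # auxiliary labeling output
            corrected = masks                            # hook for manual correction
            results[(p_idx, s_idx)] = corrected
            # update parameters before processing the next unlabeled secondary set
            model.update(list(zip(secondary, corrected)))
    return results                                       # S206: collected results
```

Under this scheme the model that processes secondary set a2 has already been updated on a1, which is the point of the online-learning loop.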
In an exemplary embodiment of the present application, the annotation task types include buildings, water bodies, and the like. The initialization parameters are determined according to the determined first task data set A and the annotation task type, and determining the initialization parameters may include, but is not limited to: selecting an auxiliary annotation algorithm network model C (e.g., UNet++, DeepLabV3, etc.) and setting the seed point distance and number.
In an exemplary embodiment of the present application, different auxiliary annotation algorithm network models C may be selected for different subtask data sets.
In an exemplary embodiment of the present application, the initializing the network model of the auxiliary annotation algorithm may include:
and initializing the parameters of the network model of the auxiliary labeling algorithm by adopting random parameters or previously accumulated model parameters.
In an exemplary embodiment of the present application, the parameters of the selected auxiliary annotation algorithm network model C may be initialized randomly or by using the previously accumulated model parameters, so that the auxiliary annotation algorithm network model C can start to process and output the annotated image.
In an exemplary embodiment of the present application, after initializing the auxiliary tagging algorithm network model C, both the auxiliary tagging algorithm network model C and a preset training algorithm (which is subsequently used for updating parameters of the auxiliary tagging algorithm network model C) may be deployed in a server.
In the exemplary embodiment of the present application, the parameters of the auxiliary annotation algorithm network model C can be updated in time using the auxiliary annotation results of the tile images (and the results obtained after manual correction of those auxiliary annotation results), so that the updated auxiliary annotation algorithm network model C can identify images more accurately, improving the annotation accuracy and thus reducing manual correction.
In the exemplary embodiment of the present application, a parameter updating scheme of the network model C of the auxiliary annotation algorithm is described in detail below.
In an exemplary embodiment of the present application, processing each sub data set through the auxiliary labeling algorithm network model to output a labeling result, and updating parameters of the auxiliary labeling algorithm network model by using the training algorithm after each sub data set is processed may include:
inputting any subdata set into an initialized auxiliary labeling algorithm network model to obtain a labeled subdata set, and updating parameters of the auxiliary labeling algorithm network model by using the training algorithm according to a tile image in the labeled subdata set;
for any subdata set which is not input into the auxiliary labeling algorithm network model in the plurality of subdata sets, executing the following operations until the plurality of subdata sets are all input into the auxiliary labeling algorithm network model:
inputting the sub data set into the updated auxiliary labeling algorithm network model to obtain a labeled sub data set; and updating the parameters of the auxiliary labeling algorithm network model with the training algorithm according to the tile images in the labeled sub data set.
In an exemplary embodiment of the present application, the following operations are respectively performed for each secondary subtask data set corresponding to the primary subtask data set: before each secondary subtask data set is input into the auxiliary annotation algorithm network model, parameters of the auxiliary annotation algorithm network model are updated according to the input tile image labeled in the last secondary subtask data set and the training algorithm until all the secondary subtask data sets are input into the auxiliary annotation algorithm network model.
In an exemplary embodiment of the present application, the following operations are respectively performed for each primary subtask data set corresponding to the first task data set: and acquiring the auxiliary annotation algorithm network model updated according to the previous primary subtask data set, and sequentially processing each secondary subtask data set in the unprocessed primary subtask data set in the first task data by using the auxiliary annotation algorithm network model updated according to the previous primary subtask data set until all the primary subtask data sets are input into the auxiliary annotation algorithm network model.
In the exemplary embodiment of the present application, the first task data set A is divided into primary subtask data sets A1, A2, ..., An, and any primary subtask data set An is then divided into a plurality of secondary subtask data sets a1, a2, ..., an. Before the auxiliary annotation algorithm network model C executes each primary subtask set, task labeling is performed with the auxiliary annotation algorithm network model C updated by the previously executed primary subtask set (i.e., iterative parameter updating); and before each secondary subtask set in any primary subtask set, task labeling is performed with the auxiliary annotation algorithm network model C updated by the previously executed secondary subtask set.
In an exemplary embodiment of the present application, the method may further include: and modifying the labeled information in the tile image in the labeled subdata set, and taking the modified image labeling result as the labeled tile image in the training sample.
In an exemplary embodiment of the present application, an image annotation result output after each secondary subtask data set is input to the auxiliary annotation algorithm network model may be obtained, annotation information in the image annotation result is modified, and the modified image annotation result is used as a final image annotation result corresponding to the secondary subtask data set.
In an exemplary embodiment of the present application, after it is determined that the image annotation result output for some secondary subtask data set by the auxiliary annotation algorithm network model no longer needs manual correction, the images output by the auxiliary annotation algorithm network model for the remaining unlabeled secondary subtask data sets are directly taken as the annotated image annotation results of the corresponding secondary subtask data sets, so that these subsequent outputs do not need to be manually corrected.
In an exemplary embodiment of the present application, modifying annotation information in the image annotation result may include:
and according to the comparison between the tile image in each secondary subtask data set and the corresponding labeling result of the tile image, modifying, adding and deleting wrong labels in the labeling result corresponding to the tile image.
In an exemplary embodiment of the present application, the labeled data area of a1 may be manually corrected, and the correction data may be submitted manually or automatically to complete task a1.
In an exemplary embodiment of the present application, as shown in FIG. 5, the upper part is a1 to be labeled, and the lower part is a1 after manual correction.
In an exemplary embodiment of the present application, processing continues with the next sub-area a2 to be marked, and the parameters of the auxiliary labeling algorithm network model C are updated so that the task to be processed can be labeled more accurately.
In an exemplary embodiment of the present application, obtaining an annotation result corresponding to each of the sub data sets as an image annotation result of the remote sensing image data set may include:
obtaining a final image labeling result corresponding to each sub data set, converting the final image labeling result corresponding to each tile image into a vector diagram according to the geographic space range included by each tile image in each sub data set, and taking the converted vector diagram as an image labeling result to be submitted corresponding to the corresponding tile image;
and acquiring the vector diagram converted from the final image labeling result corresponding to each sub data set as the image labeling result of the remote sensing image data set.
In an exemplary embodiment of the present application, the final image labeling result corresponding to each tile image is converted into a vector diagram according to the geographic space range covered by each tile image in each sub data set; the technology adopted for this conversion is prior art and is not expanded upon here.

In an exemplary embodiment of the present application, as shown in FIG. 6, updating the parameters of the auxiliary labeling algorithm network model with the training algorithm according to the tile images in the labeled sub data set may include steps S301 to S302:
S301, acquiring a training sample pair set according to the sub data sets and the final image labeling results corresponding to the sub data sets;
S302, inputting the training sample pair set into the auxiliary labeling algorithm network model, and updating the parameters of the auxiliary labeling algorithm network model with the training algorithm.
In an exemplary embodiment of the present application, it should be noted that the training sample pair set is input to the auxiliary labeling algorithm network model, and the training algorithm is adopted to update parameters of the auxiliary labeling algorithm network model. The training sample pair set can be used for realizing one-time parameter updating of the auxiliary labeling algorithm network model and also can realize multiple parameter updating of the auxiliary labeling algorithm network model.
In an exemplary embodiment of the present application, before inputting the training sample pair set into the auxiliary annotation algorithm network model and updating its parameters with the training algorithm, the method further includes: when it is judged that the labeling result output after the tile images in the previous sub data set were input into the auxiliary labeling algorithm network model no longer needs manual correction, subsequent parameter updates of the auxiliary labeling algorithm network model may be skipped. The current auxiliary labeling algorithm network model can then be used to process the subsequent unlabeled sub data sets.
In an exemplary embodiment of the present application, inputting the training sample pairs into the auxiliary labeling algorithm network model and updating its parameters with the training algorithm includes: inputting the training sample pairs into the auxiliary labeling algorithm network model according to a preset updating rule, and updating the parameters of the auxiliary labeling algorithm network model with the training algorithm at least once.
In an exemplary embodiment of the present application, the updating according to the preset updating rule may include: and updating the parameters of the auxiliary labeling algorithm network model once by adopting the training algorithm when each preset group of training sample pairs in the training sample pair set are input into the auxiliary labeling algorithm network model.
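As a minimal sketch of such an updating rule, one parameter update can be triggered per preset group of training sample pairs. The `model_update` callable, the group size, and the handling of a trailing partial group are illustrative assumptions, not fixed by the disclosure:

```python
def update_in_groups(model_update, sample_pairs, group_size):
    """Trigger one parameter update per `group_size` training sample pairs.

    model_update: callable performing one parameter update on a batch
    (assumed interface); a trailing partial group also updates once here.
    Returns the number of updates performed.
    """
    updates = 0
    batch = []
    for pair in sample_pairs:
        batch.append(pair)
        if len(batch) == group_size:
            model_update(batch)  # one update per full preset group
            updates += 1
            batch = []
    if batch:  # leftover pairs smaller than one group still update once
        model_update(batch)
        updates += 1
    return updates
```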
In an exemplary embodiment of the present application, the obtaining a training sample pair set according to the sub data set and the final image labeling result corresponding to the sub data set may include:
determining the position of a seed point on each tile image in the subdata set according to the predetermined distance and number of the seed points;
setting seed points at the seed point positions, and setting the attribute of each seed point according to the final image labeling result corresponding to the sub data set, wherein the attributes include a correct attribute and an error attribute; the correct attribute means that the label at the pixel position of the current seed point, in the image output after the tile image containing that seed point was input into the auxiliary labeling algorithm network model, was not corrected; the error attribute means that the label at the pixel position of the current seed point in that output image was corrected;
correspondingly setting parameters corresponding to the selected probability of the subsequent seed points according to the attribute of each seed point;
determining seed points to be collected according to the preset sample pair sampling number and parameters corresponding to the selected probability of each seed point;
forming a group of training sample pairs from the tile image where the seed point to be acquired is located and the correspondingly labeled image of that tile;
all groups of training sample pairs are combined together to form a training sample pair set.
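The sampling steps above can be sketched as follows; the data shapes and default weight values are illustrative assumptions (cf. the 9/10 and 1/10 selection probabilities given later for correct and wrong attributes):

```python
import random


def sample_seed_points(seeds, num_samples, p_correct=0.9, p_wrong=0.1, rng=None):
    """seeds: list of (tile_id, attribute) pairs, where attribute is True when
    the model's label at the seed point needed no manual correction.
    Seeds with the correct attribute are drawn with a higher probability."""
    rng = rng or random.Random(0)
    weights = [p_correct if ok else p_wrong for _, ok in seeds]
    return rng.choices(seeds, weights=weights, k=num_samples)


def build_sample_pairs(selected, labeled_tiles):
    """Pair the tile image of each selected seed with its labeled counterpart
    to form the groups of training sample pairs."""
    return [(tile_id, labeled_tiles[tile_id]) for tile_id, _ in selected]
```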
In an exemplary embodiment of the present application, the obtaining a training sample pair set according to the sub data set and the final image labeling result corresponding to the sub data set may include:
determining the position of a seed point on each tile image in the subdata set according to the predetermined distance and number of the seed points;
setting seed points at the seed point positions, and setting the attribute of each seed point according to the final image labeling result corresponding to the sub data set, wherein the attributes include a correct attribute and an error attribute; the correct attribute means that the label at the pixel position of the current seed point, in the image output after the tile image containing that seed point was input into the auxiliary labeling algorithm network model, was not corrected; the error attribute means that the label at the pixel position of the current seed point in that output image was corrected;
correspondingly setting parameters corresponding to the selected probability of the subsequent seed points according to the attribute of each seed point;
determining seed points to be collected according to the preset sample pair sampling number and parameters corresponding to the selected probability of each seed point;
determining the geographic coordinates of the seed points to be acquired according to the pixel positions of the tile images of the seed points to be acquired, the geographic space range included by the tile images of the seed points to be acquired, and the size of the tile images of the seed points to be acquired;
and determining, according to the geographic coordinate of the seed point to be acquired, the tile image size, the remote sensing image data set to be labeled, and all the obtained final image labeling results, a target tile image containing the seed point to be acquired and the correspondingly labeled image of that target tile image as a group of training sample pairs; all groups of training sample pairs together form the training sample pair set.
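The coordinate computation implied by these steps can be sketched as follows, assuming north-up tiles with a linear pixel-to-geographic mapping and extents given as (min_x, min_y, max_x, max_y); all names are illustrative:

```python
def seed_geo_coordinate(pixel_xy, tile_extent, tile_size):
    """Map a seed point's pixel position inside a tile to geographic coordinates."""
    px, py = pixel_xy
    min_x, min_y, max_x, max_y = tile_extent
    width, height = tile_size
    geo_x = min_x + (max_x - min_x) * px / width
    geo_y = max_y - (max_y - min_y) * py / height  # pixel row 0 is the northern edge
    return geo_x, geo_y


def target_tile_extent(seed_geo, tile_extent):
    """Geographic range of the target tile: the size of one source tile,
    centered on the seed point's geographic coordinate."""
    half_w = (tile_extent[2] - tile_extent[0]) / 2
    half_h = (tile_extent[3] - tile_extent[1]) / 2
    gx, gy = seed_geo
    return (gx - half_w, gy - half_h, gx + half_w, gy + half_h)
```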
In an exemplary embodiment of the present application, determining a target tile image including the seed point to be collected and a correspondingly labeled image of the target tile image according to the geographic coordinate of the seed point to be collected, the size of the tile image, the remote sensing image dataset to be labeled and all the obtained final image labeling results includes:
the determination of the target tile includes:
determining the size of the geographic space range covered by any tile image as the size of the geographic space range covered by the target tile;
calculating the geographic space range covered by the target tile as the target geographic space range, taking the geographic coordinate of the seed point to be collected as the center and the geographic-space size of one tile image as the extent;
according to the target geographic space range, acquiring a part of tile image including the target geographic space range from the tile image of the remote sensing image data set to be marked as a tile image to be cut;
cutting the tile image to be cut including the target geographic space range according to the target geographic space range, and splicing all the cut images including the target geographic space range according to the mutual continuous relation of the geographic space ranges to form a new image as a target tile;
determining the image marked corresponding to the target tile:
acquiring, from all the obtained final image annotation results, the image annotation results corresponding to the tile images to be cut, as the labeled tile images to be segmented;
and according to the target geographic space range, cutting the target geographic space range in the tile image to be segmented, and splicing the images including the target geographic space range after cutting according to the mutual continuous relation of the geographic space ranges to form a new image as the image which is marked corresponding to the target tile.
In an exemplary embodiment of the present application, the image annotation results corresponding to the tile images to be cut are obtained from all the obtained final image annotation results as the labeled tile images to be segmented. If an image annotation result corresponding to some tile image to be cut cannot be obtained from all the obtained final image annotation results, the target geographic space range is cut out of the obtained tile images to be cut, and the cut images covering the target geographic space range are spliced according to the continuity of their geographic space ranges to form a new image as a first image; a new image is then created as a second image according to the geographic space range covered by the target tile, the first image is placed at the corresponding position in the second image according to the geographic space ranges covered by the first and second images, and the pixel values of the positions in the second image not filled by the first image are set to 0.
In an exemplary embodiment of the present application, more training sample pairs are obtained, via the seed points, from the limited number of new tile images and their labeled counterparts available for online deep learning, which improves the image recognition capability of the auxiliary labeling algorithm network model and the accuracy of its labeling, reduces subsequent manual correction, and improves labeling efficiency. The number of training samples available for an online deep learning model update is smaller than for an offline update, so how to improve the recognition capability of the model, and thereby the labeling accuracy, with limited image samples also needs to be considered.
In an exemplary embodiment of the present application, as shown in FIG. 7, seed points H may be set within the region a1 according to the initialization parameters, where all seed points H may be uniformly located in the region a1. The seed point positions and the sampling number (the number of sample pairs) are generated using a predefined interval (i.e., the seed point distance); specifically, the seed points may be set by simulating the real sample distribution with a Monte Carlo method to select the seed point positions.
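A minimal sketch of uniform seed placement at the predefined interval follows; the Monte Carlo simulation of the real sample distribution is omitted, and the half-spacing offset from the region border is an assumption:

```python
def grid_seed_points(region_extent, spacing):
    """Uniformly place seed points over a region at the seed point distance.

    region_extent: (min_x, min_y, max_x, max_y); spacing: seed point distance.
    Returns the list of seed point positions.
    """
    min_x, min_y, max_x, max_y = region_extent
    points = []
    y = min_y + spacing / 2  # assumed half-spacing inset from the border
    while y < max_y:
        x = min_x + spacing / 2
        while x < max_x:
            points.append((x, y))
            x += spacing
        y += spacing
    return points
```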
In the exemplary embodiment of the present application, attributes are added to all the seed points, and the attributes may be divided into two types. One type is correct, i.e., the auxiliary annotation algorithm network model C can correctly identify whether the seed point belongs to the annotation task type selected when determining the initialization parameters; for example, the pixel position of the second seed point H2 in FIG. 8 can be correctly identified by the auxiliary annotation algorithm network model C, that is, the label at the corresponding position of this seed point in the image output by the auxiliary annotation algorithm network model C does not need subsequent manual correction (in FIG. 8, the upper part is a1 to be labeled, the lower part is a1 labeled by the auxiliary annotation algorithm network model C, and the middle part is a1 after manual correction). The other type is wrong, i.e., the auxiliary annotation algorithm network model C cannot correctly identify whether the seed point belongs to the selected annotation task type; for example, the pixel position of the first seed point H1 in FIG. 8 cannot be correctly identified by the auxiliary annotation algorithm network model C, that is, the label at the corresponding position of this seed point in the output image needs subsequent manual correction.
In an exemplary embodiment of the present application, the selection probability of subsequent seed points may be preset according to the attribute of each seed point: if the attribute of a seed point is correct, the preset first selection probability may be 9/10; if the attribute is wrong, the preset second selection probability may be 1/10. By setting seed point attributes and selection probabilities, the probability that seed points with the correct attribute are selected can be increased, improving the image recognition accuracy of the auxiliary labeling algorithm network model after its subsequent parameter updates, reducing the manual correction cost, and improving the labeling efficiency.
In an exemplary embodiment of the present application, the seed points to be collected are determined according to the sampling number of sample pairs and the seed point selection probabilities, and sample pairs are generated from the corresponding seed point positions as groups of training sample pairs; a sample pair is shown in FIG. 9, and all the groups of training sample pairs constitute the training sample pair set.
In the exemplary embodiment of the present application, the training sample pair set is input into the auxiliary labeling algorithm network model C for learning, and the parameters of the auxiliary labeling algorithm network model C are updated by using the training algorithm.
In an exemplary embodiment of the present application, the auxiliary labeling algorithm network model C of the present technical solution is iteratively updated according to sample pairs, and a group of sample pairs may include: an image labeled by the auxiliary labeling algorithm network model C and/or the manually corrected version of that labeled image, as well as a newly formed target tile and the correspondingly labeled image of that target tile.
In an exemplary embodiment of the present application, the method may further include: after the image annotation task is completed on a primary subtask data set, applying the obtained parameter-updated auxiliary annotation algorithm network model to the image annotation task of the next primary subtask data set; and/or,
and after the image annotation task is completed on the secondary subtask data set, applying the obtained auxiliary annotation algorithm network model with updated parameters to the image annotation task of the next secondary subtask data set.
In an exemplary embodiment of the present application, the updated auxiliary annotation algorithm network model C can be used to continue with the next sub-region a2 to be annotated, repeating the above scheme until all labeling tasks of A1 are completed.
In an exemplary embodiment of the present application, the method used for the annotation task of A1 may likewise be repeated: the annotation tasks of A2, A3, and so on are performed with the auxiliary annotation algorithm network model C updated after the annotation task of A1 is completed, until all the annotation tasks of the first task data set A are completed.
In an exemplary embodiment of the present application, whether to temporarily store the parameters of the auxiliary labeling algorithm network model C may be selected at this point; the final parameters of the auxiliary annotation algorithm network model C are stored in a model warehouse for standby, and the annotation task of the second task data set B is entered according to the above method, until all the first task data sets in the image data set P to be annotated are annotated.
In an exemplary embodiment of the present application, the method may further include: and after the image annotation task is completed on the image data set P, acquiring an annotation result corresponding to the image data set P as an annotation result of the image data set P.
In an exemplary embodiment of the present application, the corresponding annotation result may be submitted after any one of the first task data sets in the image data set P is annotated, or the corresponding annotation result may be submitted after all the first task data sets in the image data set P are annotated.
In an exemplary embodiment of the present application, the method may further include: and after the image data set P is subjected to image annotation task, storing the corresponding parameters of all the final auxiliary annotation algorithm network models after the obtained parameters are updated into a preset model parameter warehouse for standby.
In an exemplary embodiment of the present application, all annotated data may be converted to vector data using GDAL and stored into a data repository.
The embodiment of the present application further provides an apparatus 1 for assisting processing of remote sensing images based on online learning, as shown in fig. 10, which may include a processor 11 and a computer-readable storage medium 12, where the computer-readable storage medium 12 stores instructions, and when the instructions are executed by the processor 11, the method for assisting processing of remote sensing images based on online learning as described in any one of the above items is implemented.
In the exemplary embodiment of the present application, any of the foregoing method embodiments is applicable to the apparatus embodiment, and details are not repeated here.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.

Claims (10)

1. A remote sensing image auxiliary processing method based on online learning is characterized by comprising the following steps:
acquiring a remote sensing image data set to be labeled, wherein the remote sensing image data set comprises a plurality of sub-data sets, and each sub-data set comprises a plurality of tile images;
acquiring the labeling task type of the current labeling task, determining an auxiliary labeling algorithm network model and a preset training algorithm according to the labeling task type and the remote sensing image data set to be labeled, and initializing the auxiliary labeling algorithm network model;
processing each sub-data set through the auxiliary labeling algorithm network model to output a labeling result, updating parameters of the auxiliary labeling algorithm network model by using the training algorithm after each sub-data set is processed, so as to update the parameters of the auxiliary labeling algorithm network model iteratively, and processing the sub-data sets not yet labeled by using the iteratively updated auxiliary labeling algorithm network model;
and acquiring the labeling result corresponding to each sub-data set as the image labeling result of the remote sensing image data set.
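As a non-limiting illustration, the loop of claim 1 (label a sub-data set, update the model, then move on to the next) can be sketched as follows. The `model` and `train_step` callables are hypothetical placeholders standing in for the patent's auxiliary labeling algorithm network model and preset training algorithm:

```python
def annotate_online(sub_datasets, model, train_step):
    """Label each sub-data set with the current model, then update the
    model parameters before moving on to the next sub-data set."""
    results = []
    for sub in sub_datasets:
        labels = [model(tile) for tile in sub]   # auxiliary labeling pass
        results.append(labels)
        model = train_step(model, sub, labels)   # iterative parameter update
    return results
```

Each sub-data set after the first is thus processed by a model already refined on the preceding sub-data sets.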
2. The remote sensing image auxiliary processing method based on online learning according to claim 1, wherein the remote sensing image data set comprises a first task data set, and the sub-data set comprises a primary subtask data set;
the acquiring a remote sensing image data set to be labeled comprises:
acquiring a first task data set to be labeled, and dividing the first task data set to be labeled into a plurality of primary subtask data sets according to a first preset rule, wherein the first task data set to be labeled comprises a plurality of tile images.
3. The remote sensing image auxiliary processing method based on online learning according to claim 2, wherein the acquiring a first task data set to be labeled comprises:
acquiring an image data set P to be labeled, and splitting the image data set P into a plurality of first task data sets according to data information of the tile images in the image data set P, wherein the data information comprises the type of device that collected the images.
4. The remote sensing image auxiliary processing method based on online learning according to claim 3, wherein the determining an auxiliary labeling algorithm network model and a preset training algorithm according to the labeling task type and the remote sensing image data set to be labeled comprises:
determining an auxiliary labeling algorithm network model and a preset training algorithm according to the labeling task type and the first task data set to be labeled.
5. The remote sensing image auxiliary processing method based on online learning according to claim 2, wherein the sub-data set further comprises a secondary subtask data set; and after the dividing the first task data set to be labeled into a plurality of primary subtask data sets, the method further comprises:
dividing each primary subtask data set into a plurality of secondary subtask data sets according to a second preset rule, and taking each secondary subtask data set as a sub-data set input into the auxiliary labeling algorithm network model.
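The two-level splitting of claims 2 and 5 can be illustrated with a minimal sketch. The fixed chunk sizes below are assumptions standing in for the unspecified first and second preset rules:

```python
def chunk(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def split_task(tiles, primary_size, secondary_size):
    """Cut a task data set of tiles into primary subtask data sets,
    then cut each primary set into secondary subtask data sets."""
    primary = chunk(tiles, primary_size)                  # first preset rule
    return [chunk(p, secondary_size) for p in primary]    # second preset rule
```

The secondary subtask data sets are then the units fed one at a time into the auxiliary labeling network.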
6. The remote sensing image auxiliary processing method based on online learning according to claim 5, wherein the processing each sub-data set through the auxiliary labeling algorithm network model to output a labeling result, and updating parameters of the auxiliary labeling algorithm network model by using the training algorithm after each sub-data set is processed, comprise:
inputting any one sub-data set into the initialized auxiliary labeling algorithm network model to obtain a labeled sub-data set, and updating parameters of the auxiliary labeling algorithm network model by using the training algorithm according to the tile images in the labeled sub-data set;
for any sub-data set among the plurality of sub-data sets that has not been input into the auxiliary labeling algorithm network model, performing the following operations until all of the plurality of sub-data sets have been input into the auxiliary labeling algorithm network model:
inputting the sub-data set into the updated auxiliary labeling algorithm network model to obtain a labeled sub-data set; and updating parameters of the auxiliary labeling algorithm network model by using the training algorithm according to the tile images in the labeled sub-data set.
7. The remote sensing image auxiliary processing method based on online learning according to claim 6, wherein the method further comprises: acquiring the image labeling result output after each sub-data set is input into the auxiliary labeling algorithm network model, correcting the labeling information in the image labeling result, and taking the corrected image labeling result as the final image labeling result corresponding to the sub-data set.
8. The remote sensing image auxiliary processing method based on online learning according to claim 7, wherein the updating parameters of the auxiliary labeling algorithm network model by using the training algorithm according to the tile images in the labeled sub-data set comprises:
acquiring a training sample pair set according to the sub-data set and the final image labeling result corresponding to the sub-data set;
inputting the training sample pair set into the auxiliary labeling algorithm network model, and updating parameters of the auxiliary labeling algorithm network model by using the training algorithm.
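A minimal sketch of the claim-8 update, assuming a toy 1-D linear model and a single squared-error gradient step in place of the patent's unspecified network model and training algorithm:

```python
def build_pairs(sub_dataset, final_labels):
    """Pair each tile with its corrected (final) label to form the
    training sample pair set."""
    return list(zip(sub_dataset, final_labels))

def sgd_step(weight, pairs, lr=0.1):
    """One gradient step of a toy linear model y = weight * x on the
    mean squared error over the training sample pairs."""
    grad = sum(2 * (weight * x - y) * x for x, y in pairs) / len(pairs)
    return weight - lr * grad
```

In the patent's setting the same pattern would apply with the tile/label pairs driving one or more parameter-update steps of the auxiliary labeling network.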
9. The remote sensing image auxiliary processing method based on online learning according to claim 6, wherein the acquiring a training sample pair set according to the sub-data set and the final image labeling result corresponding to the sub-data set comprises:
determining seed point positions on each tile image in the sub-data set according to a predetermined seed point spacing and a predetermined number of seed points;
setting a seed point at each seed point position, and setting an attribute of each seed point according to the final image labeling result corresponding to the sub-data set, wherein the attributes comprise a correct attribute and an error attribute; the correct attribute indicates that, in the image output after the tile image containing the current seed point was input into the auxiliary labeling algorithm network model, the label at the pixel position of the current seed point was not revised; the error attribute indicates that the label at the pixel position of the current seed point was revised;
setting, according to the attribute of each seed point, a parameter corresponding to the probability of the seed point being selected subsequently;
determining the seed points to be collected according to a preset number of sample pairs to be sampled and the parameter corresponding to the selection probability of each seed point;
determining the geographic coordinates of each seed point to be collected according to the pixel position of the seed point on its tile image, the geographic extent covered by the tile image, and the size of the tile image;
and determining, according to the geographic coordinates of the seed point to be collected, the tile image size, the remote sensing image data set to be labeled, and all the final image labeling results obtained, a target tile image containing the seed point to be collected and the correspondingly labeled image of the target tile image as a group of training sample pairs, all groups of training sample pairs forming the training sample pair set.
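The weighted seed point sampling and the pixel-to-geographic-coordinate step of claim 9 can be sketched as follows, assuming a tile whose pixels map linearly onto its geographic bounds (west, south, east, north) and assuming that error-attribute seeds simply receive larger sampling weights. Both assumptions are illustrative, not the patent's stated method:

```python
import random

def pixel_to_geo(px, py, tile_bounds, tile_size):
    """Map a pixel (px, py) in a tile to (lon, lat), given the tile's
    geographic bounds (west, south, east, north) and pixel dimensions."""
    west, south, east, north = tile_bounds
    width, height = tile_size
    lon = west + (east - west) * px / width
    lat = north - (north - south) * py / height   # pixel rows grow southward
    return lon, lat

def sample_seeds(seeds, weights, k, rng=None):
    """Weighted sampling of seed points; error-attribute seeds can be
    given larger weights so revised (hard) locations are drawn more often."""
    rng = rng or random.Random(0)
    return rng.choices(seeds, weights=weights, k=k)
```

For instance, pixel (128, 0) of a 256x256 tile spanning one degree in each direction maps to the midpoint of the tile's northern edge.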
10. A remote sensing image auxiliary processing apparatus based on online learning, comprising a processor and a computer-readable storage medium, wherein the computer-readable storage medium stores instructions that, when executed by the processor, implement the remote sensing image auxiliary processing method based on online learning according to any one of claims 1 to 9.
CN202110378930.0A 2021-04-08 2021-04-08 Remote sensing image auxiliary processing method and device based on online learning Active CN113158855B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110378930.0A CN113158855B (en) 2021-04-08 2021-04-08 Remote sensing image auxiliary processing method and device based on online learning


Publications (2)

Publication Number Publication Date
CN113158855A true CN113158855A (en) 2021-07-23
CN113158855B CN113158855B (en) 2023-04-18

Family

ID=76889286

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110378930.0A Active CN113158855B (en) 2021-04-08 2021-04-08 Remote sensing image auxiliary processing method and device based on online learning

Country Status (1)

Country Link
CN (1) CN113158855B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103699543A (en) * 2012-09-28 2014-04-02 南京理工大学 Information visualization method based on ground object classification of remote sensing image
US20170161584A1 (en) * 2015-12-07 2017-06-08 The Climate Corporation Cloud detection on remote sensing imagery
CN109409263A (en) * 2018-10-12 2019-03-01 武汉大学 A kind of remote sensing image city feature variation detection method based on Siamese convolutional network
CN109447151A (en) * 2018-10-26 2019-03-08 成都国星宇航科技有限公司 A kind of remotely-sensed data analysis method based on deep learning
CN109670060A (en) * 2018-12-10 2019-04-23 北京航天泰坦科技股份有限公司 A kind of remote sensing image semi-automation mask method based on deep learning
CN111597861A (en) * 2019-02-21 2020-08-28 中科星图股份有限公司 System and method for automatically interpreting ground object of remote sensing image
CN111967313A (en) * 2020-07-08 2020-11-20 北京航空航天大学 Unmanned aerial vehicle image annotation method assisted by deep learning target detection algorithm
US20210004964A1 (en) * 2018-04-05 2021-01-07 Nec Corporation Image processing device, image processing method, and recording medium having image processing program stored thereon
CN112329751A (en) * 2021-01-06 2021-02-05 北京道达天际科技有限公司 Deep learning-based multi-scale remote sensing image target identification system and method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BIAN, Litao et al.: "Research on Fast Display Methods for Multi-temporal, Large-Data-Volume Remote Sensing Images", Geomatics & Spatial Information Technology *

Also Published As

Publication number Publication date
CN113158855B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN112560876B (en) Single-stage small sample target detection method for decoupling measurement
US9886746B2 (en) System and method for image inpainting
CN109919209A (en) A kind of domain-adaptive deep learning method and readable storage medium storing program for executing
CN110909868A (en) Node representation method and device based on graph neural network model
CN110889863A (en) Target tracking method based on target perception correlation filtering
CN111149101B (en) Target pattern searching method and computer readable storage medium
CN111160360B (en) Image recognition method, device and system
CN112668608B (en) Image recognition method and device, electronic equipment and storage medium
CN110390639A (en) Processing joining method, device, equipment and the storage medium of orthography
US11393232B2 (en) Extracting values from images of documents
CN110517221B (en) Gap positioning method and device based on real coordinates and storage medium
CN114758199A (en) Training method, device, equipment and storage medium for detection model
CN113158856B (en) Processing method and device for extracting target area in remote sensing image
CN114511077A (en) Training point cloud processing neural networks using pseudo-element based data augmentation
CN113158855B (en) Remote sensing image auxiliary processing method and device based on online learning
CN108021985B (en) Model parameter training method and device
CN116246161A (en) Method and device for identifying target fine type of remote sensing image under guidance of domain knowledge
CN115830046A (en) Interactive image segmentation method, device, equipment and storage medium
CN111401394A (en) Image annotation method and device and computer readable storage medium
CN113986245A (en) Object code generation method, device, equipment and medium based on HALO platform
CN113658173A (en) Compression method, system and computing equipment of detection model based on knowledge distillation
CN113326877A (en) Model training method, data processing method, device, apparatus, storage medium, and program
CN112862002A (en) Training method of multi-scale target detection model, target detection method and device
CN112749293A (en) Image classification method and device and storage medium
CN110705479A (en) Model training method, target recognition method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No.16, 1st floor, building 7, No.333, middle section of Shuangnan Avenue, Dongsheng Street, Shuangliu District, Chengdu, Sichuan 610094

Applicant after: Chengdu Guoxing Aerospace Technology Co.,Ltd.

Address before: No.16, 1st floor, building 7, No.333, middle section of Shuangnan Avenue, Dongsheng Street, Shuangliu District, Chengdu, Sichuan 610094

Applicant before: CHENGDU GUOXING AEROSPACE TECHNOLOGY Co.,Ltd.

GR01 Patent grant