CN108764263A - Ground object labeling device and method for remote sensing images - Google Patents

Ground object labeling device and method for remote sensing images

Info

Publication number
CN108764263A
CN108764263A (application CN201810146580.3A)
Authority
CN
China
Prior art keywords
remote sensing
ground object
remote sensing image
class
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810146580.3A
Other languages
Chinese (zh)
Inventor
高彬
张弓
顾竹
宋宽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Standard World Co Ltd
Original Assignee
Beijing Standard World Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Standard World Co Ltd
Priority to CN201810146580.3A
Publication of CN108764263A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a ground object labeling device and method for remote sensing images. The ground object labeling device comprises: a first unit configured to obtain training samples, extract the ground object labeling features of the training samples, and generate a training model; and a second unit configured to obtain the training model and label the ground objects in a second remote sensing image according to the training model. The ground object labeling method comprises: extracting the ground object labeling features of training samples to generate a training model; and labeling the ground objects in a second remote sensing image according to the training model. The device and method realize automatic labeling of ground objects in remote sensing images; labeling is fast and highly accurate, while placing relatively low demands on the clarity of the imagery.

Description

Ground object labeling device and method for remote sensing images
Technical field
The present invention relates to the field of remote sensing technology, and in particular to a ground object labeling device and method for remote sensing images.
Background art
This description of the background art concerns technologies related to the present invention and is provided only to explain and facilitate understanding of the content of the invention. It should not be construed as an admission by the applicant that this material constitutes prior art as of the filing date, or that the applicant regards it as such.
Labeling the ground objects (roads, houses, water bodies, etc.) in remote sensing images is an important topic in the remote sensing field. At present, ground objects are mostly labeled manually: for example, a road in a frame of remote sensing imagery is labeled "1", a house "2", and a water body "3". Manual labeling is accurate but extremely inefficient, and labeling the ground objects in a large volume of remote sensing imagery is time-consuming and laborious.
To overcome the above drawbacks, automatic labeling methods for remote sensing images have been proposed. Each frame of remote sensing imagery is composed of pixels, and each pixel can be characterized by three colors, red (R), green (G), and blue (B); that is, every pixel has an R value, a G value, and a B value. For two different ground objects, at least one of the R, G, and B values differs substantially, so by assigning each ground object a range of R, G, and B values according to these differences, different ground objects can be labeled automatically. Take judging whether a region of a remote sensing image is a house or a water body as an example. First, the R, G, and B ranges of houses and water bodies are defined: for instance, [0, 50], [0, 100], [0, 150] for houses, and [50, 150], [0, 50], [100, 250] for water bodies. Then, for each pixel of the image to be labeled, it is checked whether its R, G, and B values fall within the ranges of a house or of a water body. If a pixel has R, G, and B values of 5, 10, and 25, these fall within the house ranges, so the pixel is judged to be part of a house in the image to be labeled.
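As a minimal sketch of the RGB-range method described above (the channel ranges are the illustrative values from the text, not values the patent prescribes for real imagery, and the function names are hypothetical):

```python
# Illustrative RGB ranges taken from the example in the text.
HOUSE_RANGE = {"R": (0, 50), "G": (0, 100), "B": (0, 150)}
WATER_RANGE = {"R": (50, 150), "G": (0, 50), "B": (100, 250)}

def in_range(pixel, ranges):
    """Return True if an (R, G, B) pixel falls inside all three channel ranges."""
    r, g, b = pixel
    return (ranges["R"][0] <= r <= ranges["R"][1]
            and ranges["G"][0] <= g <= ranges["G"][1]
            and ranges["B"][0] <= b <= ranges["B"][1])

def label_pixel(pixel):
    """Label a pixel as 'house', 'water body', or 'unknown' by RGB thresholds."""
    if in_range(pixel, HOUSE_RANGE):
        return "house"
    if in_range(pixel, WATER_RANGE):
        return "water body"
    return "unknown"

print(label_pixel((5, 10, 25)))  # the pixel from the text; prints "house"
```

The "unknown" branch is exactly the failure mode the next paragraph criticizes: a pixel whose color drifts outside the preset ranges cannot be labeled at all.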
The above automatic labeling method suffers from large errors: when the color of the same type of ground object varies because of regional differences, shooting time, shooting angle, or the object's own coloring, its R, G, or B values may fall outside the preset value ranges, so the object cannot be labeled or is even labeled incorrectly.
Summary of the invention
The present invention provides a ground object labeling device and method for remote sensing images with superior performance.
An embodiment of the first aspect of the present invention provides a ground object labeling device for remote sensing images, comprising: a first unit configured to obtain training samples, extract the ground object labeling features of the training samples, and generate a training model, wherein the training samples include a plurality of first remote sensing images in which ground object labeling has been completed; and a second unit configured to obtain the training model and label the ground objects in a second remote sensing image according to the training model.
Preferably, the ground object labeling device further includes a third unit configured to label, according to instructions, the ground objects in the plurality of first remote sensing images, forming the training samples.
Preferably, the third unit includes: a cropping module, which crops remote sensing imagery into several first remote sensing images; and a labeling module, which selects a plurality of the first remote sensing images and labels the ground objects in the selected images according to instructions, forming the training samples.
Preferably, the third unit further includes an expansion module, which performs dilation on the labeled ground object portions of the training samples; the first unit then obtains the dilated training samples.
Preferably, the third unit further includes a weight module for calculating the weight of each ground object in the training samples and the background weight of each ground object; the first unit generates the training model for a ground object from the training samples, the weight of that ground object, and its background weight.
Preferably, the weight weight(class) of a ground object is calculated as follows:

weight(class) = median of f(class) / f(class);

f(class) = frequency(class) / (image_count(class) * w * h);

and the background weight weight(ground) of the ground object is calculated as:

weight(ground) = 1 - weight(class);

where frequency(class) is the total number of pixels of that ground object in the training samples; image_count(class) is the number of first remote sensing images in the training samples containing the ground object; w is the number of pixels across the width of a first remote sensing image; h is the number of pixels along its length; and median of f(class) is the median of the f(class) values.
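The weight formulas above can be sketched directly; the function and argument names below are illustrative (not taken from the patent), and masks are assumed to be integer label arrays of a single common size:

```python
import numpy as np

def class_weights(masks, class_ids):
    """Median frequency balancing, following the formulas above:
         f(class)       = frequency(class) / (image_count(class) * w * h)
         weight(class)  = median of f / f(class)
         weight(ground) = 1 - weight(class)
    `masks` is a list of (h, w) integer label arrays (first remote sensing
    images after labeling, all the same size)."""
    h, w = masks[0].shape
    f = {}
    for c in class_ids:
        frequency = sum(int((m == c).sum()) for m in masks)    # total pixels of the class
        image_count = sum(1 for m in masks if (m == c).any())  # images containing it
        f[c] = frequency / (image_count * w * h) if image_count else 0.0
    median_f = float(np.median([v for v in f.values() if v > 0]))
    weight = {c: (median_f / f[c] if f[c] else 0.0) for c in class_ids}
    ground = {c: 1.0 - weight[c] for c in class_ids}
    return weight, ground
```

Note one property of the stated formula: a rare class gets weight(class) > 1, which makes weight(ground) = 1 - weight(class) negative; the patent does not say whether such values are clipped.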
Preferably, the ground object labeling device further includes a fourth unit configured to sample the periodic process of generating the training model: it obtains the current training model, labels a test sample with it, and calculates, using an error function, the error value between the labeling result and the true labels. If the error value is less than or equal to an error threshold, the current training model is judged correct; if it exceeds the threshold, the first unit obtains the error value and corrects the current training model accordingly.
Preferably, the fourth unit includes an input module, a calculation module, a judgment module, and an output module. The input module obtains the test sample and the current training model, and labels the test sample according to the current training model; the calculation module obtains the labeling result and uses the error function to calculate the error value between the labeling result and the true labels of the test sample; the judgment module compares the error value with the error threshold, judging the current training model correct when the error value is less than or equal to the threshold; and when the error value exceeds the threshold, the output module transfers the error value to the first unit.
Preferably, the error function is a logarithmic loss function.
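The patent names a "logarithmic loss function" without spelling out its form; a generic binary log loss (cross-entropy) averaged over pixels, offered here only as an assumed illustration, looks like:

```python
import math

def log_loss(y_true, y_pred, eps=1e-15):
    """Binary logarithmic loss averaged over pixels.
    A generic sketch; the patent does not fix the exact formulation."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)
```

In the scheme above, this value would be compared against the error threshold to decide whether the current training model is correct.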
Preferably, the first remote sensing image and the second remote sensing image are each 512x512 pixels in size.
Preferably, the first remote sensing image and the second remote sensing image are three-channel images, and the second remote sensing image after labeling is a single-channel image.
Preferably, the first unit generates the training model using a U-net neural network.
Preferably, the U-net neural network includes first convolutional layers, pooling layers, deconvolutional layers, and second convolutional layers, configured as follows: a pooling layer is arranged between every two first convolutional layers; a deconvolutional layer is arranged between the last first convolutional layer and the first second convolutional layer; and a deconvolutional layer is arranged between every two second convolutional layers. The first convolutional layers perform convolution, the pooling layers perform pooling, the deconvolutional layers perform deconvolution, and the second convolutional layers perform convolution.
Preferably, the convolution kernel of each first convolutional layer comprises two one-dimensional kernels, a 1xN kernel and an Nx1 kernel, where N is a positive integer greater than or equal to 2.
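The point of splitting a kernel into 1xN and Nx1 factors is that a separable NxN kernel can be applied as two 1-D passes with 2N parameters instead of N*N, giving the same result. A small numpy demonstration (plain "valid" cross-correlation, with an illustrative separable kernel not taken from the patent):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain 'valid' 2-D cross-correlation; enough to illustrate separability."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

# A separable N x N kernel factors into an N x 1 column and a 1 x N row;
# applying the two 1-D kernels in sequence matches the 2-D result.
col = np.array([[1.0], [2.0], [1.0]])   # N x 1 (N = 3)
row = np.array([[1.0, 0.0, -1.0]])      # 1 x N
full = col @ row                        # equivalent 3 x 3 kernel

img = np.arange(36, dtype=float).reshape(6, 6)
two_step = conv2d_valid(conv2d_valid(img, row), col)
one_step = conv2d_valid(img, full)
print(np.allclose(two_step, one_step))  # prints True
```

Only rank-1 (separable) kernels factor exactly like this; in a trained network the two 1-D kernels are simply learned directly, which is presumably why the patent specifies them as the kernel form.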
An embodiment of the second aspect of the present invention provides a ground object labeling method for remote sensing images, comprising: extracting the ground object labeling features of training samples and generating a training model, the training samples including a plurality of first remote sensing images in which ground object labeling has been completed; and labeling the ground objects in a second remote sensing image according to the training model.
Preferably, the training samples are obtained by labeling, according to instructions, the ground objects in the plurality of first remote sensing images.
Preferably, the training samples are obtained as follows: remote sensing imagery is cropped into several first remote sensing images; a plurality of the first remote sensing images are selected; and the ground objects in the selected images are labeled according to instructions, forming the training samples.
Preferably, the method further includes: performing dilation on the labeled ground object portions of the training samples; and extracting the ground object labeling features of the dilated training samples to generate the training model.
Preferably, the method further includes: calculating the weight of each ground object in the training samples and the background weight of each ground object; and generating the training model for a ground object from the training samples, the weight of that ground object, and its background weight.
Preferably, the weight weight(class) of a ground object is calculated as follows:

weight(class) = median of f(class) / f(class);

f(class) = frequency(class) / (image_count(class) * w * h);

and the background weight weight(ground) of the ground object is calculated as:

weight(ground) = 1 - weight(class);

where frequency(class) is the total number of pixels of that ground object in the training samples; image_count(class) is the number of first remote sensing images in the training samples containing the ground object; w is the number of pixels across the width of a first remote sensing image; h is the number of pixels along its length; and median of f(class) is the median of the f(class) values.
Preferably, the method further includes: sampling the periodic process of generating the training model to obtain the current training model; labeling a test sample with the current training model; and calculating, using an error function, the error value between the labeling result and the true labels. If the error value is less than or equal to an error threshold, the current training model is judged correct; if it exceeds the threshold, the current training model is corrected according to the error value.
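The periodic evaluate-and-correct loop can be sketched abstractly. Everything here is a placeholder: `update` stands in for the first unit's training/correction step and `evaluate` for the fourth unit's test-sample labeling and error computation; none of these names come from the patent.

```python
def train_with_evaluation(batches, evaluate, update, error_threshold=0.05):
    """Sketch of the loop described above: after each batch a current model is
    produced and tested; the error is fed back when it exceeds the threshold."""
    model, error = None, None
    for batch in batches:
        model = update(model, batch, None)   # train on the next batch
        error = evaluate(model)              # label test sample, compute error value
        if error <= error_threshold:
            return model, error              # current training model judged correct
        model = update(model, batch, error)  # correct the model using the error value
    return model, error                      # may repeat with new training samples
```

The threshold value 0.05 is an arbitrary default for illustration; the patent leaves the error threshold unspecified.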
Preferably, the error function is a logarithmic loss function.
Preferably, the first remote sensing image and the second remote sensing image are each 512x512 pixels in size.
Preferably, the first remote sensing image and the second remote sensing image are three-channel images, and the second remote sensing image after labeling is a single-channel image.
Preferably, the training model is generated using a U-net neural network.
Preferably, the U-net neural network includes first convolutional layers, pooling layers, deconvolutional layers, and second convolutional layers, configured as follows: a pooling layer is arranged between every two first convolutional layers; a deconvolutional layer is arranged between the last first convolutional layer and the first second convolutional layer; and a deconvolutional layer is arranged between every two second convolutional layers. The first convolutional layers perform convolution, the pooling layers perform pooling, the deconvolutional layers perform deconvolution, and the second convolutional layers perform convolution.
Preferably, the convolution kernel of each first convolutional layer comprises two one-dimensional kernels, a 1xN kernel and an Nx1 kernel, where N is a positive integer greater than or equal to 2.
An embodiment of the third aspect of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the ground object labeling method for remote sensing images.
An embodiment of the fourth aspect of the present invention provides a computer device including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, it implements the steps of the ground object labeling method for remote sensing images.
The ground object labeling device and method, computer-readable storage medium, and computer device provided by the embodiments of the present invention realize automatic labeling of ground objects in remote sensing images; labeling is fast and highly accurate, while placing relatively low demands on the clarity of the imagery.
Additional aspects and advantages of the present invention will become apparent in the following description or be learned through practice of the invention.
Description of the drawings
The above and other aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a structural schematic diagram of a first embodiment of the ground object labeling device for remote sensing images of the present invention;
Fig. 2 is a structural schematic diagram of a second embodiment of the ground object labeling device;
Fig. 3 is a structural schematic diagram of the third unit in the second embodiment of the device;
Fig. 4 is a structural schematic diagram of the fourth unit in the second embodiment of the device;
Fig. 5 is a flow chart of a first embodiment of the ground object labeling method for remote sensing images of the present invention;
Fig. 6 is a flow chart of a second embodiment of the method;
Fig. 7 schematically illustrates the U-net neural network model used in the second embodiment of the device;
Fig. 8 schematically illustrates the convolution process of a first convolutional layer for one pixel in the second embodiment of the device;
Fig. 9 schematically illustrates a pooling process in the second embodiment of the device;
Fig. 10 schematically illustrates a second remote sensing image;
Fig. 11 shows the output image obtained after Fig. 10 is processed by the second embodiment of the ground object labeling device.
The correspondence between reference numerals and component names in Figs. 1 to 4 is: 1 first unit, 2 second unit, 3 third unit, 31 cropping module, 32 labeling module, 33 expansion module, 34 weight module, 4 fourth unit, 41 input module, 42 calculation module, 43 judgment module, 44 output module. In Fig. 7 the correspondence is: 51 first convolutional layer; 52 pooling layer; 53 deconvolutional layer; 54 second convolutional layer.
Detailed description of the embodiments
In order that the objects, features, and advantages of the present invention may be understood more clearly, the invention is described in further detail below in conjunction with the accompanying drawings and specific embodiments. It should be noted that, in the absence of conflict, the embodiments of the application and the features within the embodiments may be combined with each other.
Many specific details are set forth in the following description to facilitate a thorough understanding of the present invention; however, the invention may also be implemented in ways other than those described here, and the scope of protection is therefore not limited by the specific embodiments disclosed below.
The following discussion provides multiple embodiments of the present invention. Although each embodiment represents a single combination of features, different embodiments may be substituted for one another or merged, and the invention should therefore also be regarded as covering all possible combinations of the same and/or different recorded embodiments. Thus, if one embodiment includes A, B, and C, and another includes the combination of B and D, the invention should also be regarded as covering embodiments containing one or more of all other possible combinations of A, B, C, and D, even though such combinations may not be explicitly described in the following.
Fig. 1 is a structural schematic diagram of a first embodiment of the ground object labeling device for remote sensing images of the present invention. As shown in Fig. 1, the first embodiment comprises a first unit 1 and a second unit 2. The first unit 1 is configured to obtain training samples and extract their ground object labeling features to generate a training model, the training samples including a plurality of first remote sensing images in which ground object labeling has been completed; the second unit 2 is configured to obtain the training model and label the ground objects in a second remote sensing image according to the training model.
In this first embodiment, the first unit 1 obtains the training samples, learns the ground object labeling features they contain by extraction, and generates the training model; the second unit 2, connected to the first unit 1, obtains the training model and uses it to label the ground objects in the second remote sensing image. On the one hand, this avoids the time and labor of manual labeling and achieves fast, automatic ground object labeling of remote sensing imagery. On the other hand, because the first and second remote sensing images are both remote sensing imagery, learning the labeling features of the first images carries over to labeling the second; even when the clarity of the imagery is relatively low, the training model generated from the first remote sensing images remains applicable to labeling the ground objects of the second, and labeling accuracy stays high.
In this first embodiment, the training samples may be formed by manually labeling the ground objects in the first remote sensing images, or by directly retrieving historical data, such as an accurately labeled version of the remote sensing imagery of a certain area. It is worth noting that the precision of the labels in the training samples directly affects the precision of the training model, so the ground objects in the training samples should be labeled as accurately as possible. The ground object labeling features extracted by the first unit 1 from the training samples include: ground object color, ground object shape, connection relationships between ground objects, aggregation and arrangement relationships of ground objects, and ground object edge features. By extracting these labeling features, a training model is generated that contains the labeling features corresponding to each ground object, realizing automatic labeling of the ground objects in the second remote sensing image.
Fig. 2 is a structural schematic diagram of a second embodiment of the ground object labeling device of the present invention. As shown in Fig. 2, in the second embodiment the device further includes a third unit 3 configured to label the ground objects in the plurality of first remote sensing images according to instructions, forming the training samples. Here an instruction is a labeling command issued by labeling personnel; the third unit 3 responds to the command by labeling the first remote sensing image. For example, the first remote sensing image is displayed on a touch screen by a touch-screen display system; labeling personnel identify the ground objects shown and issue commands to the screen (such as circling or painting over a ground object in the first remote sensing image) to carry out the labeling. By providing the third unit 3 to label the ground objects in the first remote sensing images, training samples are established from remote sensing imagery.
Fig. 3 is a structural schematic diagram of the third unit in the second embodiment of the ground object labeling device. As shown in Fig. 3, the third unit 3 includes a cropping module 31 and a labeling module 32: the cropping module 31 crops remote sensing imagery into several first remote sensing images, and the labeling module 32 selects a plurality of them and labels the ground objects in the selected images according to instructions, forming the training samples. To obtain first remote sensing images that meet the input format requirements of the first unit 1, the imagery must be cropped; the cropping may target specified regions or proceed as a sequential array over the imagery. In this embodiment the cropping module 31 performs sequential array cropping, yielding a large number of first remote sensing images. Labeling every one of them would increase the training difficulty and training time of the first unit 1, so the labeling module 32 randomly selects a plurality of the cropped images and labels the ground objects only in those, forming the training samples. This effectively improves the generation rate of the training model, while random sampling is reliable enough to ensure diversity and abundance of ground objects in the training samples.
As shown in Fig. 3, in the second embodiment the third unit 3 further includes an expansion module 33, which performs dilation on the labeled ground object portions of the training samples; the first unit 1 obtains the dilated training samples. Among the labeled portions, certain ground objects (such as water channels and roads) occupy a small area relative to the background, making it difficult, or even inaccurate, for the first unit 1 to extract their labeling features. By dilating the labeled portions, the expansion module 33 increases their pixel area relative to the background, reduces the difficulty of extracting the labeling features of these ground objects, and thereby improves the precision of the training model.
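A minimal sketch of the dilation step, assuming binary dilation with a 3x3 cross structuring element implemented via array shifts (the patent does not fix a structuring element, iteration count, or library):

```python
import numpy as np

def dilate(mask, iterations=1):
    """Binary dilation with a 3x3 cross structuring element, implemented as a
    union of one-pixel shifts; a stand-in for the expansion module's step."""
    m = mask.astype(bool)
    for _ in range(iterations):
        up = np.pad(m, ((1, 0), (0, 0)))[:-1, :]     # shift down by one row
        down = np.pad(m, ((0, 1), (0, 0)))[1:, :]    # shift up by one row
        left = np.pad(m, ((0, 0), (1, 0)))[:, :-1]   # shift right by one column
        right = np.pad(m, ((0, 0), (0, 1)))[:, 1:]   # shift left by one column
        m = m | up | down | left | right             # union grows the labeled area
    return m.astype(mask.dtype)
```

A single labeled pixel grows to a 5-pixel cross after one iteration, so thin features such as roads and water channels gain pixel area relative to the background, which is the stated purpose of the module.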
As shown in Fig. 3, the third unit 3 further includes a weight module 34 for calculating the weight of each ground object in the training samples and the background weight of each ground object; the first unit 1 generates the training model for a ground object from the training samples, the weight of that ground object, and its background weight. A first remote sensing image contains many kinds of ground objects that need to be labeled separately: for example, if it contains four kinds of ground objects, grassland, road, river, and building, each must be labeled in turn, and when labeling the grassland the road, river, and building together constitute the background. In this embodiment the weight module 34 calculates the weight of a given ground object and its background weight, so that when the training model is generated the importance of the ground object relative to the background is fully taken into account; the treatment of ground object and background is balanced, the generation of the training model is optimized, and the model is prevented from devoting an excessive proportion of feature extraction to the background.
Further, in this embodiment, the weight of a given ground object and its background weight are calculated with the median frequency balancing algorithm. The weight weight(class) of a given ground object is calculated as follows:
weight(class) = median of f(class) / f(class);
f(class) = frequency(class) / (image_count(class) × w × h);
The background weight weight(ground) of the ground object is calculated as follows:
weight(ground) = 1 - weight(class);
where frequency(class) is the total number of pixels of the ground object in the training samples; image_count(class) is the number of first remote sensing images in the training samples that contain the object (for example, for the grassland object, if the training samples comprise 100 first remote sensing images of which 80 contain grassland, then image_count(class) = 80); w is the number of pixels across the width of a first remote sensing image; h is the number of pixels along its length; and median of f(class) is the median of the f values taken over all ground-object classes. The median frequency balancing algorithm is well suited to computing object and background weights in the field of remote-sensing ground-object annotation, and yields more accurate weight data.
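Under the formulas above, the weight computation can be sketched in plain Python; the class names and counts below are hypothetical, not taken from the patent:

```python
from statistics import median

def class_weights(pixel_counts, image_counts, w, h):
    """Median-frequency-balancing weights.

    pixel_counts[c]  -- total pixels of class c over the training set
    image_counts[c]  -- number of images containing class c
    Returns per class the pair (weight(class), weight(ground)),
    following weight(class) = median of f / f(class) and
    weight(ground) = 1 - weight(class).
    """
    f = {c: pixel_counts[c] / (image_counts[c] * w * h) for c in pixel_counts}
    med = median(f.values())
    return {c: (med / f[c], 1 - med / f[c]) for c in f}

# Hypothetical training set of 4x4 tiles with two classes.
res = class_weights(
    pixel_counts={"grass": 800, "road": 100},
    image_counts={"grass": 80, "road": 50},
    w=4, h=4)
```

For the frequent grassland class here, f lies above the median, so its weight comes out below 1 (0.6) with background weight 0.4; the rarer road class receives a weight above 1, which is the balancing effect the algorithm intends.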
As shown in Fig. 2, in the second embodiment of the ground-object annotation device for remote sensing images of the present invention, the device further includes a fourth unit 4, configured to sample the periodic process of generating the trained model to obtain the current trained model, annotate a test sample with the current trained model, and compute, according to an error function, the error value between the annotation result and the true annotation. If the error value is less than or equal to an error threshold, the current trained model is judged correct; if the error value exceeds the error threshold, the first unit 1 obtains the error value and corrects the current trained model according to it.
In this preferred embodiment, the fourth unit 4 obtains the currently generated trained model and assesses it. If the assessment meets the requirement, the current model can accurately annotate the ground objects in second remote sensing images; if it does not, the error value obtained from the assessment is sent to the first unit 1, which combines this feedback with the training samples to obtain a new trained model, and the process repeats until the generated model passes assessment. Providing the fourth unit 4 thus guarantees the precision of the trained model. During model generation, if the training samples comprise, for example, 100 first remote sensing images, they may be input to the first unit 1 in ten batches of 10 images each. The first unit 1 first extracts the annotation features of the first batch and generates a first trained model; the fourth unit 4 then assesses it; if the assessment fails, the first unit 1 combines the error value with the second batch to generate a second trained model; this process repeats until the currently generated model passes assessment. If, after all ten batches have been input, the resulting model still fails assessment, the process may be repeated or new training samples supplied. Here, a test sample is a remote sensing image whose ground objects have already been annotated; the fourth unit 4 assesses the current trained model by computing, according to the error function, the error value between the model's annotation of the test sample and the true annotation.
Fig. 4 is a structural schematic of the fourth unit in the second embodiment of the ground-object annotation device for remote sensing images of the present invention. As shown in Fig. 4, the fourth unit 4 includes an input module 41, a computing module 42, a determination module 43, and an output module 44. The input module 41 is used to obtain the test sample and the current trained model, and to annotate the test sample according to the current trained model; the computing module 42 is used to obtain the annotation result and to compute, with the error function, the error value between the annotation result and the true annotation of the test sample; the determination module 43 is used to compare the error value with the error threshold, and if the error value is less than or equal to the threshold, the current trained model is judged correct; if the error value exceeds the threshold, the output module 44 transmits it to the first unit 1. With the input module 41 annotating the test sample with the current trained model, the computing module 42 computing the error value, the determination module 43 comparing it with the predetermined error threshold, and the output module 44 outputting the comparison result, this arrangement and connection of modules effectively assesses the current trained model and provides data support for its correction.
Further, when assessing the current trained model, the error function is preferably the logarithmic loss function, whose assessment is accurate and fast.
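A minimal sketch of the logarithmic loss the fourth unit 4 could apply per pixel (the binary form is shown; clipping with a small `eps` is a common safeguard the patent does not mention):

```python
import math

def log_loss(y_true, y_prob, eps=1e-15):
    """Binary logarithmic (cross-entropy) loss, averaged over pixels.

    y_true -- true labels (0 or 1) per pixel
    y_prob -- predicted probability of label 1 per pixel
    """
    total = 0.0
    for t, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

# A well-calibrated prediction yields a small loss ...
good = log_loss([1, 0, 1, 0], [0.9, 0.1, 0.8, 0.2])
# ... and a poor one a large loss.
bad = log_loss([1, 0, 1, 0], [0.2, 0.8, 0.3, 0.7])
```

The determination module would compare such an error value against the error threshold: the well-calibrated prediction here scores about 0.16, the poor one about 1.41.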
In one embodiment, the size of the first remote sensing images and of the second remote sensing image is 512x512 pixels. This pixel size effectively supports both the generation of the trained model and the annotation of the second remote sensing image; after the second remote sensing images have been annotated, they can be stitched together into a complete annotated remote sensing image.
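Cropping into fixed 512x512 tiles and later stitching implies a tile grid over the scene. A sketch of one possible grid computation in plain Python (the patent does not say how edge tiles are handled; shifting them inward so every crop stays 512x512 is an assumption):

```python
def tile_grid(width, height, tile=512):
    """Top-left corners of the tile x tile crops covering an image.

    Tiles that would overhang the right or bottom edge are shifted
    inward so every crop is exactly tile x tile (one simple policy).
    """
    xs = list(range(0, max(width - tile, 0) + 1, tile))
    ys = list(range(0, max(height - tile, 0) + 1, tile))
    if xs[-1] + tile < width:
        xs.append(width - tile)   # last column shifted to fit
    if ys[-1] + tile < height:
        ys.append(height - tile)  # last row shifted to fit
    return [(x, y) for y in ys for x in xs]

corners = tile_grid(1280, 1024)  # a hypothetical scene size
```

For this hypothetical 1280x1024 scene the grid has six corners, with the right-hand column of tiles shifted to x = 768 so it still fits; stitching simply writes each annotated tile back at its corner.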
In one embodiment, the first remote sensing images and the second remote sensing image are three-channel images, and the second remote sensing image after annotation is a single-channel image. Remote sensing images are usually three-channel images (i.e. RGB colour images); the annotated second remote sensing image is a single-channel image, that is, a grayscale image, which displays the annotation result intuitively. In one embodiment, the first remote sensing images may be converted to single-channel images after annotation, and the ground-object annotation features are then extracted from the single-channel images; this shortens the generation time of the trained model, and, since the contrast of a single-channel image is higher, the first unit 1 can extract the annotation features effectively, enhancing the accuracy of the trained model.
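A sketch of the three-channel to single-channel conversion mentioned above; the patent does not specify the formula, so the common ITU-R BT.601 luma weights are assumed here:

```python
def to_grayscale(rgb_pixels):
    """Collapse 3-channel RGB pixels to one channel.

    Uses the ITU-R BT.601 luma weights (an assumption; the patent
    only says the converted image is single-channel grayscale).
    """
    return [round(0.299 * r + 0.587 * g + 0.114 * b)
            for r, g, b in rgb_pixels]

gray = to_grayscale([(255, 255, 255), (0, 0, 0), (255, 0, 0)])
```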
In the second embodiment of the ground-object annotation device for remote sensing images of the present invention, the first unit 1 generates the trained model with a U-net neural network. A U-net neural network can accurately extract the ground-object annotation features in the training samples and form the trained model; at the same time, it requires fewer training samples and the generation time of the trained model is shorter.
Fig. 7 schematically illustrates the U-net neural network model used in the second embodiment of the ground-object annotation device for remote sensing images of the present invention. As shown in Fig. 7, the U-net neural network includes first convolutional layers 51, pooling layers 52, deconvolutional layers 53, and second convolutional layers 54, and is configured as follows: one pooling layer 52 is provided between every two first convolutional layers 51; one deconvolutional layer 53 is provided between the last first convolutional layer and the first second convolutional layer; and one deconvolutional layer 53 is provided between every two second convolutional layers 54. The first convolutional layers 51 perform convolution, the pooling layers 52 perform pooling, the deconvolutional layers 53 perform deconvolution, and the second convolutional layers 54 perform convolution. The process by which the U-net neural network of the second embodiment is trained to obtain the trained model is as follows. First, the first remote sensing image is downsampled through the first convolutional layers 51 and pooling layers 52 of the U-net network. Each first convolutional layer 51 convolves the incoming image with its convolution kernels; each layer may perform a single convolution or several consecutive convolutions. The first remote sensing image first passes through the first of the first convolutional layers; the image formed after convolution enters the first pooling layer for pooling; the pooled image then enters the second of the first convolutional layers for convolution; and this process repeats until downsampling is complete. In this embodiment, downsampling is considered complete after four downsampling steps (in each step the image passes through one first convolutional layer 51 and one pooling layer 52). The downsampled image is then upsampled through the deconvolutional layers 53 and second convolutional layers 54 of the U-net network. The image first passes through the first deconvolutional layer 53 for deconvolution, the deconvolution of each deconvolutional layer 53 corresponding one-to-one with the pooling of a pooling layer 52; the deconvolved image enters the first of the second convolutional layers for convolution, where again each second convolutional layer 54 may perform one or several consecutive convolutions; the convolved image then enters the second deconvolutional layer for deconvolution; and this process repeats until upsampling is complete. In this preferred embodiment, upsampling is considered complete after four upsampling steps (in each step the image passes through one deconvolutional layer 53 and one second convolutional layer 54). The image produced by passing the first remote sensing image through the above U-net network is compared with the ground-object annotation image of that first remote sensing image, and the convolution kernels are updated in real time according to the comparison result. After many such training iterations, the resulting U-net network (the trained model) can annotate the ground objects in a second remote sensing image.
With the above structure of the U-net neural network, as training samples are continually input into the network, the convolution kernels are adjusted in real time through the network's own feedback mechanism as the number of training samples grows. Since the convolution kernels are a characterization of the ground-object annotation features in the training samples, the U-net network thereby extracts those features and becomes a trained U-net network; a second remote sensing image input into the trained network is then annotated automatically. In this embodiment, the U-net network, with its four downsampling stages and four upsampling stages, can extract the annotation features in the training samples effectively and rapidly, so that a highly accurate trained model can be generated more quickly.
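The size arithmetic of the four downsampling and four upsampling stages can be checked with a short sketch (assuming 2x2 pooling halves the image, deconvolution doubles it, and the convolutions preserve size via padding; these specifics are not stated in the patent):

```python
def unet_sizes(size=512, stages=4):
    """Spatial size of the feature map through the sketched U-net.

    Each downsampling stage halves the image (2x2 pooling); each
    upsampling stage doubles it (deconvolution).  Convolutions are
    assumed to preserve the spatial size.
    """
    down = [size]
    for _ in range(stages):
        down.append(down[-1] // 2)   # pooling layer halves the map
    up = [down[-1]]
    for _ in range(stages):
        up.append(up[-1] * 2)        # deconvolutional layer doubles it
    return down, up

down, up = unet_sizes()
```

A 512x512 first remote sensing image thus contracts to 32x32 at the bottom of the U and is restored to 512x512 on output, matching the tile size of the earlier embodiment.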
Fig. 8 schematically illustrates the convolution of a single pixel by a first convolutional layer in the second embodiment of the ground-object annotation device for remote sensing images of the present invention; Fig. 9 schematically illustrates a pooling step in the second embodiment. In the second embodiment, the convolution kernel of each first convolutional layer 51 comprises two one-dimensional kernels, a 1xN kernel and an Nx1 kernel, where N is a positive integer greater than or equal to 2. Taking a 512x512-pixel first remote sensing image and N = 3 as an example: during convolution, the two one-dimensional kernels are applied to each pixel, so 262,144 convolutions, that is 512x512 convolutions, are performed, and the resulting values are inserted in order into a 512x512 matrix, completing one convolution pass. For example, for the pixel matrix shown in Fig. 8, convolving the pixel in the first row and first column with the two one-dimensional kernels shown yields a new pixel value of 21 (21 = 1*1 + 1*2 + 1*3 + 1*1 + 1*5 + 1*9). Composing the kernel from two one-dimensional kernels speeds up the processing of the first and second remote sensing images in the U-net network, while the two one-dimensional kernels also deepen the network and increase its non-linearity, achieving a better annotation result. As shown in Fig. 9, this embodiment uses the max-pooling rule: within each 2x2 block of pixels, the largest pixel value is selected to replace the original 2x2 block.
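The 2x2 max-pooling rule of Fig. 9 is simple to state in code; a plain-Python sketch on a small matrix (the values are illustrative only):

```python
def max_pool_2x2(img):
    """2x2 max pooling: each 2x2 block is replaced by its maximum,
    halving both dimensions (even sizes assumed, as with 512x512)."""
    return [[max(img[y][x], img[y][x + 1],
                 img[y + 1][x], img[y + 1][x + 1])
             for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]

pooled = max_pool_2x2([[1, 3, 2, 4],
                       [5, 6, 1, 0],
                       [7, 2, 9, 8],
                       [1, 1, 3, 2]])
```

Each 2x2 block collapses to its maximum, so a 4x4 input becomes 2x2; this is how one pooling layer halves the image during downsampling.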
Fig. 10 schematically illustrates a second remote sensing image; Fig. 11 shows the output obtained by processing Fig. 10 with the second embodiment of the ground-object annotation device for remote sensing images of the present invention. The second embodiment achieves a good ground-object annotation result on remote sensing images.
Fig. 5 is a flowchart of the first embodiment of the ground-object annotation method for remote sensing images of the present invention; as shown in Fig. 5, the first embodiment of the method includes the following steps:
T01: extracting the ground-object annotation features of training samples to generate a trained model;
T02: annotating the ground objects in a second remote sensing image according to the trained model;
wherein the training samples include a plurality of first remote sensing images on which ground-object annotation has been completed.
By extracting the annotation features of the training samples to generate the trained model and then annotating the ground objects in the second remote sensing image according to that model, automatic annotation of the ground objects in the second remote sensing image is realized.
Fig. 6 is a flowchart of the second embodiment of the ground-object annotation method for remote sensing images of the present invention; as shown in Fig. 6, the second embodiment of the method includes the following steps:
S01: cropping a remote sensing image into several first remote sensing images;
S02: selecting a plurality of the first remote sensing images and annotating their ground objects to form training samples;
S031: performing dilation processing on the ground-object annotation portions in the training samples;
S032: calculating the weight of each ground object and its background weight;
S04: extracting the ground-object annotation features of the training samples and generating a trained model according to the object weights and the background weights;
S05: judging whether the error value of the current trained model exceeds the error threshold; if the error value exceeds the threshold, feeding it back into step S04 and generating a new trained model in combination with the training samples; if the error value does not exceed the threshold, outputting the trained model;
S06: annotating the ground objects in a second remote sensing image with the generated trained model.
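Steps S04 and S05 form a generate-assess-correct loop. A toy sketch in plain Python, in which the "model" is just a running average and `fit` stands in for the U-net training of S04 (all names and numbers here are illustrative, not from the patent):

```python
def train_until_threshold(batches, evaluate, threshold):
    """Sketch of the S04/S05 loop: generate a model from each batch,
    assess it, and stop once the error value satisfies the
    'less than or equal to the threshold' test."""
    def fit(model, batch):
        # Placeholder for S04: mix each new batch into the "model".
        batch_mean = sum(batch) / len(batch)
        return batch_mean if model is None else (model + batch_mean) / 2

    model, error = None, None
    for batch in batches:
        model = fit(model, batch)   # S04: generate/correct the model
        error = evaluate(model)     # S05: assess the current model
        if error <= threshold:
            return model, error     # model passes assessment
    return model, error             # caller may supply new samples

# Toy run: batches drift toward the target value 10.
model, err = train_until_threshold(
    batches=[[4, 6], [8, 10], [10, 10]],
    evaluate=lambda m: abs(m - 10),
    threshold=2)
```

With threshold 2, the toy run stops once the error value 1.5 satisfies the test of S05 and returns the current model.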
In the second embodiment of the ground-object annotation method for remote sensing images of the present invention, in step S032, the weight weight(class) of a given ground object is calculated as follows:
weight(class) = median of f(class) / f(class);
f(class) = frequency(class) / (image_count(class) × w × h);
The background weight weight(ground) of the ground object is calculated as follows:
weight(ground) = 1 - weight(class);
where frequency(class) is the total number of pixels of the ground object in the training samples; image_count(class) is the number of first remote sensing images in the training samples that contain the object; w is the number of pixels across the width of a first remote sensing image; h is the number of pixels along its length; and median of f(class) is the median of the f values taken over all ground-object classes.
In the second embodiment of the ground-object annotation method for remote sensing images of the present invention, step S05 comprises the following process: sampling the periodic process of generating the trained model to obtain the current trained model; annotating a test sample with the current trained model; and computing, according to the error function, the error value between the annotation result and the true annotation. If the error value is less than or equal to the error threshold, the current trained model is judged correct; if the error value exceeds the threshold, the current trained model is corrected according to the error value. Preferably, the error function is the logarithmic loss function.
As the embodiments of the ground-object annotation method for remote sensing images, the computer device, and the computer-readable storage medium in this specification are substantially similar to the embodiments of the ground-object annotation device, the related details can be found in the description of the device embodiments and are not repeated here.
An embodiment of another aspect of the present invention provides a computer device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the steps of the ground-object annotation method for remote sensing images.
An embodiment of a further aspect of the present invention provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of the ground-object annotation method for remote sensing images. The computer-readable storage medium may include, but is not limited to, any type of disk (including floppy disks, optical discs, DVDs, and CD-ROMs), ROM, RAM, EPROM, EEPROM, DRAM, VRAM, mini-drives, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of medium or device suitable for storing instructions and/or data. The processing device may be, for example, a personal computer, a general-purpose or special-purpose digital computer, a computing device, a machine, or any other device suitable for processing data.
In embodiments of the present invention, the processor is the control centre of the computer system: it links the various parts of the whole computer system through various interfaces and connections, and executes the various functions of the computer system and/or processes data by running or executing the software programs, units, and modules stored in the memory and calling the data stored in the memory. The processor may consist of integrated circuits (ICs), for example a single packaged IC, or several packaged ICs of the same or different functions connected together. In embodiments of the present invention, the processor may comprise at least one central processing unit (CPU); the CPU may have a single computing core or multiple computing cores, and may be the processor of a physical machine or of a virtual machine.
Those skilled in the art will clearly understand that the technical solution of the present invention can be realized by software and/or hardware. The terms "module" and "unit" in this specification refer to software and/or hardware that completes a specific function independently or in cooperation with other components, where the hardware may be, for example, an FPGA (Field-Programmable Gate Array) or an IC (Integrated Circuit).
The ground-object annotation device and method for remote sensing images, the computer-readable storage medium, and the computer device provided by the embodiments of the present invention realize automatic annotation of the ground objects in remote sensing images; the annotation is fast and highly accurate, while the requirements on the clarity of the remote sensing image are relatively low.
In the present invention, the terms "first", "second", and "third" are used for descriptive purposes only and should not be understood as indicating or implying relative importance; the term "multiple" refers to two or more, unless expressly limited otherwise. Terms such as "install", "connected", "connect", and "fix" should be understood broadly; for example, "connect" may be a fixed connection, a detachable connection, or an integral connection, and "connected" may be a direct connection or an indirect connection through an intermediary. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
In the description of the present invention, it should be understood that orientation or positional relationships indicated by terms such as "upper" and "lower" are based on the orientations or positional relationships shown in the drawings, are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or unit referred to must have a specific orientation or be configured and operated in a specific orientation; they therefore cannot be construed as limiting the present invention.
In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "a specific embodiment", and the like mean that particular features, structures, materials, or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example; moreover, the particular features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (29)

1. A ground-object annotation device for remote sensing images, characterized by including:
a first unit, configured to obtain training samples and extract the ground-object annotation features of the training samples to generate a trained model, wherein the training samples include a plurality of first remote sensing images on which ground-object annotation has been completed;
a second unit, configured to obtain the trained model and annotate the ground objects in a second remote sensing image according to the trained model.
2. The ground-object annotation device for remote sensing images according to claim 1, characterized in that the device further includes a third unit, configured to annotate, according to instructions, the ground objects in the plurality of first remote sensing images to form the training samples.
3. The ground-object annotation device for remote sensing images according to claim 2, characterized in that the third unit includes:
a cropping module, which crops a remote sensing image to obtain several first remote sensing images;
an annotation module, which selects a plurality of first remote sensing images among the several first remote sensing images and, according to instructions, performs ground-object annotation on the selected first remote sensing images to form the training samples.
4. The ground-object annotation device for remote sensing images according to claim 3, characterized in that the third unit further includes a dilation module, which performs dilation processing on the ground-object annotation portions in the training samples; the first unit obtains the dilated training samples.
5. The ground-object annotation device for remote sensing images according to claim 3, characterized in that the third unit further includes a weight module for calculating the weight of each ground object in the training samples and the background weight of each ground object; the first unit generates the trained model of a given ground object according to the training samples, the weight of that object, and its background weight.
6. The ground-object annotation device for remote sensing images according to claim 5, characterized in that:
the weight weight(class) of a given ground object is calculated as follows:
weight(class) = median of f(class) / f(class);
f(class) = frequency(class) / (image_count(class) × w × h);
the background weight weight(ground) of the ground object is calculated as follows:
weight(ground) = 1 - weight(class);
wherein frequency(class) is the total number of pixels of the ground object in the training samples; image_count(class) is the number of first remote sensing images in the training samples that contain the object; w is the number of pixels across the width of a first remote sensing image; h is the number of pixels along its length; and median of f(class) is the median of the f values taken over all ground-object classes.
7. The ground-object annotation device for remote sensing images according to claim 1, characterized in that the device further includes a fourth unit, configured to sample the periodic process of generating the trained model to obtain the current trained model, annotate a test sample with the current trained model, and compute, according to an error function, the error value between the annotation result and the true annotation; if the error value is less than or equal to an error threshold, the current trained model is judged correct; if the error value exceeds the error threshold, the first unit obtains the error value and corrects the current trained model according to it.
8. The ground-object annotation device for remote sensing images according to claim 7, characterized in that the fourth unit includes an input module, a computing module, a determination module, and an output module; the input module is used to obtain the test sample and the current trained model, and to annotate the test sample according to the current trained model; the computing module is used to obtain the annotation result and to compute, with the error function, the error value between the annotation result and the true annotation of the test sample; the determination module is used to compare the error value with the error threshold, and if the error value is less than or equal to the error threshold, the current trained model is judged correct; if the error value exceeds the error threshold, the output module transmits the error value to the first unit.
9. The ground-object annotation device for remote sensing images according to claim 7, characterized in that the error function is the logarithmic loss function.
10. The ground-object annotation device for remote sensing images according to claim 1, characterized in that the size of the first remote sensing images and of the second remote sensing image is 512x512 pixels.
11. The ground-object annotation device for remote sensing images according to claim 1, characterized in that the first remote sensing images and the second remote sensing image are three-channel images, and the second remote sensing image after annotation is a single-channel image.
12. The ground-object annotation device for remote sensing images according to any one of claims 1-11, characterized in that the first unit generates the trained model using a U-net neural network.
13. The ground-object annotation device for remote sensing images according to claim 12, characterized in that the U-net neural network includes first convolutional layers, pooling layers, deconvolutional layers, and second convolutional layers, and is configured such that one pooling layer is provided between every two first convolutional layers, one deconvolutional layer is provided between the last first convolutional layer and the first second convolutional layer, and one deconvolutional layer is provided between every two second convolutional layers; the first convolutional layers are used for convolution, the pooling layers for pooling, the deconvolutional layers for deconvolution, and the second convolutional layers for convolution.
14. The ground-object annotation device for remote sensing images according to claim 12, characterized in that the convolution kernel of each first convolutional layer comprises two one-dimensional kernels, a 1xN kernel and an Nx1 kernel, where N is a positive integer greater than or equal to 2.
15. A ground-object annotation method for remote sensing images, characterized by including:
extracting the ground-object annotation features of training samples to generate a trained model, the training samples including a plurality of first remote sensing images on which ground-object annotation has been completed;
annotating the ground objects in a second remote sensing image according to the trained model.
16. The ground object annotation method for remote sensing images according to claim 15, wherein the training sample is obtained by: annotating the ground objects in the plurality of first remote sensing images according to received instructions.
17. The ground object annotation method for remote sensing images according to claim 16, wherein the training sample is obtained by: cropping a remote sensing image into a number of first remote sensing images, selecting a plurality of first remote sensing images from among them, and annotating the ground objects in the selected first remote sensing images according to received instructions to form the training sample.
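The cropping step of claim 17 can be sketched as cutting a large scene into a non-overlapping grid of fixed-size tiles. The grid layout, the 512-pixel tile size (borrowed from claim 23) and the choice to drop edge remainders smaller than one tile are all illustrative assumptions; the claim itself does not fix a cropping scheme.

```python
def crop_tiles(width, height, tile=512):
    """Return the top-left (x, y) corners of non-overlapping
    tile x tile crops covering a width x height scene; edge
    remainders narrower than one tile are simply dropped."""
    return [(x, y)
            for y in range(0, height - tile + 1, tile)
            for x in range(0, width - tile + 1, tile)]

print(len(crop_tiles(2048, 1536)))  # 4 columns x 3 rows = 12 tiles
```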
18. The ground object annotation method for remote sensing images according to claim 17, further comprising: performing dilation on the ground object annotation portions of the training sample; and extracting the ground object annotation features from the dilated training sample to generate the training model.
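The dilation of claim 18 thickens annotated regions by roughly one pixel per pass, which helps thin structures such as roads or field boundaries survive downsampling. A minimal pure-Python sketch with a 4-connected structuring element (the structuring element and iteration count are illustrative assumptions, as the claim does not specify them):

```python
def dilate(mask, iterations=1):
    """4-connected binary dilation of a 0/1 mask given as nested lists."""
    h, w = len(mask), len(mask[0])
    for _ in range(iterations):
        out = [row[:] for row in mask]
        for i in range(h):
            for j in range(w):
                if mask[i][j]:  # spread each foreground pixel to its 4 neighbours
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            out[ni][nj] = 1
        mask = out
    return mask

m = [[0, 0, 0],
     [0, 1, 0],
     [0, 0, 0]]
print(dilate(m))  # [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
```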
19. The ground object annotation method for remote sensing images according to claim 17, further comprising: calculating, for each ground object in the training sample, the weight of the ground object and the background weight of the ground object; and generating the training model for a given ground object according to the training sample, the weight of that ground object and the background weight of that ground object.
20. The ground object annotation method for remote sensing images according to claim 19, wherein the weight weight(class) of a given ground object is calculated as:
weight(class) = median of f(class) / f(class);
f(class) = frequency(class) / (image_count(class) * w * h);
and the background weight weight(ground) of the ground object is calculated as:
weight(ground) = 1 - weight(class);
where frequency(class) is the total number of pixels of the ground object in the training sample; image_count(class) is the number of first remote sensing images in the training sample that contain the ground object; w is the number of pixels along the width of a first remote sensing image; h is the number of pixels along the length of a first remote sensing image; and median of f(class) is the median of f(class) taken over all ground object classes.
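The formulas of claim 20 are a form of median-frequency balancing: classes rarer than the median get weights above 1, frequent classes get weights below 1. A small pure-Python sketch of the computation; the class names and pixel counts are made-up illustrative values, not data from the patent.

```python
from statistics import median

def class_weights(stats, w, h):
    """Compute weight(class) and weight(ground) per claim 20.
    stats maps class name -> (frequency, image_count), where frequency
    is the class's total pixel count in the training sample and
    image_count the number of first remote sensing images containing it;
    w, h are the tile width and height in pixels."""
    f = {c: freq / (cnt * w * h) for c, (freq, cnt) in stats.items()}
    med = median(f.values())
    weights = {c: med / fc for c, fc in f.items()}          # weight(class)
    background = {c: 1 - wc for c, wc in weights.items()}   # weight(ground)
    return weights, background

stats = {"water": (400_000, 10), "crop": (1_500_000, 20), "road": (120_000, 8)}
wts, bg = class_weights(stats, 512, 512)
```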
21. The ground object annotation method for remote sensing images according to claim 15, further comprising: periodically sampling the training-model generation process to obtain the current training model; annotating a test sample with the current training model and calculating, with an error function, the error value between the annotation result and the ground-truth annotation; if the error value is less than or equal to an error threshold, judging the current training model to be correct; and if the error value is greater than the error threshold, correcting the current training model according to the error value.
22. The ground object annotation method for remote sensing images according to claim 21, wherein the error function is a logarithmic loss function.
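Claim 22 names a logarithmic loss function but does not spell out its form; a standard binary definition, written in pure Python as an illustrative assumption, compares ground-truth labels with predicted foreground probabilities per pixel:

```python
import math

def log_loss(y_true, y_pred, eps=1e-12):
    """Mean binary logarithmic loss between 0/1 labels and
    predicted probabilities (standard definition; the patent
    names the function but does not fix its exact form)."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

loss = log_loss([1, 0, 1, 1], [0.9, 0.1, 0.8, 0.35])
```

A model that predicts 0.5 everywhere scores ln 2 ≈ 0.693, a common baseline against which the error threshold of claim 21 could be set.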
23. The ground object annotation method for remote sensing images according to claim 15, wherein the first remote sensing images and the second remote sensing image each have a size of 512x512 pixels.
24. The ground object annotation method for remote sensing images according to claim 15, wherein the first remote sensing images and the second remote sensing image are three-channel images, and the annotated second remote sensing image is a single-channel image.
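The channel counts of claim 24 suggest that annotation collapses a three-channel colour-coded mask into a single channel of class indices. The colour-to-class palette below is an illustrative assumption; the claim fixes only the channel counts, not the encoding.

```python
def rgb_to_class_mask(rgb_mask, palette):
    """Collapse a three-channel annotation image (nested lists of
    (r, g, b) tuples) into a single-channel class-index mask;
    colours absent from the palette map to 0 (background)."""
    return [[palette.get(px, 0) for px in row] for row in rgb_mask]

palette = {(255, 0, 0): 1,   # hypothetical colour for class 1
           (0, 0, 255): 2}   # hypothetical colour for class 2
mask = [[(255, 0, 0), (0, 0, 0)],
        [(0, 0, 255), (255, 0, 0)]]
print(rgb_to_class_mask(mask, palette))  # [[1, 0], [2, 1]]
```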
25. The ground object annotation method for remote sensing images according to any one of claims 15-24, wherein the training model is generated using a U-net neural network.
26. The ground object annotation method for remote sensing images according to claim 25, wherein the U-net neural network comprises first convolutional layers, pooling layers, deconvolution layers and second convolutional layers, and is configured such that: one pooling layer is provided between every two first convolutional layers; one deconvolution layer is provided between the last first convolutional layer and the first second convolutional layer; and one deconvolution layer is provided between every two second convolutional layers; the first convolutional layers perform convolution operations, the pooling layers perform pooling operations, the deconvolution layers perform deconvolution operations, and the second convolutional layers perform convolution operations.
27. The ground object annotation method for remote sensing images according to claim 25, wherein the convolution kernel of each first convolutional layer comprises two one-dimensional kernels, a 1xN kernel and an Nx1 kernel, where N is a positive integer greater than or equal to 2.
28. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 15-27.
29. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the method according to any one of claims 15-27.
CN201810146580.3A 2018-02-12 2018-02-12 Ground object annotation device and method for remote sensing images Pending CN108764263A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810146580.3A CN108764263A (en) 2018-02-12 2018-02-12 Ground object annotation device and method for remote sensing images

Publications (1)

Publication Number Publication Date
CN108764263A true CN108764263A (en) 2018-11-06

Family

ID=63980074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810146580.3A Pending CN108764263A (en) 2018-02-12 2018-02-12 Ground object annotation device and method for remote sensing images

Country Status (1)

Country Link
CN (1) CN108764263A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101770584A (en) * 2009-12-30 2010-07-07 重庆大学 Method for extracting identification features from hyperspectral remote sensing data
CN103823845A (en) * 2014-01-28 2014-05-28 浙江大学 Method for automatic annotation of remote sensing images based on deep learning
CN105740894A (en) * 2016-01-28 2016-07-06 北京航空航天大学 Semantic annotation method for hyperspectral remote sensing images
CN107016665A (en) * 2017-02-16 2017-08-04 浙江大学 CT pulmonary nodule detection method based on deep convolutional neural networks
CN107273502A (en) * 2017-06-19 2017-10-20 重庆邮电大学 Image geotagging method based on spatial cognitive learning
CN107403197A (en) * 2017-07-31 2017-11-28 武汉大学 Crack identification method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Guojie Huang et al., "Vehicle segmentation from remote sensing images using the small object segmentation convolutional network", 2017 4th International Conference on Systems and Informatics (ICSAI) *
Michael Kampffmeyer et al., "Semantic Segmentation of Small Objects and Modeling of Uncertainty in Urban Remote Sensing Images Using Deep Convolutional Neural Networks", 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875649A (en) * 2018-06-22 2018-11-23 北京佳格天地科技有限公司 Terrain classification method, system, device and storage medium
CN109670060A (en) * 2018-12-10 2019-04-23 北京航天泰坦科技股份有限公司 Semi-automatic annotation method for remote sensing images based on deep learning
CN111144487A (en) * 2019-12-27 2020-05-12 二十一世纪空间技术应用股份有限公司 Method for establishing and updating a remote sensing image sample library
CN111144487B (en) * 2019-12-27 2023-09-26 二十一世纪空间技术应用股份有限公司 Method for establishing and updating a remote sensing image sample library
CN112597855A (en) * 2020-12-15 2021-04-02 中国农业大学 Crop lodging degree identification method and device
CN112597855B (en) * 2020-12-15 2024-04-16 中国农业大学 Crop lodging degree identification method and device
CN112527442A (en) * 2020-12-24 2021-03-19 中国科学院地理科学与资源研究所 Multi-dimensional environmental data display method, device, medium and terminal equipment
CN113011584A (en) * 2021-03-18 2021-06-22 广东南方数码科技股份有限公司 Coding model training method, coding method, device and storage medium
CN113011584B (en) * 2021-03-18 2024-04-16 广东南方数码科技股份有限公司 Coding model training method, coding method, device and storage medium
CN116521041A (en) * 2023-05-10 2023-08-01 北京天工科仪空间技术有限公司 Front-end-page-based remote sensing data labeling method, computer equipment and medium

Similar Documents

Publication Publication Date Title
CN108764263A Ground object annotation device and method for remote sensing images
Yu et al. Learning a discriminative feature network for semantic segmentation
CN109829849B Training data generation method, device and terminal
CN111833237B Image registration method based on convolutional neural network and local homography transformation
CN108470185A Ground object annotation device and method for satellite images
CN107767413A Image depth estimation method based on convolutional neural networks
JP6213745B2 Image processing method and apparatus
CN108647634A Frame mask lookup method and device, computer equipment and storage medium
CN105120185B Video image matting method and apparatus
CN105893925A Human hand detection method and device based on skin colour
CN105139385B Image visual saliency region detection method based on deep autoencoder reconstruction
CN106846336A Method and device for extracting a foreground image and replacing an image background
CN110689000B Vehicle license plate recognition method based on license plate samples generated in complex environments
CN104408708A Image salient object detection method based on global and local low-rank decomposition
CN111160501A Construction method and device for a two-dimensional code training data set
CN110427933A Water gauge recognition method based on deep learning
CN112950554B Lung lobe segmentation optimization method and system based on lung segmentation
CN114387270B Image processing method and device, computer equipment and storage medium
CN108388905A Illuminant estimation method based on convolutional neural networks and neighbourhood context
CN106651797A Method and device for determining the effective region of a signal lamp
CN110251942A Method and device for controlling a virtual character in a game scene
CN110163864A Image segmentation method and device, computer equipment and storage medium
CN111882559B ECG signal acquisition method and device, storage medium and electronic device
CN112132232A Medical image classification and labeling method, system and server
CN113449878B Distributed incremental data learning method, system, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181106