CN114882379B - Accurate screening method for remote sensing image group - Google Patents

Accurate screening method for remote sensing image group

Info

Publication number
CN114882379B
CN114882379B (application CN202210777516.1A)
Authority
CN
China
Prior art keywords
image group
image
images
group
screening
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210777516.1A
Other languages
Chinese (zh)
Other versions
CN114882379A (en
Inventor
贾若愚
陈宇
李洁
段红伟
邹圣兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shuhui Spatiotemporal Information Technology Co ltd
Original Assignee
Beijing Shuhui Spatiotemporal Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shuhui Spatiotemporal Information Technology Co ltd filed Critical Beijing Shuhui Spatiotemporal Information Technology Co ltd
Priority to CN202210777516.1A priority Critical patent/CN114882379B/en
Publication of CN114882379A publication Critical patent/CN114882379A/en
Application granted granted Critical
Publication of CN114882379B publication Critical patent/CN114882379B/en
Priority to PCT/CN2023/077837 priority patent/WO2024007598A1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention provides an accurate screening method of a remote sensing image group, which relates to the technical field of remote sensing and comprises the following steps: selecting a target area, acquiring an initial image group in the target area, and calculating a detection score of the initial image group; dividing the initial image group into a first image group and a second image group according to the detection scores; expanding the first image group based on a first expansion strategy to obtain a third image group; calculating the evaluation score of the third image group, and screening the third image group according to the evaluation score to obtain a preferred image group; and expanding the preferred image group according to a second expansion strategy to obtain a final image group. The invention can greatly save manpower, material resources and financial resources and achieve the aim of quickly and accurately screening the images required by the user.

Description

Accurate screening method for remote sensing image group
Technical Field
The invention belongs to the technical field of remote sensing, and particularly relates to an accurate screening method of a remote sensing image group.
Background
The earth is the common home of humankind, and as human civilization has advanced, the continuous discovery and understanding of the unknown world by technical means has become a powerful driving force for that progress. Because the earth's surface is so vast, human knowledge of its own habitat, from the local to the global scale, remained very limited even after a long history of development. It was not until the middle of the 20th century, with the advent of satellite remote sensing technology, that humans truly began to build a relatively continuous picture of the whole earth by acquiring image data of its surface through these "eyes in the sky". In particular, since the beginning of the twenty-first century, with rapid technological progress and the fast development of remote sensing, the number of remote sensing satellites has grown continuously and the earth-observation data acquired by humans has accumulated rapidly, with the data volume now reaching the EB level; the big data technologies that emerged in response provide technical support for processing such massive data and mining information from it.
Space-based earth observation provides multi-temporal, wide-coverage, stereoscopic remote sensing image data for earth-system science, making it possible to observe, understand, simulate and predict the behaviour of the earth system as a whole. Remote sensing image data acquired by satellites, aircraft and other platforms carries rich spatial, temporal and attribute information and has become an important source for studying and addressing key problems such as global change, disaster prevention and mitigation, and sustainable development. Most traditional remote sensing data screening methods retrieve results that satisfy given conditions from massive data using the query engine of a spatial database. However, such methods have two problems: on the one hand, the query conditions are relatively rigid; on the other hand, the screened remote sensing data contains a large amount of redundancy, so that the optimal data still has to be picked out manually, which is time-consuming, labour-intensive and inefficient. How to screen out the optimal remote sensing image data quickly and automatically, and thereby reduce the workload and time of manual selection, is therefore very important.
Disclosure of Invention
In view of these technical problems, the invention provides a method that, for a target area selected by a user, obtains a set of images of moderate quantity and high quality through the steps of preliminary screening, expansion, redundancy removal and a further expansion.
The invention provides an accurate screening method of a remote sensing image group, which comprises the following steps:
s1, selecting a target area, acquiring an initial image group in the target area, and calculating the detection score of the initial image group;
s2, dividing the initial image group into a first image group and a second image group according to the detection scores;
s3, expanding the first image group based on a first expansion strategy to obtain a third image group;
s4, calculating the evaluation score of the third image group, and screening the third image group according to the evaluation score to obtain a preferred image group;
and S5, expanding the preferred image group according to the second expansion strategy to obtain a final image group.
In an embodiment of the present invention, the step S1 includes:
carrying out cloud amount detection on the images in the initial image group to obtain cloud amount;
detecting quality items of the images in the initial image group to obtain quality scores, wherein the detection of the quality items comprises strip detection, high exposure detection, edge detection and histogram detection;
and integrating the cloud amount and the quality score to obtain a detection score, wherein the numerical range of the detection score is 0-100.
In an embodiment of the present invention, the step S2 includes:
and sorting the initial image groups from high to low according to the detection scores, and screening from positive sequence until the coverage rate of the union set of the screened images on the target area reaches a coverage threshold, taking the screened images as a first image group, and taking the images except the first image group in the initial image group as a second image group.
In an embodiment of the present invention, the quality score is obtained by integrating the detection results of the individual quality items, and the numerical range of the quality score is 0 to 100.
In an embodiment of the present invention, the step S3 includes:
taking one image in the first image group as an image to be expanded;
determining a region to be expanded in the image to be expanded;
sorting the second image group from high to low according to the detection score, and selecting images from the second image group in that order to expand the region to be expanded until the coverage of the region to be expanded by the union of the selected images from the second image group reaches 100%;
and traversing all the images of the first image group, and combining the selected images of the second image group with the first image group to form a third image group.
In an embodiment of the present invention, the image to be expanded intersects with a union set of the remaining images in the first image group except for the image to be expanded, and the image to be expanded is divided into an intersecting region and a non-intersecting region, where the non-intersecting region is the region to be expanded.
In an embodiment of the present invention, the step S4 includes:
acquiring metadata of the initial image group, and analyzing the metadata to obtain time sequence data and sensor data;
calculating the variance of the time-series data and the sensor data, and integrating the quality score, the variance of the time-series data and the variance of the sensor data into a comprehensive score;
constructing an evaluation function based on the cloud content and the comprehensive score;
calculating the third image group by using the evaluation function to obtain an evaluation score of the third image group;
and sorting the third image group from low to high according to the evaluation scores of the third image group, and screening from the positive sequence according to a screening strategy to obtain a preferred image group.
In an embodiment of the present invention, the screening strategy is:
the first step is as follows: sequentially selecting a single image as a pre-screening image from the positive sequence, and collecting the images except the pre-screening image in the third image group as a remaining image group;
the second step is that: setting a reservation rule, wherein the reservation rule is to reserve the corresponding pre-screened image if the coverage rate of the union set of the images in the rest image groups to the target area is reduced after the pre-screened image is removed, and execute a third step if the pre-screened image does not accord with the reservation rule;
the third step: calculating the average evaluation score of the third image group and the average evaluation score of the rest image groups, and if the average evaluation score of the rest image groups is lower than the average evaluation score of the third image group, reserving the corresponding pre-screening images;
the fourth step: if the evaluation scores of two or more images are the same in the images to be retained after the second step and the third step are executed, the images with the same evaluation scores are sorted from low to high according to the cloud content, the first image is retained, and the rest images are screened out.
In an embodiment of the present invention, the step S5 includes:
taking the images in the initial image group except the preferred image group as candidate image groups;
judging the candidate image group based on the second expansion strategy, and classifying the images meeting the requirements of the second expansion strategy into an expanded image group;
and expanding the preferred image group by utilizing the expanded image group to obtain a final image group.
In an embodiment of the present invention, the determining the candidate image group based on the second expansion policy, and the classifying the images meeting the requirement of the second expansion policy into the expanded image group includes:
respectively comparing one image in the candidate image group with all images in the preferred image group one by one,
if the coverage rate of no image in the preferred image group to the image in the candidate image group reaches 100%, calculating the evaluation score of the image in the candidate image group, and if the evaluation score is higher than the average evaluation score of all images in the preferred image group, classifying the image in the candidate image group into the extended image group;
if the coverage rate of one image in the preferred image group to the image in the candidate image group reaches 100%, calculating the evaluation scores of the two images, and if the evaluation score of the image in the candidate image group is higher than that of the image in the preferred image group, classifying the image in the candidate image group into the expanded image group;
if the coverage rate of two or more than two image subgroups in the preferred image group to the image in the candidate image group reaches 100%, comparing the evaluation score of the image in the candidate image group with the evaluation score of each image in the image subgroups, and if the evaluation score of the image in the candidate image group is higher than the lowest evaluation score in the image subgroups, classifying the image in the candidate image group into the extended image group.
The invention has the following beneficial effects. The invention provides an accurate screening method for a remote sensing image group, comprising: selecting a target area, acquiring an initial image group in the target area, and calculating detection scores of the initial image group; dividing the initial image group into a first image group and a second image group according to the detection scores; expanding the first image group based on a first expansion strategy to obtain a third image group; calculating evaluation scores of the third image group, and screening the third image group according to the evaluation scores to obtain a preferred image group; and expanding the preferred image group according to a second expansion strategy to obtain a final image group. The images are evaluated and screened as a whole based on the cloud amount, the composite score and the evaluation score; after the preliminary screening, first expansion, fine screening and second expansion, the resulting final image group contains an appropriate number of high-quality images, which can greatly improve the accuracy of interpretation and other applications in the target area. The screening process is fast and requires little computation, so it can greatly save manpower, material resources and financial resources and achieve the goal of quickly and accurately screening the images a user needs.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
FIG. 1 is a flow chart of a method according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. It should be noted that, unless otherwise conflicting, the embodiments and features of the embodiments of the present invention may be combined with each other, and the technical solutions formed are all within the scope of the present invention.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
Referring to fig. 1, the present invention provides a method for accurately screening a remote sensing image group, which includes:
s1, selecting a target area, acquiring an initial image group in the target area, and calculating the detection score of the initial image group;
s2, dividing the initial image group into a first image group and a second image group according to the detection scores;
s3, expanding the first image group based on a first expansion strategy to obtain a third image group;
s4, calculating the evaluation score of the third image group, and screening the third image group according to the evaluation score to obtain a preferred image group;
and S5, expanding the preferred image group according to the second expansion strategy to obtain a final image group.
Determining a target area according to an image of a required area provided by a user, and acquiring all images in the target area to form an initial image group, wherein the images in the initial image group are original image data.
And performing primary detection on all images in the initial image group to obtain detection scores. The method specifically comprises the following steps:
s11 performs cloud amount detection on the images in the initial image group, where the cloud amount detection method in this embodiment may be:
step one, judge whether each image in the initial image group is panchromatic or multispectral. If it is panchromatic, go directly to step two; if it is multispectral, convert it into a single-band brightness image using formula (1):

P(i,j) = min( R(i,j), G(i,j), B(i,j) )     (1)

where P(i,j) is the brightness value of the pixel at (i,j) in the converted brightness image, and R(i,j), G(i,j), B(i,j) are the brightness values of the red, green and blue bands of the pixel at (i,j) in the multispectral image. Clouds exhibit Mie scattering of sunlight and scatter strongly in every band, so on a multispectral image a cloud appears white and the brightness value of every band is very high. Non-cloud surface targets reflect sunlight diffusely and their reflectivity differs between bands, so on a multispectral image they appear coloured, the brightness values of the bands differ, and the minimum brightness value is low. Therefore, the single-band brightness image obtained by taking the minimum band brightness via formula (1) combines the brightness and saturation information of the multispectral image, and cloud and non-cloud targets are easily distinguished in it.
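As a concrete illustration of formula (1), the band-wise minimum can be computed in a single vectorised call. The following Python sketch is illustrative only; the function name and the use of NumPy are assumptions, not part of the patent.

```python
import numpy as np

def multispectral_to_luminance(red: np.ndarray, green: np.ndarray,
                               blue: np.ndarray) -> np.ndarray:
    """Formula (1): P(i,j) = min(R(i,j), G(i,j), B(i,j)).

    Clouds stay bright in every band, so the band-wise minimum keeps them
    bright, while coloured (non-cloud) surfaces drop to the brightness of
    their darkest band."""
    return np.minimum(np.minimum(red, green), blue)
```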
Step two, roughly estimate the dual brightness thresholds: the corresponding highest and lowest brightness thresholds are calculated from cloud-free and cloud-containing images. The maximum brightness threshold T_h is calculated over a certain number of cloud-free images in order to ensure the precision of cloud detection; the minimum brightness threshold T_l is calculated over a certain number of cloud-containing images in order to ensure the recall of cloud detection.
Step three, calculating an accurate brightness threshold value: analyzing an image histogram in the initial image group, and qualitatively screening a cloud-free image; and for the cloud-containing image, performing calculation based on the maximum inter-class variance by taking the roughly estimated brightness dual-threshold as a limiting condition to obtain an accurate brightness threshold.
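A minimal sketch of the threshold refinement in steps two and three, assuming an 8-bit luminance image: the maximum between-class variance (Otsu) criterion is evaluated only over the coarse interval [T_l, T_h]. The function name and NumPy-based implementation are illustrative, not taken from the patent.

```python
import numpy as np

def constrained_otsu(luminance: np.ndarray, t_low: int, t_high: int) -> int:
    """Otsu threshold (maximum between-class variance) restricted to the
    coarse dual-threshold interval [t_low, t_high], as in step three."""
    hist, _ = np.histogram(luminance, bins=256, range=(0, 256))
    prob = hist / hist.sum()
    best_t, best_var = t_low, -1.0
    for t in range(int(t_low), int(t_high) + 1):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue                       # all pixels on one side, skip
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t
```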
Step four, for the initial result after threshold segmentation, cloud regions whose area is smaller than a1 are treated as highlight noise, deleted and marked as non-cloud. After the highlight noise is removed, morphological dilation with a shape scale of a2 is applied to the cloud-containing regions; while dilating, the brightness of each newly added pixel and the brightness gradient in the dilation direction are evaluated and used as constraints on the dilation. In the constraint formula used for the dilation, G is the brightness of the newly added pixel, the gradient term is the brightness gradient in the dilation direction, and d is a constant with a value ranging from 0.05 to 0.25. After the dilation, non-cloud regions whose area is smaller than a3 are treated as tiny cloud seams, deleted and marked as cloud. The parameters a1, a2 and a3 are all configuration parameters.
The final cloud amount is obtained through the above steps.
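The post-processing in step four can be sketched with standard morphology routines. In the sketch below the following are assumptions: a1 and a3 are pixel-count thresholds, a2 is the structuring-element size, the brightness/gradient dilation constraint is approximated by a simple brightness test, and the cloud amount is returned as the fraction of cloud pixels.

```python
import numpy as np
from scipy import ndimage

def postprocess_cloud_mask(mask: np.ndarray, luminance: np.ndarray,
                           a1: int, a2: int, a3: int, threshold: float) -> float:
    """Step-four sketch: remove highlight noise, constrained dilation,
    fill tiny cloud seams, then return the cloud-pixel fraction."""
    # 1. drop cloud regions smaller than a1 pixels (highlight noise)
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    for i, s in enumerate(sizes, start=1):
        if s < a1:
            mask[labels == i] = False

    # 2. dilate with an a2 x a2 structuring element; only accept dilated
    #    pixels at least as bright as the segmentation threshold (assumption
    #    standing in for the brightness/gradient constraint)
    dilated = ndimage.binary_dilation(mask, structure=np.ones((a2, a2)))
    mask = mask | (dilated & (luminance >= threshold))

    # 3. fill non-cloud holes smaller than a3 pixels (tiny cloud seams)
    holes, m = ndimage.label(~mask)
    hole_sizes = ndimage.sum(~mask, holes, range(1, m + 1))
    for i, s in enumerate(hole_sizes, start=1):
        if s < a3:
            mask[holes == i] = True

    return float(mask.mean())   # cloud amount as the cloud-pixel fraction
```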
S12, detecting multiple quality items of the images in the initial image group, wherein the detection of the quality items comprises strip detection, high exposure detection, edge detection and histogram detection, and normalizing the detection results of the quality items into a quality score in the range of 0-100.
And S13, integrating the cloud amount and the quality score to obtain a detection score. The detection score has a value in the range of 0 to 100.
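The patent does not spell out how the cloud amount and the quality score are combined, only that the result lies in 0-100; the snippet below is therefore a hypothetical combination (a weighted average of cloud-freeness and quality, with an assumed weight of 0.5) used for illustration in the later sketches.

```python
def detection_score(cloud_amount: float, quality_score: float,
                    w_cloud: float = 0.5) -> float:
    """Hypothetical S13 combination: weighted average of cloud-freeness
    (100 - cloud amount) and the quality score, both on a 0-100 scale."""
    return w_cloud * (100.0 - cloud_amount) + (1.0 - w_cloud) * quality_score
```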
In an embodiment of the present invention, step S2 includes:
s21 ranks the initial image groups from high to low according to the detection scores, and selects from positive order, that is, from the image with the highest detection score.
The selection principle is that after the current image is added into the first image group, the coverage rate of the union set of the images in the first image group on the target area can be increased.
And S22, until the coverage rate of the union set of the screened images on the target area reaches a coverage threshold value, using the screened images as a first image group. If the part of the current image covering the target area is covered by the union of the images selected previously, the current image is classified into the second image group.
Specifically, the coverage threshold is the maximum coverage of the union of the screened images to the target area, and may be 70%, 80%, or 100%, and in this embodiment, the coverage threshold is 80%.
Steps S21-S22 are explained with an embodiment of the present invention. The 100 images in the initial image group A are sorted from high to low by detection score and written A = {A_1, A_2, ..., A_100}, i.e. A_1 has the highest detection score. Starting from A_1, each image is compared with the target area: A_1 covers part of the target area and is placed in the first image group; the subsequent images are then compared with the target area in turn. Suppose the part of the target area covered by A_35 is L_1, and L_1 is already completely covered by the 34 images A_1, A_2, ..., A_34; then A_35 is judged not to increase the coverage of the target area by the union of the images in the first image group, and A_35 is placed in the second image group. Finally, the first image group is written B = {B_1, B_2, ..., B_m} and the second image group C = {C_1, C_2, ..., C_n}, where m + n = 100 and m < n.
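The greedy split of steps S21-S22 can be sketched with planar geometry operations. The sketch below assumes each image is a dict holding a Shapely 'footprint' polygon and its 'detection_score'; these field names, and the use of Shapely, are assumptions made for illustration.

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

def split_by_coverage(images, target_area: Polygon, coverage_threshold: float = 0.8):
    """S2 sketch: add the highest-scored image whenever it enlarges the covered
    part of the target area, until the coverage threshold is reached."""
    ordered = sorted(images, key=lambda im: im["detection_score"], reverse=True)
    first_group, second_group = [], []
    covered = Polygon()                                  # empty geometry
    target_size = target_area.area
    for im in ordered:
        if covered.area / target_size >= coverage_threshold:
            second_group.append(im)                      # threshold already met
            continue
        gain = im["footprint"].intersection(target_area).difference(covered)
        if gain.area > 0:
            covered = unary_union([covered, gain])
            first_group.append(im)
        else:
            second_group.append(im)                      # adds no new coverage
    return first_group, second_group
```

With the default coverage_threshold of 0.8, this corresponds to the 80% coverage threshold used in this embodiment.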
in an embodiment of the present invention, step S3 includes:
s31 takes a certain image in the first image group as an image to be expanded, the image to be expanded intersects with a union set of the remaining images in the first image group except the image to be expanded, and the image to be expanded is divided into an intersecting region and a non-intersecting region, where the non-intersecting region is the region to be expanded.
S32, the second image group is sorted from high to low according to the detection scores, and the images in the second image group are selected from the positive sequence to expand the area to be expanded until the coverage rate of the selected image union set on the area to be expanded reaches 100%.
S33 expands all the images in the first image group according to steps S31-S32, and combines all the images selected from the second image group with the first image group to form the third image group.
Steps S31-S33 are explained with an embodiment of the present invention. First, take B_1 in the first image group B = {B_1, B_2, ..., B_m} as the image to be expanded. Determine the union U of the images B_2, B_3, ..., B_m, i.e. the area jointly covered by these m-1 images. Each image covers a large area and the images intersect to some degree; since the initial image group was screened with the target area as the object, B_1 and U also intersect to some degree. This intersection divides B_1 into an intersecting region L_2 and a non-intersecting region L_3, where L_3 is the region to be expanded. Then the second image group C is sorted from high to low by detection score and images are screened from C to expand the region L_3, each time comparing the image with the highest detection score with the region to be expanded, until the screened image set C_i^1 completely covers the region to be expanded L_3. Following the same steps, B_2, B_3, ..., B_m are each expanded, and the image sets obtained by screening are denoted C_i^2, C_i^3, ..., C_i^m, where the superscripts 1, 2, ..., m indicate the image set obtained by expanding the corresponding image B_1, B_2, ..., B_m, and the subscript i indicates the number of images in the selected set; i is not a fixed number, i.e. the values of i in C_i^2 and C_i^3 may or may not be the same, and i < n. Finally, duplicates among C_i^1, C_i^2, C_i^3, ..., C_i^m are removed to obtain the expanded image set C_i = C_i^1 ∪ C_i^2 ∪ ... ∪ C_i^m, and C_i is merged with B to form the third image group D.
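A sketch of the first expansion strategy (S31-S33) under the same dict representation as above ('footprint' polygon plus 'detection_score'): for each first-group image, the non-intersecting region is computed against the union of the other first-group footprints and then filled greedily with second-group images in descending detection-score order.

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

def expand_first_group(first_group, second_group):
    """S3 sketch: returns the third image group (first group plus the
    second-group images selected to cover the regions to be expanded)."""
    ordered_second = sorted(second_group, key=lambda im: im["detection_score"],
                            reverse=True)
    selected, seen = [], set()
    for img in first_group:
        other_footprints = [o["footprint"] for o in first_group if o is not img]
        others = unary_union(other_footprints) if other_footprints else Polygon()
        remaining = img["footprint"].difference(others)   # region to be expanded
        for cand in ordered_second:                       # highest score first
            if remaining.is_empty:
                break                                     # region fully covered
            if cand["footprint"].intersection(remaining).area > 0:
                remaining = remaining.difference(cand["footprint"])
                if id(cand) not in seen:                  # de-duplicate selections
                    seen.add(id(cand))
                    selected.append(cand)
    return first_group + selected                         # third image group
```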
In an embodiment of the present invention, step S4 includes:
each image in the initial image group acquired in S41 includes metadata, where the metadata includes information of time, remote sensor, area, and the like of the image, and the information in the metadata is extracted and analyzed to obtain time-series data and sensor data.
S42, calculating the variance of the time series data and the sensor data, carrying out data normalization on the quality score, the variance of the time series data and the variance of the sensor data, and integrating the data into a comprehensive score, wherein the score range of the comprehensive score is 0-100.
S43, constructing the evaluation function based on the cloud amount and the composite score.
S44, applying the evaluation function to the third image group to obtain the evaluation score of each image in the third image group. The evaluation score is the value obtained after normalising the cloud amount and the composite score, is denoted E, and still lies in the range 0-100. First, the cloud amount is normalised to a value between 0 and 100 and denoted X, and the composite score is denoted Y. The evaluation score is calculated as

E = ω1 · X + ω2 · Y

where ω1 is the weight coefficient of the cloud amount, ω2 is the weight coefficient of the composite score, ω1 > ω2, and the two weight coefficients sum to 1. In this embodiment, ω1 is taken as 0.7 and ω2 as 0.3.
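A sketch of S41-S44 under stated assumptions: each image dict carries a 'quality_score', a numeric 'acquisition_time', a numeric 'sensor_id' and a 'cloud_amount'; the way the quality score and the two variances are normalised into the composite score is not fully specified, so a clip-and-average placeholder is used, and the evaluation function is taken literally as E = ω1·X + ω2·Y with ω1 = 0.7 and ω2 = 0.3.

```python
import numpy as np

def add_evaluation_scores(third_group, w_cloud: float = 0.7,
                          w_composite: float = 0.3):
    """Compute composite and evaluation scores for every image in the group."""
    times = np.array([im["acquisition_time"] for im in third_group], dtype=float)
    sensors = np.array([im["sensor_id"] for im in third_group], dtype=float)
    time_var, sensor_var = times.var(), sensors.var()

    for im in third_group:
        # Placeholder normalisation (assumption): clip the three ingredients
        # to 0-100 and average them into the composite score Y.
        composite = float(np.clip([im["quality_score"], time_var, sensor_var],
                                  0, 100).mean())
        cloud_norm = float(np.clip(im["cloud_amount"], 0, 100))   # X
        im["evaluation_score"] = w_cloud * cloud_norm + w_composite * composite
    return third_group
```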
S45, sorting the third image group from low to high according to the evaluation scores of the third image group, and screening from positive according to a screening strategy to obtain a preferred image group. The screening strategy is as follows:
the first step is as follows: sequentially selecting a single image as a pre-screening image from the positive sequence, and taking an image set except the pre-screening image in the third image group as a residual image group;
the second step: setting a reservation rule, wherein the reservation rule is that if the coverage rate of the union set of the images in the remaining image groups to the target area is reduced after the pre-screening image is removed, the corresponding pre-screening image is directly reserved, and if the pre-screening image does not accord with the reservation rule, the third step is executed;
the third step: calculating the average evaluation score of the third image group and the average evaluation score of the rest image groups, and if the average evaluation score of the rest image groups is lower than the average evaluation score of the third image group, reserving the corresponding pre-screening images;
the fourth step: if the evaluation scores of two or more images are the same in the images left after the third step, the images with the same evaluation scores are sorted from low to high according to the cloud content, the first image is reserved, and the rest images are screened out.
Step S45 is explained with an embodiment of the present invention. The third image group D is sorted from low to high by evaluation score and written D = {D_1, D_2, ..., D_x}, and the screening strategy is executed starting from D_1. For example, when D_1 is the pre-screened image, the set D_2, ..., D_x forms the remaining image group. First, D_1 is judged against the retention rule: if, after removing D_1, the coverage of the target area by the union of the images D_2, ..., D_x is reduced, D_1 is judged necessary and retained; otherwise it remains marked as a pre-screened image. The average evaluation score of the third image group and that of the remaining image group are then compared. For example, if D_3 does not satisfy the retention rule and is still marked as a pre-screened image, its remaining image group is D_1, D_2, D_4, ..., D_x, denoted K_3; the average evaluation score of all images in K_3, written Ē(K_3), and the average evaluation score of all images in D, written Ē(D), are calculated, and if Ē(K_3) < Ē(D), D_3 is retained, otherwise D_3 is screened out. After all images of D have been processed in this way, the retained images form the retained image group G = {G_1, G_2, ..., G_y}, y < x. Suppose G_{y-4}, G_{y-2} and G_y have the same evaluation score; they are sorted from low to high by cloud amount, giving the order G_{y-2} < G_y < G_{y-4}. Since G_{y-2} has the lowest cloud amount, G_{y-2} is retained and G_y and G_{y-4} are screened out. The images that are finally retained form the preferred image group.
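The screening strategy of S45 can be sketched as follows, using the same dict representation as above plus the 'footprint' polygon; measuring coverage against the whole third group on every pass is an interpretation of the worked example above, not an explicit statement of the patent.

```python
from shapely.ops import unary_union

def screen_by_evaluation(third_group, target_area):
    """S45 sketch: walk the group from the lowest evaluation score upwards;
    keep an image if removing it would reduce target-area coverage, or if the
    remaining group's average evaluation score would fall below the whole
    group's average; finally break ties by keeping the least cloudy image."""
    ordered = sorted(third_group, key=lambda im: im["evaluation_score"])
    group_avg = sum(im["evaluation_score"] for im in ordered) / len(ordered)
    full_cover = unary_union([im["footprint"] for im in ordered]
                             ).intersection(target_area).area

    kept = []
    for i, img in enumerate(ordered):
        rest = ordered[:i] + ordered[i + 1:]
        if not rest:
            kept.append(img)
            continue
        rest_cover = unary_union([im["footprint"] for im in rest]
                                 ).intersection(target_area).area
        if rest_cover < full_cover:                     # retention rule (2nd step)
            kept.append(img)
            continue
        rest_avg = sum(im["evaluation_score"] for im in rest) / len(rest)
        if rest_avg < group_avg:                        # 3rd step
            kept.append(img)

    preferred = []                                      # 4th step: break ties
    for img in kept:
        ties = [k for k in kept
                if k["evaluation_score"] == img["evaluation_score"]]
        if min(ties, key=lambda im: im["cloud_amount"]) is img:
            preferred.append(img)
    return preferred                                    # preferred image group
```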
In an embodiment of the present invention, step S5 includes:
s51 sets the images in the initial image group except the preferred image group as candidate image groups.
S52 determines the candidate image group based on the second expansion strategy, and the images meeting the requirement of the second expansion strategy are classified into the expanded image group.
S53 expands the preferred image group by the expanded image group to obtain a final image group.
The specific execution procedures of steps S52-S53 are as follows:
(1) respectively comparing one image in the candidate image group with all images in the preferred image group one by one,
(2) if the coverage rate of no image in the preferred image group to the image in the candidate image group reaches 100%, calculating the evaluation score of the image in the candidate image group, and if the evaluation score is higher than the average evaluation score of all images in the preferred image group, classifying the image in the candidate image group into the extended image group;
(3) if the coverage rate of one image in the preferred image group to the image in the candidate image group reaches 100%, calculating the evaluation scores of the two images, and if the evaluation score of the image in the candidate image group is higher than the evaluation score of the image in the preferred image group, classifying the image in the candidate image group into the extended image group;
(4) and if the coverage rate of two or more than two image subgroups in the preferred image group to the image in the candidate image group reaches 100%, comparing the evaluation score of the image in the candidate image group with the evaluation score of each image in the image subgroups, and if the evaluation score of the image in the candidate image group is higher than the lowest evaluation score in the image subgroups, classifying the image in the candidate image group into the expanded image group.
(5) The augmented image set and the preferred image set are merged into a final image set.
Steps S51-S53 are explained with an embodiment of the present invention. The preferred image group is written Z = {Z_1, Z_2, ..., Z_s}, the candidate image group V = {V_1, V_2, ..., V_t}, and Z ∪ V = A. All images in V are compared with the images in Z.
Take V_1 as an example: search Z for images that can completely cover V_1.
If no image in Z can completely cover V_1, calculate the average evaluation score Ē(Z) of the s images Z_1, Z_2, ..., Z_s and the evaluation score E(V_1) of V_1, and compare them; if E(V_1) > Ē(Z), V_1 is placed into the expanded image group.
If exactly one image, for example Z_2, can cover V_1, calculate the evaluation scores E(Z_2) and E(V_1); if E(V_1) > E(Z_2), V_1 is placed into the expanded image group.
If two or more images in Z, for example Z_1, Z_k and Z_l (k, l < s), each completely cover V_1 (their coverage of V_1 is 100%), first calculate the evaluation scores E(Z_1), E(Z_k) and E(Z_l) and sort them by size, then calculate the evaluation score E(V_1); if E(V_1) is higher than the lowest of these evaluation scores, V_1 is placed into the expanded image group.
The same steps are executed for V_2, ..., V_t to obtain the expanded image group, which is added to the preferred image group to obtain the final image group.
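A sketch of the second expansion strategy (S52), under the same dict representation as the earlier sketches; Shapely's covers() is used here as the 100%-coverage test, which is an assumption about how coverage is checked.

```python
def second_expansion(candidate_group, preferred_group):
    """S52 sketch: a candidate joins the expanded image group when its
    evaluation score beats (a) the preferred group's average if no preferred
    image fully covers it, (b) the covering image's score if exactly one
    does, or (c) the lowest score among the covering images if several do."""
    preferred_avg = (sum(p["evaluation_score"] for p in preferred_group)
                     / len(preferred_group))
    expanded = []
    for cand in candidate_group:
        covering = [p for p in preferred_group
                    if p["footprint"].covers(cand["footprint"])]
        if not covering:
            bar = preferred_avg
        elif len(covering) == 1:
            bar = covering[0]["evaluation_score"]
        else:
            bar = min(p["evaluation_score"] for p in covering)
        if cand["evaluation_score"] > bar:
            expanded.append(cand)
    return expanded        # merged with the preferred group to give the final group
```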
The beneficial effects of the invention are as follows. The invention provides an accurate screening method for a remote sensing image group, comprising: selecting a target area, acquiring an initial image group in the target area, and calculating detection scores of the initial image group; dividing the initial image group into a first image group and a second image group according to the detection scores; expanding the first image group based on a first expansion strategy to obtain a third image group; calculating evaluation scores of the third image group, and screening the third image group according to the evaluation scores to obtain a preferred image group; and expanding the preferred image group according to a second expansion strategy to obtain a final image group. The images are evaluated and screened as a whole based on the cloud amount, the composite score and the evaluation score; after the preliminary screening, first expansion, fine screening and second expansion, the resulting final image group contains an appropriate number of high-quality images, which can greatly improve the accuracy of interpretation and other applications in the target area. The screening process is fast and requires little computation, so it can greatly save manpower, material resources and financial resources and achieve the goal of quickly and accurately screening the images a user needs.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are also within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (5)

1. A method for accurately screening a remote sensing image group is characterized by comprising the following steps:
s1, selecting a target area, acquiring an initial image group in the target area, and calculating the detection score of the initial image group;
step S1 includes:
carrying out cloud content detection on the images in the initial image group to obtain cloud content;
detecting quality items of the images in the initial image group to obtain quality scores, wherein the detection of the quality items comprises strip detection, high exposure detection, edge detection and histogram detection;
integrating the cloud amount and the quality score to obtain a detection score, wherein the numerical range of the detection score is 0-100;
s2, dividing the initial image group into a first image group and a second image group according to the detection scores;
step S2 includes:
sorting the initial image groups from high to low according to detection scores, starting screening from a positive sequence until the coverage rate of a union set of the screened images on the target area reaches a coverage threshold, taking the screened images as a first image group, and taking the images except the first image group in the initial image group as a second image group;
s3, expanding the first image group based on a first expansion strategy to obtain a third image group;
step S3 includes:
taking one image in the first image group as an image to be expanded;
determining a region to be expanded in the image to be expanded;
sorting the second image group from high to low according to the detection score, and selecting images from the second image group in that order to expand the region to be expanded until the coverage of the region to be expanded by the union of the selected images from the second image group reaches 100%;
traversing all images of the first image group, combining the selected images in the second image group with the first image group to serve as a third image group;
s4, calculating the evaluation score of the third image group, and screening the third image group according to the evaluation score to obtain a preferred image group;
step S4 includes:
acquiring metadata of the initial image group, and analyzing the metadata to obtain time sequence data and sensor data;
calculating the variance of the time-series data and the sensor data, and integrating the quality score, the variance of the time-series data and the variance of the sensor data into a comprehensive score;
constructing an evaluation function based on the cloud content and the comprehensive score;
calculating the third image group by using the evaluation function to obtain an evaluation score of the third image group;
sorting the third image group from low to high according to the evaluation scores of the third image group, and screening the third image group from the positive sequence according to a screening strategy to obtain a preferred image group;
s5, expanding the preferred image group according to a second expansion strategy to obtain a final image group;
step S5 includes:
taking the images in the initial image group except the preferred image group as a candidate image group;
judging the candidate image group based on the second expansion strategy, and classifying the images meeting the requirements of the second expansion strategy into an expanded image group;
and expanding the optimized image group by utilizing the expanded image group to obtain a final image group.
2. The method for accurately screening a remote sensing image group according to claim 1, wherein the quality score is obtained by integrating the detection results of the individual quality items, and the numerical range of the quality score is 0-100.
3. The method for accurately screening a remote sensing image group according to claim 1, wherein the image to be expanded intersects with a union set of the remaining images of the first image group except the image to be expanded, the image to be expanded is divided into an intersecting region and a non-intersecting region, and the non-intersecting region is the region to be expanded.
4. The method for accurately screening a set of remote sensing images of claim 1, wherein the screening strategy is:
the first step is as follows: sequentially selecting single images as pre-screening images from the positive sequence, and taking the image set except the pre-screening images in the third image group as a residual image group;
the second step: setting a reservation rule, wherein the reservation rule is to reserve the corresponding pre-screened image if the coverage rate of the union set of the images in the remaining image groups to the target area is reduced after the pre-screened image is removed, and execute a third step if the pre-screened image does not accord with the reservation rule;
the third step: calculating the average evaluation score of the third image group and the average evaluation score of the rest image groups, and if the average evaluation score of the rest image groups is lower than the average evaluation score of the third image group, reserving the corresponding pre-screening images;
the fourth step: if the evaluation scores of two or more images are the same in the images to be retained after the second step and the third step are executed, the images with the same evaluation scores are sorted from low to high according to the cloud content, the first image is retained, and the rest images are screened out.
5. The method of claim 1, wherein the determining the candidate image group based on the second augmentation policy is performed, and images meeting the requirements of the second augmentation policy are included in an augmented image group, comprising:
respectively comparing one image in the candidate image group with all images in the preferred image group one by one,
if the coverage rate of no image in the preferred image group to the image in the candidate image group reaches 100%, calculating the evaluation score of the image in the candidate image group, and if the evaluation score is higher than the average evaluation score of all images in the preferred image group, classifying the image in the candidate image group into the extended image group;
if the coverage rate of one image in the preferred image group to the image in the candidate image group reaches 100%, calculating the evaluation scores of the two images, and if the evaluation score of the image in the candidate image group is higher than the evaluation score of the image in the preferred image group, classifying the image in the candidate image group into the extended image group;
if the coverage rate of two or more than two image subgroups in the preferred image group to the image in the candidate image group reaches 100%, comparing the evaluation score of the image in the candidate image group with the evaluation score of each image in the image subgroups, and if the evaluation score of the image in the candidate image group is higher than the lowest evaluation score in the image subgroups, classifying the image in the candidate image group into the extended image group.
CN202210777516.1A 2022-07-04 2022-07-04 Accurate screening method for remote sensing image group Active CN114882379B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210777516.1A CN114882379B (en) 2022-07-04 2022-07-04 Accurate screening method for remote sensing image group
PCT/CN2023/077837 WO2024007598A1 (en) 2022-07-04 2023-02-23 Accurate screening method for remote sensing image group

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210777516.1A CN114882379B (en) 2022-07-04 2022-07-04 Accurate screening method for remote sensing image group

Publications (2)

Publication Number Publication Date
CN114882379A CN114882379A (en) 2022-08-09
CN114882379B true CN114882379B (en) 2022-09-13

Family

ID=82683082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210777516.1A Active CN114882379B (en) 2022-07-04 2022-07-04 Accurate screening method for remote sensing image group

Country Status (2)

Country Link
CN (1) CN114882379B (en)
WO (1) WO2024007598A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114882379B (en) * 2022-07-04 2022-09-13 北京数慧时空信息技术有限公司 Accurate screening method for remote sensing image group

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830312A (en) * 2018-06-01 2018-11-16 苏州中科天启遥感科技有限公司 A kind of integrated learning approach adaptively expanded based on sample
CN110413828A (en) * 2019-07-31 2019-11-05 中国电子科技集团公司第五十四研究所 Remote sensing huge image data auto-screening method based on optimized Genetic Algorithm
WO2022126478A1 (en) * 2020-12-17 2022-06-23 深圳市大疆创新科技有限公司 Image acquisition menthod, apparatus, movable platform, control terminal, and system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102006001681B4 (en) * 2006-01-12 2008-07-10 Wismüller, Axel, Dipl.-Phys. Dr.med. Method and device for displaying multi-channel image data
CN109101894B (en) * 2018-07-19 2019-08-06 山东科技大学 A kind of remote sensing image clouds shadow detection method that ground surface type data are supported
CN111222539B (en) * 2019-11-22 2021-07-30 国际竹藤中心 Method for optimizing and expanding supervision classification samples based on multi-source multi-temporal remote sensing image
CN113297407B (en) * 2021-05-21 2021-11-26 生态环境部卫星环境应用中心 Remote sensing image optimization method and device
CN113327259B (en) * 2021-08-04 2021-10-29 中国科学院空天信息创新研究院 Remote sensing data screening method and system for area coverage
CN113780096B (en) * 2021-08-17 2023-12-01 北京数慧时空信息技术有限公司 Vegetation ground object extraction method based on semi-supervised deep learning
CN113936227A (en) * 2021-12-17 2022-01-14 北京数慧时空信息技术有限公司 Remote sensing image sample migration method
CN114882379B (en) * 2022-07-04 2022-09-13 北京数慧时空信息技术有限公司 Accurate screening method for remote sensing image group

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830312A (en) * 2018-06-01 2018-11-16 苏州中科天启遥感科技有限公司 A kind of integrated learning approach adaptively expanded based on sample
CN110413828A (en) * 2019-07-31 2019-11-05 中国电子科技集团公司第五十四研究所 Remote sensing huge image data auto-screening method based on optimized Genetic Algorithm
WO2022126478A1 (en) * 2020-12-17 2022-06-23 深圳市大疆创新科技有限公司 Image acquisition menthod, apparatus, movable platform, control terminal, and system

Also Published As

Publication number Publication date
WO2024007598A1 (en) 2024-01-11
CN114882379A (en) 2022-08-09

Similar Documents

Publication Publication Date Title
CN110309781B (en) House damage remote sensing identification method based on multi-scale spectrum texture self-adaptive fusion
Liang et al. An efficient forgery detection algorithm for object removal by exemplar-based image inpainting
JP3740065B2 (en) Object extraction device and method based on region feature value matching of region-divided video
CN111738318B (en) Super-large image classification method based on graph neural network
CN109410171B (en) Target significance detection method for rainy image
WO2015192115A1 (en) Systems and methods for automated hierarchical image representation and haze removal
CN114882379B (en) Accurate screening method for remote sensing image group
CN104657980A (en) Improved multi-channel image partitioning algorithm based on Meanshift
Saad et al. Image retrieval based on integration between YCbCr color histogram and texture feature
CN107895162B (en) Image saliency target detection algorithm based on object prior
CN115223056A (en) Multi-scale feature enhancement-based optical remote sensing image ship target detection method
CN111428730B (en) Weak supervision fine-grained object classification method
US20040161152A1 (en) Automatic natural content detection in video information
JP4285644B2 (en) Object identification method, apparatus and program
JP2005352718A (en) Representative image selection device, representative image selection method and representative image selection program
CN115359442A (en) Vehicle weight recognition method based on component representation learning and personalized attribute structure
CN115690410A (en) Semantic segmentation method and system based on feature clustering
JP4030318B2 (en) Map data update device and map data update method
JP3897306B2 (en) Method for supporting extraction of change region between geographic images and program capable of supporting extraction of change region between geographic images
CN114943903A (en) Self-adaptive clustering target detection method for aerial image of unmanned aerial vehicle
JP2005063307A (en) Image identification method and device, object identification method and device, and program
JPH06251147A (en) Video feature processing method
US11157767B2 (en) Image searching method based on feature extraction
TWI385595B (en) Image segmentation method using image region merging algorithm
CN113222005B (en) Automatic updating method for land coverage

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Precise Filtering Method for Remote Sensing Image Groups

Effective date of registration: 20230413

Granted publication date: 20220913

Pledgee: Haidian Beijing science and technology enterprise financing Company limited by guarantee

Pledgor: Beijing Shuhui spatiotemporal information technology Co.,Ltd.

Registration number: Y2023110000158

PE01 Entry into force of the registration of the contract for pledge of patent right