CN111553315A - Satellite image-based poverty prediction model construction and poverty prediction method - Google Patents

Satellite image-based poverty prediction model construction and poverty prediction method

Info

Publication number
CN111553315A
CN111553315A (application CN202010404838.2A)
Authority
CN
China
Prior art keywords
poverty
satellite image
image
daytime
satellite
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010404838.2A
Other languages
Chinese (zh)
Inventor
李旭涛
叶允明
倪烨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Harbin Institute of Technology filed Critical Shenzhen Graduate School Harbin Institute of Technology
Priority to CN202010404838.2A priority Critical patent/CN111553315A/en
Publication of CN111553315A publication Critical patent/CN111553315A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images

Abstract

The invention relates to a method for constructing a poverty prediction model, and for predicting poverty, based on satellite images. The method comprises the following steps: constructing a satellite image-based poverty prediction model; inputting a calibration daytime satellite image into the satellite image-based poverty prediction model; and using the output of the model as poverty prediction data indicating the region corresponding to the calibration daytime satellite image. With this technical scheme, poverty prediction data for different regions can be obtained quickly and accurately.

Description

Satellite image-based poverty prediction model construction and poverty prediction method
Technical Field
The invention relates to the technical field of computer application, in particular to a method for constructing a poverty prediction model and predicting poverty based on satellite images.
Background
Accurately acquiring the economic indices of a geographic area, in particular its degree of poverty, consumes a large amount of manpower and material resources, and even after long-term accumulation, poverty survey data currently exist only for some geographic areas. Because there is a certain correlation between a region's night-light index and its degree of poverty, existing methods predict the poverty of a specific geographic region from its low-light (nighttime) remote sensing image; more specifically, the light intensity of the low-light remote sensing image is taken to indicate the corresponding degree of poverty. However, on one hand, the light intensity of the low-light remote sensing image is single-dimensional information, so the poverty prediction is not accurate; on the other hand, such methods are generally suitable only for predicting the poverty of geographic areas with a large area, such as provincial administrative units of China, and cannot be accurately applied to the poverty prediction of geographic areas with a smaller area.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a method for constructing a poverty prediction model and predicting poverty based on a satellite image.
In a first aspect, the invention provides a method for constructing a poor degree prediction model based on a satellite image, which comprises the following steps:
acquiring calibration poverty survey data and a low-light-level remote sensing image and a daytime satellite image corresponding to the calibration poverty survey data;
acquiring brightness information of the low-light-level remote sensing image;
taking the daytime satellite image as a sample, taking the brightness information as a label, and training a preset neural network model to obtain a daytime image recognizer;
taking the output of the daytime image recognizer as a sample, taking the calibration poverty survey data as a label, and training a preset regression model to obtain a poverty predictor;
and obtaining a satellite image-based poverty prediction model according to the daytime image recognizer and the poverty predictor.
Further, the acquiring of the calibration poverty survey data and the low-light-level remote sensing image and the daytime satellite image corresponding to the calibration poverty survey data includes:
after the calibration poverty survey data are obtained, determining longitude and latitude information corresponding to the calibration poverty survey data;
and acquiring the low-light remote sensing image and the daytime satellite image according to the longitude and latitude information.
Further, the luminance information includes low luminance information indicating low light intensity, medium luminance information indicating medium light intensity, and high luminance information indicating high light intensity; the acquiring of the brightness information of the low-light-level remote sensing image comprises the following steps:
determining the light intensity corresponding to each pixel point of the low-light-level remote sensing image;
and converting the light intensity into the brightness information by a method of fitting mixed Gaussian distribution.
Further, the preset neural network model is a dense convolutional neural network.
Further, the preset regression model is a LASSO regression model.
Further, the cost function of the LASSO regression model is:

$$J(w,b)=\frac{1}{2m}\sum_{i=1}^{m}\left(w^{\top}x^{(i)}+b-y^{(i)}\right)^{2}+\lambda\lVert w\rVert_{1}$$

where m denotes the total number of samples, x^{(i)} denotes the i-th sample, y^{(i)} denotes the label corresponding to the i-th sample, w denotes the weight vector, b denotes the bias, and λ denotes a penalty factor.
In a second aspect, the invention provides a device for constructing a poor degree prediction model based on satellite images, which comprises a memory and a processor; the memory for storing a computer program; the processor is configured to, when executing the computer program, implement the method for constructing a poor degree prediction model based on satellite images as described above.
In a third aspect, the present invention provides a method for predicting poverty based on satellite images, including the following steps:
inputting the calibration daytime satellite image into the satellite image-based poverty prediction model constructed by the satellite image-based poverty prediction model construction method;
the output of the satellite image-based poverty prediction model is used as poverty prediction data indicating the region corresponding to the calibration daytime satellite image.
In a fourth aspect, the present invention provides an apparatus for satellite image-based poverty prediction, the apparatus comprising a memory and a processor; the memory for storing a computer program; the processor is configured to, when executing the computer program, implement the method for satellite image-based poverty prediction as described above.
In a fifth aspect, the present invention provides a computer-readable storage medium, having stored thereon a computer program, which, when being executed by a processor, implements the method for constructing a model for predicting poverty based on satellite images as described above, or implements the method for predicting poverty based on satellite images as described above.
The method, device and storage medium for constructing the satellite image-based poverty prediction model have the following beneficial effects. Because the low-light remote sensing image carries only the one-dimensional feature of brightness information, the accuracy and range of the regional poverty it reflects are limited. The daytime image recognizer, trained on the basis of the low-light remote sensing image, can obtain multi-dimensional feature information (houses, rivers, forests, roads and the like) from the corresponding daytime satellite image, representing poverty-related information more comprehensively and accurately, and the poverty predictor then yields the poverty data of the corresponding region. In this way, poverty prediction data for different regions can be obtained quickly and accurately. In addition, the method is also applicable to predicting the poverty data of regions with smaller areas, and thus has a wider application range.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, a brief description will be given below to the drawings required for the description of the embodiments or the prior art, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a schematic flow chart of a method for constructing a satellite image-based poverty prediction model according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a method for predicting poverty based on satellite images according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, a method for constructing a poor degree prediction model based on a satellite image according to an embodiment of the present invention includes the following steps:
and S11, acquiring calibration poverty survey data and a low-light-level remote sensing image and a daytime satellite image corresponding to the calibration poverty survey data.
Specifically, manually collected poverty survey databases covering parts of Africa are publicly available on the network, and poverty survey data collected manually for a specific area can be acquired from such a database. Since the data are specific to a particular area, the low-light remote sensing image captured at night and the daytime satellite image captured in the daytime of that area can be obtained correspondingly.
And S12, acquiring the brightness information of the low-light-level remote sensing image.
Specifically, the main characteristic of the low-light-level remote sensing image is brightness reflecting illumination intensity, and each pixel point corresponds to corresponding brightness information.
And S13, taking the daytime satellite image as a sample, taking the brightness information as a label, and training a preset neural network model to obtain the daytime image recognizer.
Specifically, the calibration poverty survey data correspond to the low-light remote sensing image and the daytime satellite image. The daytime satellite image is used as the training sample and the brightness information of the low-light remote sensing image as the label to train the preset neural network model; once a certain condition is met, the daytime image recognizer is obtained, whose output indicates information such as houses, rivers, forests and roads in the daytime satellite image.
And S14, taking the output of the daytime image recognizer as a sample, taking the calibration poverty survey data as a label, and training a preset regression model to obtain a poverty predictor.
Specifically, the multi-dimensional feature output of the daytime image recognizer is used as the training sample and the calibration poverty survey data as the label to train the preset regression model; once a certain condition is met, the poverty predictor is obtained. Because it is supervised by the calibration poverty survey data obtained through manual survey, the output of the poverty predictor can indicate the poverty of the area corresponding to the satellite image.
And S15, obtaining a satellite image-based poverty prediction model according to the daytime image recognizer and the poverty predictor.
Specifically, the daytime image recognizer and the poverty predictor can be packaged and combined to serve as a poverty prediction model based on satellite images, the daytime satellite images of a region are input into the poverty prediction model, and the obtained output can indicate poverty prediction data of the region.
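The packaging of the two trained stages into one model can be sketched as follows. This is a minimal illustration only: the stand-in recognizer and predictor functions, their outputs, and all parameter values are hypothetical, since the patent does not specify a programming interface.

```python
# Hypothetical sketch of packaging the two trained stages into one model.
# The stand-in functions and all numeric values are illustrative only.

def daytime_image_recognizer(image):
    # Stand-in: a trained CNN would map a daytime satellite image to a
    # multi-dimensional feature vector (houses, rivers, forests, roads, ...).
    return [0.12, 0.23, -0.34, 0.67, 0.0]

def poverty_predictor(features, w=(0.5, 0.1, -0.2, 0.3, 0.0), b=0.05):
    # Stand-in: a trained LASSO regressor maps the features to a poverty score.
    return sum(wi * xi for wi, xi in zip(w, features)) + b

def poverty_prediction_model(image):
    # The packaged model: recognizer followed by predictor.
    return poverty_predictor(daytime_image_recognizer(image))

score = poverty_prediction_model("daytime_tile.png")
```

Inputting a region's daytime satellite image then yields a single number indicating the region's predicted poverty.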
In this embodiment, because the low-light remote sensing image carries only the one-dimensional feature of brightness information, the accuracy and range of the regional poverty it reflects are limited. The daytime image recognizer can obtain multi-dimensional feature information such as houses, rivers, forests and roads from the corresponding daytime satellite image, representing poverty-related information more comprehensively and accurately, and the poverty predictor then yields the poverty data of the corresponding region. In this way, poverty prediction data for different regions can be obtained quickly and accurately. In addition, the method is also applicable to predicting the poverty data of regions with smaller areas, and thus has a wider application range.
Optionally, the acquiring calibration poverty survey data and the low-light-level remote sensing image and the daytime satellite image corresponding to the calibration poverty survey data includes:
and after the calibration poverty survey data is acquired, determining longitude and latitude information corresponding to the calibration poverty survey data.
And acquiring the low-light remote sensing image and the daytime satellite image according to the longitude and latitude information.
Specifically, a poverty survey database may be obtained first, for example the DHS database for a particular region of Africa. Once the area is determined, its longitude and latitude information can be acquired, and the low-light remote sensing image and daytime satellite image of the area are then acquired from map data according to that longitude and latitude information.
More specifically, global low-light remote sensing images are mainly derived from the DMSP/OLS and VIIRS satellites. The resolution of the DMSP/OLS low-light remote sensing image is about 1 km, i.e. the brightness of one pixel represents the night illumination intensity within a 1 km² range, so the real area covered by one daytime satellite image should also be about 1 km². According to the longitude and latitude information provided by the poverty survey data, the average longitude and latitude of survey points within a close distance range is selected as an aggregation point, and the daytime satellite image is crawled with the aggregation point as its center. The daytime satellite images may be obtained from various map APIs. An image with a specific zoom level is selected; for example, from the Google Maps Static API, an image at zoom level 16 has 1 pixel corresponding to an actual distance of about 2.39 m, and with an image size of 400 × 400 pixels one daytime satellite image can correspond to one pixel on the DMSP/OLS low-light remote sensing image. The brightness value of that pixel is then the night light intensity of the area corresponding to the daytime satellite image.
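The 2.39 m/pixel figure and the roughly 1 km footprint of a 400 × 400 tile can be checked with the standard Web-Mercator ground-resolution formula. This is a sketch under the assumption that the imagery follows the usual 256-pixel slippy-map tile scheme at the equator; the patent itself does not give the formula.

```python
import math

# Web-Mercator ground resolution (metres per pixel) at a given zoom level
# and latitude. Assumes the standard 256-pixel tile scheme; the 2.39 m/pixel
# figure quoted in the text corresponds to zoom 16 at the equator.
EARTH_CIRCUMFERENCE_M = 40_075_016.686  # equatorial circumference

def ground_resolution(zoom, latitude_deg=0.0):
    return (EARTH_CIRCUMFERENCE_M * math.cos(math.radians(latitude_deg))
            / (256 * 2 ** zoom))

mpp = ground_resolution(16)   # metres per pixel at zoom 16, equator
tile_extent_m = 400 * mpp     # a 400 x 400 image covers roughly 1 km,
                              # i.e. about one DMSP/OLS pixel
```

At zoom 16 this gives approximately 2.39 m per pixel, so a 400-pixel image spans roughly 956 m, consistent with the 1 km DMSP/OLS footprint described above.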
It should be noted that the size of the picture should be moderate: too large a picture reduces the speed of neural network training, while too small a picture causes serious loss of picture information, so that the neural network cannot extract the important information.
Optionally, the brightness information includes low brightness information indicating low light intensity, medium brightness information indicating medium light intensity, and high brightness information indicating high light intensity; the acquiring of the brightness information of the low-light-level remote sensing image comprises the following steps:
and determining the light intensity corresponding to each pixel point of the low-light-level remote sensing image.
And converting the light intensity into the brightness information by a method of fitting mixed Gaussian distribution.
Specifically, the brightness range of the DMSP/OLS low-light remote sensing image is 0-63, and the features in the daytime satellite image are not strictly proportional to the light intensity at night, since other factors, such as the living habits of the residents of the area, also have an influence. For example, one area may have far more houses than another area, yet end up with a smaller night-light intensity label, which depends largely on how many residents actually live there. Therefore, the brightness needs to be subdivided, for example into three levels (low, medium and high) used as training labels, so as to reduce the training error caused by label noise.
More specifically, a histogram of the 64 luminance values is drawn and assumed to follow a mixture of 3 normal distributions. The Gaussian mixture model is fitted with the EM (Expectation-Maximization) algorithm to obtain three normal distribution curves, and the intersection points of the curves are taken as dividing points. Thus, for example, luminance values 0-6 can be divided into low intensity, 7-20 into medium intensity, and 21-63 into high intensity. In addition, because the area of poor regions is often larger than that of rich regions, the number of samples with low brightness values is much larger than the number with high brightness values; oversampling and undersampling can be adopted to increase the number of high-brightness samples and reduce the number of low-brightness samples, respectively, so as to reduce the influence of sample imbalance on subsequent recognition.
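The label-assignment step can be sketched as below. The fixed cut points here are only the example values quoted above; in the method itself the dividing points come from the intersections of the three EM-fitted Gaussian curves, not from constants.

```python
# Minimal sketch of mapping DMSP/OLS brightness to a coarse intensity label,
# using the example dividing points quoted in the text (0-6 low, 7-20 medium,
# 21-63 high). In the actual method these cuts are derived from the fitted
# Gaussian mixture, not hard-coded.

def brightness_label(dn, cuts=(6, 20)):
    """Map a DMSP/OLS digital number (0-63) to a coarse intensity label."""
    if not 0 <= dn <= 63:
        raise ValueError("DMSP/OLS brightness must be in 0..63")
    if dn <= cuts[0]:
        return "low"
    if dn <= cuts[1]:
        return "medium"
    return "high"

labels = [brightness_label(dn) for dn in (0, 6, 7, 20, 21, 63)]
# -> ['low', 'low', 'medium', 'medium', 'high', 'high']
```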
Optionally, the preset neural network model is a dense convolutional neural network.
Specifically, the neural network in the present application may employ a structure including, for example, 4 Dense blocks (Dense blocks), 3 Transition layers (Transition layers), and one Classification Layer (Classification Layer). Of course, other suitable configurations may be selected.
Each dense block is composed of a series of combination units, each unit being the sequential combination of a Batch Normalization layer, a ReLU activation function, a 1 × 1 convolution kernel, another Batch Normalization layer, a ReLU activation function, and a 3 × 3 convolution kernel. In a dense block, the input of each combination unit is the concatenation of the outputs of all preceding units, so the feature map sizes must be kept identical by adjusting the sliding stride and padding of the convolution kernels. The 1 × 1 convolution kernel can be regarded as a linear combination among feature map channels, realizing channel information interaction while greatly reducing the sharp dimensional increase caused by the dense structure. Replacing 5 × 5 or 7 × 7 large convolution kernels with concatenations of 3 × 3 small convolution kernels extracts more accurate features and obtains more non-linearity. Each dense block has two specific parameters, bn_size and growth_rate: the 1 × 1 convolution kernel generates a feature map with bn_size × growth_rate channels, and the 3 × 3 convolution kernel outputs growth_rate channels. This controls the number of input channels of the i-th combination unit to be k0 + growth_rate × (i − 1) with growth_rate output channels, which greatly reduces the parameter count of the convolutional neural network.
The batch normalization layer is added to avoid the vanishing-gradient problem: the values before the nonlinear activation function are pulled back to a standard normal distribution with mean 0 and variance 1. By itself this would cost the neural network part of its nonlinearity and reduce its feature extraction capability, so to preserve nonlinearity the batch normalization layer applies a further linear transformation to the normalized values. The specific process is as follows:
first, the mean of the samples is obtained:
Figure BDA0002490899700000071
wherein x isiIs the sample and m is the total number of samples.
The variance is then obtained based on the mean of the samples:
Figure BDA0002490899700000072
then, sample data is adjusted to be normally distributed:
Figure BDA0002490899700000073
finally, the standard normal distribution is scaled and shifted:
Figure BDA0002490899700000081
where γ and β are learnable parameters obtained through training, and the arrow denotes assignment. γ and β are given initial values and are continuously adjusted toward their optimal values during training. Adding these two parameters allows the adjusted standard normal distribution to adapt to the ReLU nonlinear function, recovering part of the original nonlinearity and finding a better balance point between linearity and nonlinearity, so that the neural network has stronger expressive capability and faster convergence.
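The four-step transform above can be sketched in plain Python. The γ and β values here are illustrative stand-ins for the learned parameters.

```python
import math

# Plain-Python sketch of the batch-normalization transform described above:
# compute batch mean and variance, normalize to zero mean / unit variance,
# then scale by gamma and shift by beta (illustrative values).

def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-5):
    m = len(xs)
    mu = sum(xs) / m                                  # batch mean
    var = sum((x - mu) ** 2 for x in xs) / m          # batch variance
    x_hat = [(x - mu) / math.sqrt(var + eps) for x in xs]
    return [gamma * xh + beta for xh in x_hat]

out = batch_norm([1.0, 2.0, 3.0, 4.0])
# with gamma=1, beta=0 the output batch has near-zero mean and unit variance
```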
The classification layer classifies the generated feature map and produces the probability that the image belongs to each class of label. It is composed of a global average pooling layer and a fully connected layer: the global average pooling layer converts a feature map of size x × y × z into a feature vector of dimension z, which is then sent to the fully connected layer to generate the class probabilities. This allows the network to adapt to pictures of different sizes, because although the generated feature maps have different spatial scales, the number of channels is the same, so the dimension of the final feature vector is also the same.
The neural network structure can maximize the flow of information and realize multi-scale feature multiplexing. The specific structure is shown in table 1.
TABLE 1
[Table 1 is provided as an image in the original patent. It lists the layer-by-layer structure of the network: an initial 7 × 7 convolution (stride 2) and 3 × 3 max pooling, four dense blocks with 6, 12, 24 and 16 combination units respectively, three transition layers, and the classification layer.]
Taking a daytime satellite image of size 224 × 224 with 3 channels as an example, the input image first passes through a convolution kernel (size 7 × 7, stride 2) to generate a feature map of size 112 × 112 with 64 channels, and is then downsampled by a max pooling layer (size 3 × 3, stride 2, padding 1) to generate a feature map of size 56 × 56 with 64 channels. It then enters the first dense block (which has 6 of the combination units described above) with growth_rate 32: the input of the first combination unit is the 56 × 56 × 64 feature map, and it outputs a 56 × 56 × 32 feature map. Before the second combination unit, this output is concatenated with the input of the first unit into a 56 × 56 × 96 feature map, which becomes the input of the second unit. By analogy, the input of the last combination unit is a 56 × 56 × 224 feature map and its output is a 56 × 56 × 32 feature map; concatenating it with the inputs of all previous units yields a 56 × 56 × 256 feature map, which is sent to the first transition layer. There, the feature map first passes through a 1 × 1 convolution kernel that compresses the number of channels by half to 128, and then through an average pooling layer (size 2 × 2, stride 2) to generate a 28 × 28 × 128 feature map. Similarly, the feature map passes through the remaining 3 dense blocks (having 12, 24 and 16 combination units respectively) and 2 transition layers; the finally obtained feature map has size 7 × 7 × 1024, a 1024-dimensional feature vector is generated through a global average pooling layer, and the probabilities of the three categories are obtained through a fully connected layer and a Softmax function.
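The channel counts in this walk-through can be verified with a few lines of arithmetic: each unit appends growth_rate channels, and each transition layer halves the total.

```python
# Sanity check of the channel counts traced in the text for the DenseNet-style
# network: growth_rate 32, dense blocks of 6/12/24/16 units, and a transition
# layer after each block except the last that halves the channel count.

def trace_channels(k0=64, growth_rate=32, blocks=(6, 12, 24, 16)):
    channels = k0                               # after 7x7 conv + max pooling
    for i, num_units in enumerate(blocks):
        channels += growth_rate * num_units     # each unit appends growth_rate
        if i < len(blocks) - 1:
            channels //= 2                      # transition layer compresses
    return channels

final = trace_channels()
# first block: 64 + 6*32 = 256, halved to 128 by the transition layer;
# the full trace ends at 1024 channels before global average pooling
```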
In the embodiment, a dense convolutional neural network is adopted to extract multi-scale features, so that important features are prevented from being lost in the convolution process, and higher-level abstract features are kept, so that finally extracted features can accurately predict the poverty.
Optionally, the preset regression model is a LASSO regression model. Of course, other suitable configurations may be selected.
Specifically, after obtaining the daytime image recognizer through training, the output feature vector of the daytime image recognizer can be used as the input of a regression model, and the feature vector represents the features extracted after the daytime satellite image is subjected to convolution and dimensionality reduction, and may include important features such as the number of houses, the area of forests, the length of rivers and the like.
Optionally, the cost function of the LASSO regression model is:

$$J(w,b)=\frac{1}{2m}\sum_{i=1}^{m}\left(w^{\top}x^{(i)}+b-y^{(i)}\right)^{2}+\lambda\lVert w\rVert_{1}$$

where m denotes the total number of samples, x^{(i)} denotes the i-th sample, y^{(i)} denotes the label corresponding to the i-th sample, w denotes the weight vector, b denotes the bias, and λ > 0 denotes a penalty factor.
In particular, the addition of the L1 norm shrinks the coefficients of w, preventing a slight change in x from causing a great change in the predicted value, and thereby reducing the influence of overfitting. At the same time, the L1 norm yields a sparser solution than the L2 norm, i.e. w has more zero components. The feature vector extracted by the daytime image recognizer may have, for example, 4096 components, only some of which play a key role in predicting poverty; the L1 norm solves this component-selection problem.
More specifically, assume the output of the daytime image recognizer is a five-dimensional vector, e.g. [0.12, 0.23, -0.34, 0.67, 0.0]. Each dimension represents a different meaning; the first dimension may be the number of houses and the second may represent rivers or deserts. The model randomly initializes a normally distributed weight vector [w1, w2, w3, w4, w5] and a bias b; with x^{(i)} = [0.12, 0.23, -0.34, 0.67, 0.0], the predicted value can be obtained according to the cost function formula. The model then continually optimizes the weights w and bias b by comparing predicted values with the true values provided by, for example, the DHS survey data, so that the predictions move ever closer to the true values.
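Evaluating the cost function for this five-dimensional example can be sketched as follows. The weight vector, bias, λ, and "true" label below are hypothetical values for illustration; the patent gives only the form of the cost function.

```python
# Sketch of evaluating the LASSO cost J(w, b) for the illustrative
# five-dimensional sample above. All parameter values are hypothetical.

def lasso_cost(samples, labels, w, b, lam):
    m = len(samples)
    mse = sum((sum(wj * xj for wj, xj in zip(w, x)) + b - y) ** 2
              for x, y in zip(samples, labels)) / (2 * m)
    return mse + lam * sum(abs(wj) for wj in w)   # L1 penalty term

x = [[0.12, 0.23, -0.34, 0.67, 0.0]]
y = [0.5]                        # hypothetical poverty label from survey data
w = [0.4, 0.1, -0.2, 0.3, 0.0]   # hypothetical weights
cost = lasso_cost(x, y, w, b=0.05, lam=0.01)
```

Training would then adjust w and b to drive this cost down, with the L1 term pushing unimportant components of w to exactly zero.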
The fitting process adopts cross validation: the original data are divided equally into N groups; each group in turn is used as the test set with the remaining N − 1 groups as the training set, and the average classification accuracy over the N tests is used as the evaluation index of the model. The sets are divided by stratified sampling, ensuring that the proportions of the various sample classes in the training and test sets are approximately equal, and avoiding the situation where random sampling gives the training and test sets different distributions and so harms model training. In this method, an outer cross validation fits the distribution result while an inner cross validation selects the optimal hyper-parameters, taking both training error and generalization error into account and further reducing the influence of overfitting.
For example, 5-fold cross validation is used: the total data are divided into 5 parts, 4 of which serve as the training set and 1 as the test set used to evaluate the training result. Rotating through the data sets in turn yields five models, and the average of the five models is finally taken as the prediction result.
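The 5-fold rotation can be sketched with plain index arithmetic. This is a minimal round-robin split for illustration; it omits the stratified sampling described above, which would additionally balance class proportions across folds.

```python
# Plain sketch of k-fold rotation: each fold serves once as the test set
# while the other k-1 folds form the training set. Stratification (balancing
# class proportions per fold, as described in the text) is omitted here.

def k_fold_indices(n_samples, k=5):
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(k_fold_indices(10, k=5))
# 5 splits; every sample index appears in exactly one test fold
```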
In another embodiment of the invention, an apparatus for constructing a satellite image-based poverty prediction model includes a memory and a processor; the memory for storing a computer program; the processor is configured to, when executing the computer program, implement the method for constructing a poor degree prediction model based on satellite images as described above.
As shown in fig. 2, a method for predicting poverty based on satellite images according to an embodiment of the present invention includes the following steps:
S21, inputting the calibration daytime satellite image into the satellite image-based poverty prediction model constructed by the satellite image-based poverty prediction model construction method.
And S22, using the output of the satellite image-based poverty prediction model as poverty prediction data indicating the region corresponding to the calibration daytime satellite image.
Specifically, for a particular region, a daytime satellite image of the region may be acquired from a map database, and the daytime satellite image may be input into the poverty prediction model, the output of which may indicate poverty data for the particular region. The efficiency and the accuracy of predicting the poverty data of different regions are effectively improved.
In this embodiment, the rich feature information of the daytime satellite image is combined with the ability of the low-light remote sensing image to reflect economic information, in particular poverty. This overcomes two defects (the feature information of the daytime satellite image cannot by itself clearly express economic indices indicating poverty, and the low-light remote sensing image carries only a single feature), so that poverty prediction data for different areas can be obtained quickly and accurately. The method is also suitable for predicting the poverty data of areas with smaller areas and thus has a wider application range.
In another embodiment of the present invention, an apparatus for satellite image-based poverty prediction includes a memory and a processor; the memory for storing a computer program; the processor is configured to, when executing the computer program, implement the method for satellite image-based poverty prediction as described above.
In another embodiment of the present invention, a computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the method for constructing a satellite image-based poverty prediction model described above, or the method for satellite image-based poverty prediction described above.
The reader should understand that in the description of this specification, reference to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, such schematic expressions do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples, and those skilled in the art may combine features of different embodiments or examples described in this specification provided they do not contradict one another.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A method for constructing a satellite image-based poverty prediction model, characterized by comprising the following steps:
acquiring calibration poverty survey data and a low-light-level remote sensing image and a daytime satellite image corresponding to the calibration poverty survey data;
acquiring brightness information of the low-light-level remote sensing image;
taking the daytime satellite image as a sample, taking the brightness information as a label, and training a preset neural network model to obtain a daytime image recognizer;
taking the output of the daytime image recognizer as a sample, taking the calibration poverty survey data as a label, and training a preset regression model to obtain a poverty predictor;
and obtaining a satellite image-based poverty prediction model according to the daytime image recognizer and the poverty predictor.
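The five steps of claim 1 can be sketched end to end. This is a hedged illustration built entirely on synthetic stand-ins: random feature vectors replace daytime images, a logistic unit trained by gradient descent replaces the dense CNN recognizer, and ordinary least squares replaces the LASSO regressor; all names, dimensions, and data are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Steps 1-2 (assumed done): per-region data, i.e. a daytime image, the
# brightness of the matching night-light image, and poverty survey data.
# Random vectors stand in for daytime images in this sketch.
n_regions, n_features = 200, 8
day_feats = rng.normal(size=(n_regions, n_features))
brightness = (day_feats[:, 0] > 0).astype(float)   # stand-in night-light label
poverty = 2.0 - 1.5 * day_feats[:, 0] + 0.1 * rng.normal(size=n_regions)

# Step 3: train the daytime image recognizer on brightness labels.
# A logistic unit trained by gradient descent stands in for the dense CNN.
w_cls = np.zeros(n_features)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(day_feats @ w_cls)))
    w_cls -= 0.1 * day_feats.T @ (p - brightness) / n_regions

recognizer_out = 1.0 / (1.0 + np.exp(-(day_feats @ w_cls)))

# Step 4: train the poverty predictor on the recognizer's output, with the
# calibration survey data as labels (least squares standing in for LASSO).
X = np.column_stack([recognizer_out, np.ones(n_regions)])
coef, *_ = np.linalg.lstsq(X, poverty, rcond=None)

# Step 5: the final model composes recognizer and predictor.
def poverty_model(features):
    score = 1.0 / (1.0 + np.exp(-(features @ w_cls)))
    return score * coef[0] + coef[1]

print(poverty_model(day_feats[:3]).shape)
```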
2. The method for constructing the satellite image-based poverty prediction model according to claim 1, wherein the acquiring of the calibration poverty survey data and the low-light-level remote sensing image and the daytime satellite image corresponding to the calibration poverty survey data comprises:
after the calibration poverty survey data are obtained, determining longitude and latitude information corresponding to the calibration poverty survey data;
and acquiring the low-light remote sensing image and the daytime satellite image according to the longitude and latitude information.
3. The method of claim 1, wherein the brightness information includes low-brightness information indicating low light intensity, medium-brightness information indicating medium light intensity, and high-brightness information indicating high light intensity; and the acquiring of the brightness information of the low-light-level remote sensing image comprises the following steps:
determining the light intensity corresponding to each pixel point of the low-light-level remote sensing image;
and converting the light intensity into the brightness information by fitting a Gaussian mixture distribution.
4. The method for constructing the satellite image-based poverty prediction model according to any one of claims 1 to 3, wherein the preset neural network model is a dense convolutional neural network.
5. The method for constructing a satellite image-based poverty prediction model according to any one of claims 1 to 3, wherein the preset regression model is a LASSO regression model.
6. The method of claim 5, wherein the cost function of the LASSO regression model is:
J(w, b) = (1/(2m)) * Σ_{i=1}^{m} (w^T x^(i) + b - y^(i))^2 + λ‖w‖_1
where m denotes the total number of samples, x^(i) denotes the i-th sample, y^(i) denotes the label corresponding to the i-th sample, w denotes the weight, b denotes the bias, and λ denotes the penalty factor.
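The cost function of claim 6 can be checked numerically. The sketch below evaluates it on a tiny synthetic example chosen so that the squared-error term vanishes, leaving only the L1 penalty; the data and values are illustrative, not from the patent.

```python
import numpy as np

def lasso_cost(X, y, w, b, lam):
    """J(w, b) = (1/(2m)) * sum_i (w^T x_i + b - y_i)^2 + lam * ||w||_1"""
    m = X.shape[0]
    residual = X @ w + b - y
    return (residual @ residual) / (2 * m) + lam * np.abs(w).sum()

# Tiny example: 3 samples, 2 features; w fits the data exactly,
# so only the penalty term lam * (|1| + |2|) remains.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 2.0])
print(lasso_cost(X, y, w, b=0.0, lam=0.1))
```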
7. A device for constructing a satellite image-based poverty prediction model, characterized by comprising a memory and a processor; the memory for storing a computer program; the processor, when executing the computer program, is configured to implement the method for constructing a satellite image-based poverty prediction model according to any one of claims 1 to 6.
8. A method for predicting poverty based on satellite images is characterized by comprising the following steps:
inputting a calibration daytime satellite image into a satellite image-based poverty prediction model constructed by the satellite image-based poverty prediction model construction method according to any one of claims 1 to 6;
the output of the satellite image-based poverty prediction model is used as poverty prediction data indicative of the region corresponding to the nominal daytime satellite image.
9. An apparatus for predicting poverty based on satellite images, comprising a memory and a processor; the memory for storing a computer program; the processor, when executing the computer program, is configured to implement the method for satellite image based poverty prediction according to claim 8.
10. A computer-readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, implements the method for constructing a model for satellite image-based poverty prediction according to any one of claims 1 to 6, or implements the method for satellite image-based poverty prediction according to claim 8.
CN202010404838.2A 2020-05-14 2020-05-14 Satellite image-based poverty prediction model construction and poverty prediction method Pending CN111553315A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010404838.2A CN111553315A (en) 2020-05-14 2020-05-14 Satellite image-based poverty prediction model construction and poverty prediction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010404838.2A CN111553315A (en) 2020-05-14 2020-05-14 Satellite image-based poverty prediction model construction and poverty prediction method

Publications (1)

Publication Number Publication Date
CN111553315A 2020-08-18

Family

ID=72000709

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010404838.2A Pending CN111553315A (en) 2020-05-14 2020-05-14 Satellite image-based poverty prediction model construction and poverty prediction method

Country Status (1)

Country Link
CN (1) CN111553315A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734035A (en) * 2020-12-31 2021-04-30 成都佳华物链云科技有限公司 Data processing method and device and readable storage medium
CN112734035B (en) * 2020-12-31 2023-10-27 成都佳华物链云科技有限公司 Data processing method and device and readable storage medium

Similar Documents

Publication Publication Date Title
CN106909924B (en) Remote sensing image rapid retrieval method based on depth significance
CN110533631B (en) SAR image change detection method based on pyramid pooling twin network
CN112052755B (en) Semantic convolution hyperspectral image classification method based on multipath attention mechanism
CN110555841B (en) SAR image change detection method based on self-attention image fusion and DEC
CN112634292A (en) Asphalt pavement crack image segmentation method based on deep convolutional neural network
CN111259853A (en) High-resolution remote sensing image change detection method, system and device
CN113095409B (en) Hyperspectral image classification method based on attention mechanism and weight sharing
CN110245683B (en) Residual error relation network construction method for less-sample target identification and application
CN114926693A (en) SAR image small sample identification method and device based on weighted distance
CN111310623B (en) Method for analyzing debris flow sensitivity map based on remote sensing data and machine learning
CN115546656A (en) Remote sensing image breeding area extraction method based on deep learning
CN111553315A (en) Satellite image-based poverty prediction model construction and poverty prediction method
CN114241332A (en) Deep learning-based solid waste field identification method and device and storage medium
CN113284093A (en) Satellite image cloud detection method based on improved D-LinkNet
CN116188995B (en) Remote sensing image feature extraction model training method, retrieval method and device
CN116994071A (en) Multispectral laser radar point cloud classification method based on self-adaptive spectrum residual error
CN112818777A (en) Remote sensing image target detection method based on dense connection and feature enhancement
CN116611725A (en) Land type identification method and device based on green ecological index
CN115497006B (en) Urban remote sensing image change depth monitoring method and system based on dynamic mixing strategy
CN115147727A (en) Method and system for extracting impervious surface of remote sensing image
CN114821324A (en) Crop identification method based on selective learning and playback remote sensing image
CN111127393B (en) Sample making method and system for radar image change detection, storage medium and equipment
CN117808650B (en) Precipitation prediction method based on Transform-Flownet and R-FPN
Zhang et al. A multi-task architecture for remote sensing by joint scene classification and image quality assessment
CN117333704A (en) Small sample materialization experiment equipment state detection method based on transfer learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200818