CN110263717B - Method for determining land utilization category of street view image - Google Patents

Method for determining land utilization category of street view image

Info

Publication number
CN110263717B
CN110263717B (application CN201910539888.9A)
Authority
CN
China
Prior art keywords
street view
image
category
land
sampling point
Prior art date
Legal status: Active
Application number
CN201910539888.9A
Other languages
Chinese (zh)
Other versions
CN110263717A (en)
Inventor
葛咏
赵维恒
贾远信
Current Assignee
Institute of Geographic Sciences and Natural Resources of CAS
Original Assignee
Institute of Geographic Sciences and Natural Resources of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Geographic Sciences and Natural Resources of CAS
Priority claimed from CN201910539888.9A
Publication of CN110263717A
Application granted
Publication of CN110263717B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/16Real estate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Data Mining & Analysis (AREA)
  • Primary Health Care (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Human Resources & Organizations (AREA)
  • General Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Strategic Management (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for determining land use categories by fusing street view images. First, fine ground-feature category information is extracted at street view image sampling points with a deep-learning convolutional neural network; in parallel, the remote sensing image is preprocessed and a land cover map is produced by supervised classification. Second, the categories of neighboring pixels are inferred from the spectral, texture, shape and geographic-distribution information of the pixels containing the street view sampling points. Finally, the pixel classification information is fused with the land cover map to obtain a fine, multi-class land use result. Starting from pixel-based remote sensing classification and incorporating fine street view information, the method achieves high classification accuracy.

Description

Method for determining land utilization category of street view image
Technical Field
The invention relates to a method for determining land use categories and belongs to the field of geospatial information.
Background
Land use refers to all activities through which humans purposefully develop and exploit land resources, and land use information plays a vital role in urban planning, urban environmental monitoring, urban transportation and related fields (Liu X, He J, Yao Y, et al.). Extracting more accurate land use information is therefore an important problem and challenge. As a wide-area, non-contact data source, remote sensing imagery is an important means of extracting land use information (Zhao Yingshi). Using high-resolution remote sensing images acquired by resource satellites, together with professional remote sensing image processing software and scientific information-extraction methods, the corresponding resource information can be obtained quickly and accurately.
However, the accuracy of remote sensing imagery itself is limiting: medium- and low-spatial-resolution images offer high spectral and temporal resolution but low spatial resolution and weak ability to discriminate ground features, while high-spatial-resolution images discriminate ground features well but have sparse historical archives and poorer spectral resolution. Extracting high-precision ground-feature information from remote sensing imagery alone is therefore difficult. Many researchers have used auxiliary geospatial information to improve land use classification accuracy. In fusing remote sensing images with geospatial data, Hu et al. used open social data to segment remote sensing images into block areas and kernel density estimation to model ground-feature distribution, improving land use classification accuracy (Hu T, Yang J, Li X, et al.; Zhang Y, Li Q, Huang H, et al.); Chen et al. used socially sensed data, improving urban land use mapping by extracting urban green-space functional areas from its spatial and textual information (Chen W, Huang H, Dong J, et al.) and by using mobile phone location data (Jia Y, Ge Y, et al.).
Street view images record the appearance of a city at a particular moment in digital form. Because urban street view data carries geographic tags and is openly available, this finely detailed, uniformly distributed big geographic image data has become a research hotspot in GIS and computer vision (Betula grahami).
Existing land use classification research that incorporates street view images focuses mainly on the street-block scale: remote sensing images are divided into parcels using road network data, and the parcels are classified by the distribution density of street view points, most commonly with kernel density methods, using spatial analysis of street view sampling points to assist remote sensing land use classification (Rui Cao, Jia Song, et al.; Jian Kang, Marco, et al.). However, these methods are limited to the block scale and do not classify land use from individual pixel information; in areas where block sample points are sparse or unevenly distributed, kernel density regression can introduce large errors.
The invention therefore proposes a pixel-based method for fusing street view information into land use classification, which remedies these shortcomings, uses a small number of fine samples to derive many more sample points, and obtains a finer, pixel-based land use classification result.
The content of the invention is as follows:
The technical problem solved by the invention is: overcoming the defects of the prior art, a method for determining land use categories from street view images is provided, a theoretical model of street-view-based land use classification is constructed, land use is classified into fine categories, and land use classification accuracy is improved.
First, for each collected street view picture, the fine ground-feature category represented in the image is extracted by deep learning. Second, the remote sensing image is preprocessed and the land cover is classified, dividing the region into several broad classes according to a primary classification system. Then the street view category information is fused into the land use classification: a window is built around each sampling point, and pixels similar to the sample point are marked using features such as pixel spectrum, texture and shape; the procedure is repeated on the marked pixels to propagate to further pixels until enough category information has been obtained. Finally, the land use condition is classified.
The technical scheme of the invention is as follows: a method for determining land use categories by fusing street view images comprises the following steps:
Step 1: process the street view imagery. Edit the road vector data of the street view acquisition area, extract the road vectors that have street view coverage, and randomly generate street view sampling points on these road vectors with GIS software. Obtain the street view images at each sampling point from the sampling points' spatial positions, then preprocess the obtained images to extract the bottom region of each street view image.
Step 2: perform semantic segmentation and category judgment on the bottom-region street view images from step 1 with a convolutional neural network. For the street view images on the left and right sides of the road at each sampling point, the network judges the ground-feature category in each direction; for the street view image along the road direction, the network judges the road category. The resulting category judgment is assigned to the street view sampling point, and the sampling point is then spatially displaced so that it effectively represents the pixel characteristics of its true position.
Step 3: to judge the land cover category around the street view sampling points, acquire a remote sensing image of the street view acquisition area, preprocess it, and add relevant auxiliary information, including NDVI (normalized difference vegetation index), NDWI (normalized difference water index) and DEM (digital elevation model) data, for classifying the land cover of the acquisition area.
Step 4: classify the land cover of the remote sensing image of the acquisition area. Using supervised classification, ground features are divided into 5 categories (vegetation, construction land, water, unused land and cultivated land) to obtain a remote sensing land cover classification result; post-classification processing is applied to improve the land cover classification accuracy.
Step 5: combine the remote sensing land cover classification result with the convolutional neural network's street view category judgments. The street view category judgments are added to the land use classification model separately for the construction-land and vegetation categories of the land cover result from step 4, yielding a more refined land use category.
In step 2, the spatial displacement of the sampling points is implemented as follows: the left-side and right-side sampling points are displaced h meters perpendicular to the road, to the left and to the right respectively, and the longitude and latitude and the ground-feature type of each displaced sample point are recorded. Here h is the displacement distance, whose value is determined by the resolutions of the remote sensing image and the street view image. The coordinates of the displaced point are:
X′ = x + h·cosθ
Y′ = y + h·sinθ
where x is the longitude of the sampling point, y is its latitude, θ is the angle between the sampling point's road and the horizontal, and h is the displacement length.
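As a quick illustration, the displacement above can be sketched in a few lines of Python. This is an illustrative sketch, not from the patent; it assumes projected metric coordinates so that h in meters can be added directly (for raw longitude/latitude the offset would first need conversion to degrees):

```python
import math

def shift_sample_point(x, y, theta_deg, h):
    # X' = x + h*cos(theta), Y' = y + h*sin(theta); theta is the direction
    # of the push. For a shift perpendicular to a road at angle alpha,
    # use theta = alpha + 90 (left side) or theta = alpha - 90 (right side).
    theta = math.radians(theta_deg)
    return x + h * math.cos(theta), y + h * math.sin(theta)

# Push a sampling point 5 m to each side of a road running at 30 degrees.
left = shift_sample_point(500000.0, 4300000.0, 30 + 90, 5.0)
right = shift_sample_point(500000.0, 4300000.0, 30 - 90, 5.0)
```

The coordinate values here are hypothetical; in practice x and y come from the GIS road layer.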
In step 5, the land use classification model is realized as follows:
(1) Set a sampling window and search the whole remote sensing image pixel by pixel, judging whether each pixel's neighborhood contains a street view sampling point.
(2) If street view sampling points are present, compute each point's distance weight with respect to the central pixel by the inverse distance weighting method, defined as
W_i = h_i^(-P) / Σ_{j=1}^{n} h_j^(-P)
where W_i is the distance weight of the i-th street view sampling point, n is the number of street view sample points inside the search window, h_i is the distance from the i-th point to the central pixel, and P is a parameter controlling how the weight decays with distance.
(3) Compute, layer by layer, the absolute difference between the central pixel's attribute value and the attribute value of the pixel containing the street view sampling point, and average the absolute differences over the layers:
M = (z/n) Σ_{k=1}^{n} |a_k − b_k|
where M is the averaged attribute difference between the central pixel and a sampling pixel, n is the number of layers (3 spectral layers plus the texture-feature and shape-feature layers), a_k and b_k are the attribute values of the central pixel and the sampling pixel in layer k, and z is an amplification factor.
Finally, multiply each sampling point's M value by its corresponding weight and accumulate the proportion per category:
C_i = Σ_{l=1}^{L} M_l · W_l
where C_i is the coefficient of category-i ground features in the central pixel and L is the number of category-i samples among the n samples in the pixel's neighborhood. Comparing the C_i values of each category, the category with the largest C_i is taken as the pixel's land use category. The land classification model algorithm is repeated until every pixel has obtained a unique land use category.
In the step 2: the specific process of performing semantic segmentation and category judgment on the street view image by using the convolutional neural network is as follows:
(1) performing semantic segmentation on the street view image, segmenting pixels according to different semantic meanings expressed in the image to obtain labels of different ground objects in the image, and keeping a large range and an interested area as the labels to perform category judgment;
(2) and (4) judging the street view image category, namely, adopting a convolutional neural network as a street view image classification model, selecting an equal number of street view images for each category according to a set classification system, training the selected model by using the interesting region label obtained in the previous step, and obtaining the category attributes of other photos by using the trained model parameters.
Compared with the prior art, the advantages of the invention are:
(1) Using street view imagery, land use categories can be judged more accurately from the human observation angle, and sample selection is more accurate than in traditional supervised classification.
(2) Land use classification based on pixel attribute features yields a finer and more accurate result than land use classification at the block scale.
Drawings
FIG. 1 is a main flow diagram of the present invention;
FIG. 2 is a schematic diagram of pixel-level land use classification fused with street view spatial information according to the invention, which improves pixel classification accuracy.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and examples.
As shown in FIG. 1, a method for determining land use categories fused with street view information according to the invention includes the following steps:
step 1: streetscape information extraction
Selecting street view sampling points: edit the road vector data and extract the road vectors with street view coverage; randomly generate sampling points on the roads with GIS software. Because image quality depends on the road orientation at each sampling point, the viewing direction is computed for each point, giving four directions of 0°, 270°, 180° and 90°. At each sampling point, one street view image on each of the left and right sides of the road is selected as a ground-feature discrimination sample, and one image along the road is selected as a road discrimination sample.
Preprocessing the street view images: acquire the street view images at each sampling point, 3 frames per point. To reduce the impact on classification accuracy of distant objects whose range cannot be estimated, the images on the left and right of the road are cropped to the 320 × 320 pixel region in their lower part. All images are then semantically annotated, delineating the ground-feature semantics to be extracted, such as the outlines of roads and houses.
The annotated label files are semantically segmented and classified by the convolutional neural network: the left and right pictures are classified into fine ground-feature types, and the middle road portion is used to classify the road class.
Because the acquired sampling point coordinates all lie on the road centerline, the left and right sampling points must be displaced to accurately express the ground-feature categories on the two sides of the road. The distance from the lower-middle area of the image to the road is estimated at roughly 5 m, so the sampling points are pushed 5 m to the left and right of the road, perpendicular to it, and the longitude, latitude and ground-feature type of each displaced sample point are recorded.
X′ = x + h·cosθ
Y′ = y + h·sinθ
where θ is the angle between the sampling point's road and the horizontal, and h is the displacement length.
Step 2: remote sensing land cover classification
Remote sensing image preprocessing: acquire the remote sensing images of the study area and preprocess them (radiometric calibration, atmospheric correction and geometric correction); compute the NDVI, NDWI and NDBI values of the images, and derive elevation and slope information from the study area's DEM data. The spectral information and these additional layers are fused for image classification.
The land cover is divided into 5 categories (vegetation, construction land, water, unused land and cultivated land) in classification software, post-classification processing is applied to the results, and the classification accuracy is improved. The classification accuracy of the samples is preliminarily evaluated through validation.
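The patent relies on standard supervised classification inside remote sensing software. As a stand-in, a minimal nearest-centroid classifier over per-pixel feature vectors (spectral bands plus indices such as NDVI) conveys the idea; the feature layout and class samples here are invented for illustration:

```python
def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red); NDWI and NDBI are analogous band ratios.
    return (nir - red) / (nir + red)

def nearest_centroid(train, pixel):
    # train: {land-cover class: list of feature vectors of training pixels}.
    # Assign the class whose mean feature vector is closest to the pixel.
    best, best_d = None, float("inf")
    for cls, vecs in train.items():
        n = len(vecs)
        mean = [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]
        d = sum((p - m) ** 2 for p, m in zip(pixel, mean))
        if d < best_d:
            best, best_d = cls, d
    return best
```

Production systems would instead use a maximum-likelihood or machine-learning classifier trained on many bands, but the flow (train per class, assign each pixel) is the same.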
And step 3: street view information-integrated land utilization classification
70% of the street view sample points are selected as training samples. For vegetation and construction land, the fine classification information extracted from the street view imagery is added to the land cover classification result to obtain finer land use categories: cement roads and asphalt roads from the street view road-category information are added to the construction-land cover category for refined classification, while soil roads are added to the unused-land cover category. Finally, the classification results are fused to obtain the land use map.
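The refinement rule in the paragraph above reduces to a small lookup. A sketch only; the class-name strings are assumptions:

```python
# Street-view road classes and the coarse land-cover class they refine.
ROAD_REFINEMENT = {
    "cement road":  "construction land",
    "asphalt road": "construction land",
    "soil road":    "unused land",
}

def refine(cover_class, street_view_class):
    # Replace the coarse land-cover label with the finer street-view road
    # label only when that road type belongs to the given cover class.
    if ROAD_REFINEMENT.get(street_view_class) == cover_class:
        return street_view_class
    return cover_class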
The model sets a sampling window and searches the whole remote sensing image pixel by pixel, judging whether each pixel's neighborhood contains street view sampling points.
If street view sampling points are present, the distance weight of each point with respect to the central pixel is computed by the inverse distance weighting method, defined as:
W_i = h_i^(-P) / Σ_{j=1}^{n} h_j^(-P)
where W_i is the distance weight of the i-th street view sampling point, n is the number of street view sample points inside the search window, h_i is the distance from the i-th point to the central pixel, and P is a parameter controlling how the weight decays with distance.
The absolute difference between the central pixel's attribute value and that of the pixel containing the street view sampling point is computed layer by layer, and the absolute differences are averaged over the layers:
M = (z/n) Σ_{k=1}^{n} |a_k − b_k|
where M is the averaged attribute difference between the central pixel and a sampling pixel, n is the number of layers (3 spectral layers plus the texture-feature and shape-feature layers), a_k and b_k are the attribute values of the central pixel and the sampling pixel in layer k, and z is an amplification factor.
Finally, each sampling point's M value is multiplied by its corresponding weight and the proportion is accumulated per category:
C_i = Σ_{l=1}^{L} M_l · W_l
where C_i is the coefficient of category-i ground features in the central pixel and L is the number of category-i samples among the n samples in the pixel's neighborhood. Comparing the C_i values of each category, the category with the largest C_i is the pixel's land use category. The land classification model algorithm is repeated until every pixel has obtained a unique land use category.
As shown in FIG. 2, the central pixel is a pixel whose category is to be determined, surrounded by ground-feature category points obtained from street view picture classification. The attribute values of the central pixel and of the layers at the street-view-derived feature points are compared in turn by the method above, and the land use category of the central pixel is determined from the attribute differences combined with the distance weights.

Claims (3)

1. A method for determining land use categories by fusing street view images, characterized by comprising the following steps:
Step 1: process the street view imagery. Edit the road vector data of the street view acquisition area, extract the road vectors that have street view coverage, and randomly generate street view sampling points on these road vectors with GIS software. Obtain the street view images at each sampling point from the sampling points' spatial positions, then preprocess the obtained images to extract the bottom region of each street view image.
Step 2: perform semantic segmentation and category judgment on the bottom-region street view images from step 1 with a convolutional neural network. For the street view images on the left and right sides of the road at each sampling point, the network judges the ground-feature category in each direction; for the street view image along the road direction, the network judges the road category. The resulting category judgment is assigned to the street view sampling point, and the sampling point is then spatially displaced so that it effectively represents the pixel characteristics of its true position.
Step 3: to judge the land cover category around the street view sampling points, acquire a remote sensing image of the street view acquisition area, preprocess it, and add relevant auxiliary information, including NDVI (normalized difference vegetation index), NDWI (normalized difference water index) and DEM (digital elevation model) data, for classifying the land cover of the acquisition area.
Step 4: classify the land cover of the remote sensing image of the acquisition area. Using supervised classification, ground features are divided into 5 categories (vegetation, construction land, water, unused land and cultivated land) to obtain a remote sensing land cover classification result; post-classification processing is applied to improve the land cover classification accuracy.
Step 5: combine the remote sensing land cover classification result with the convolutional neural network's street view category judgments; the street view category judgments are added to the land use classification model separately for the construction-land and vegetation categories of the land cover result from step 4, yielding a more refined land use category.
in the step 5, the land use classification model is specifically realized as follows:
(1) setting a sampling window, carrying out pixel-by-pixel search on the whole remote sensing image, and judging whether each pixel neighborhood contains a streetscape sampling point;
(2) if the street view sampling points are included, judging the distance weight of each street view sampling point to the central pixel by an inverse distance weight method, wherein the definition of the inverse distance weight method is as follows:
Figure FDA0002920516240000011
wherein WiRepresenting the weight of the distance of the ith street view sampling point, n representing the number of the street view sample points existing in the search window range, hiThe distance from the ith point to the central pixel is represented, and P represents a parameter of the control weight changing along with the distance;
(3) calculating the absolute value of the difference value between the attribute value of the central pixel of each layer and the attribute value of the pixel where the street view sampling point is located one by one, and averaging the absolute values of the difference values of the plurality of layers:
Figure FDA0002920516240000021
wherein M represents the sum of difference values of central pixels and sampling pixels of each image layer, M represents the number of image layers, n represents the number of image layers, and comprises 3 spectral image layers, texture feature image layers and shape feature image layers, and z is an amplification factor;
and finally, the M value of each sampling point is multiplied by its corresponding weight and the per-class proportion is calculated:
ci = Mi * Wi
Ci = Σ(l=1..L) cl / Σ(j=1..n) cj
wherein Ci represents the coefficient of ground-object class i in the central pixel, and L represents the number of class-i ground features among the n samples in the pixel neighborhood; the Ci values of the pixel's classes are compared, and the class with the largest Ci value is taken as the pixel's land use category; the land classification model algorithm is repeated until every pixel has been assigned a unique land use category.
2. The method for determining land use category fused with street view images according to claim 1, wherein in step 2, the spatial position push of the sampling points is implemented as follows: the sampling points are displaced to the left and right sides, pushed h meters perpendicular to the road on each side, and the longitude, latitude, and ground-feature type of each pushed sample point are recorded, where h is the moving distance, its specific value determined by the resolutions of the remote sensing image and the street view image; the center coordinates of the pushed feature are:
X′=x+h*cosθ
Y′=y+h*sinθ
wherein x is the longitude of the sampling point position, y is the latitude of the sampling point position, θ is the angle between the road on which the sampling point lies and the horizontal, and h is the push length.
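The coordinate push can be written directly from the two formulas above (a minimal sketch; the degree-based angle convention and planar units are assumptions for the example, and pushing to the opposite side simply negates h):

```python
import math

def push_point(x, y, theta_deg, h):
    """Offset a road sampling point by distance h using the angle theta
    between the road and the horizontal, per the claimed formulas:
    X' = x + h*cos(theta), Y' = y + h*sin(theta)."""
    t = math.radians(theta_deg)
    return x + h * math.cos(t), y + h * math.sin(t)
```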
3. The method for determining land use category fused with street view images according to claim 1, wherein in step 2, the process of performing semantic segmentation and category judgment on the street view images with a convolutional neural network is as follows:
(1) performing semantic segmentation on the street view image: pixels are segmented according to the different semantics expressed in the image to obtain labels for the different ground features in the image, and large regions of interest are retained as the labels used for category judgment;
(2) street view image category judgment: a convolutional neural network is adopted as the street view image classification model; an equal number of street view images is selected for each category according to the chosen classification system, the selected model is trained with the region-of-interest labels obtained in the previous step, and the trained model parameters are then used to obtain the category attributes of the remaining photos.
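As a simplified stand-in for the category judgment step (the claim uses a trained convolutional neural network; the sketch below merely reduces a segmentation mask to its dominant label, with invented label names):

```python
from collections import Counter

def dominant_category(mask):
    """mask: 2D list of per-pixel semantic labels produced by the
    segmentation step. Returns the label covering the largest share
    of the image and that share as a fraction of all pixels."""
    counts = Counter(label for row in mask for label in row)
    label, n = counts.most_common(1)[0]
    return label, n / sum(counts.values())

# Toy 2x3 mask; a real mask would come from the segmentation network.
mask = [["building", "building", "sky"],
        ["building", "road", "sky"]]
```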
CN201910539888.9A 2019-06-21 2019-06-21 Method for determining land utilization category of street view image Active CN110263717B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910539888.9A CN110263717B (en) 2019-06-21 2019-06-21 Method for determining land utilization category of street view image

Publications (2)

Publication Number Publication Date
CN110263717A CN110263717A (en) 2019-09-20
CN110263717B true CN110263717B (en) 2021-04-09

Family

ID=67919959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910539888.9A Active CN110263717B (en) 2019-06-21 2019-06-21 Method for determining land utilization category of street view image

Country Status (1)

Country Link
CN (1) CN110263717B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111091054B (en) * 2019-11-13 2020-11-10 广东国地规划科技股份有限公司 Method, system and device for monitoring land type change and storage medium
CN111177587B (en) * 2019-12-12 2023-05-23 广州地理研究所 Shopping street recommendation method and device
CN111402131B (en) * 2020-03-10 2022-04-01 北京师范大学 Method for acquiring super-resolution land cover classification map based on deep learning
CN111444783B (en) * 2020-03-11 2023-11-28 中科禾信遥感科技(苏州)有限公司 Crop planting land block identification method and device based on pixel statistics
CN111414878B (en) * 2020-03-26 2023-11-03 北京京东智能城市大数据研究院 Social attribute analysis and image processing method and device for land parcels
CN111597377B (en) * 2020-04-08 2021-05-11 广东省国土资源测绘院 Deep learning technology-based field investigation method and system
CN111598048B (en) * 2020-05-31 2021-06-15 中国科学院地理科学与资源研究所 Urban village-in-village identification method integrating high-resolution remote sensing image and street view image
CN113469226B (en) * 2021-06-16 2022-09-30 中国地质大学(武汉) Land utilization classification method and system based on street view image
CN113722530B (en) * 2021-09-08 2023-10-24 云南大学 Fine granularity geographic position positioning method
CN114155433B (en) * 2021-11-30 2022-07-19 北京新兴华安智慧科技有限公司 Illegal land detection method and device, electronic equipment and storage medium
CN114529838B (en) * 2022-04-24 2022-07-15 江西农业大学 Soil nitrogen content inversion model construction method and system based on convolutional neural network
CN116129278B (en) * 2023-04-10 2023-06-30 牧马人(山东)勘察测绘集团有限公司 Land utilization classification and identification system based on remote sensing images

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105868533B (en) * 2016-03-23 2018-12-14 四川理工学院 Based on Internet of Things and the integrated perception of 3S technology river basin water environment and application method
CN109657602A (en) * 2018-12-17 2019-04-19 中国地质大学(武汉) Automatic functional region of city method and system based on streetscape data and transfer learning

Similar Documents

Publication Publication Date Title
CN110263717B (en) Method for determining land utilization category of street view image
CN108573276B (en) Change detection method based on high-resolution remote sensing image
CN111582194B (en) Multi-temporal high-resolution remote sensing image building extraction method based on multi-feature LSTM network
Feng et al. GCN-based pavement crack detection using mobile LiDAR point clouds
CN109858450B (en) Ten-meter-level spatial resolution remote sensing image town extraction method and system
CN108921120B (en) Cigarette identification method suitable for wide retail scene
CN111626947A (en) Map vectorization sample enhancement method and system based on generation of countermeasure network
CN112766155A (en) Deep learning-based mariculture area extraction method
CN114241326B (en) Progressive intelligent production method and system for ground feature elements of remote sensing images
CN110956207B (en) Method for detecting full-element change of optical remote sensing image
CN111666909A (en) Suspected contaminated site space identification method based on object-oriented and deep learning
CN106845458A (en) A kind of rapid transit label detection method of the learning machine that transfinited based on core
CN116258956A (en) Unmanned aerial vehicle tree recognition method, unmanned aerial vehicle tree recognition equipment, storage medium and unmanned aerial vehicle tree recognition device
CN108022245A (en) Photovoltaic panel template automatic generation method based on upper thread primitive correlation model
CN113033386B (en) High-resolution remote sensing image-based transmission line channel hidden danger identification method and system
CN105631849B (en) The change detecting method and device of target polygon
CN110929739B (en) Automatic impervious surface range remote sensing iterative extraction method
CN110175638B (en) Raise dust source monitoring method
WO2023116359A1 (en) Method, apparatus and system for classifying green, blue and gray infrastructures, and medium
Schiewe Status and future perspectives of the application potential of digital airborne sensor systems
CN112580504B (en) Tree species classification counting method and device based on high-resolution satellite remote sensing image
CN111639672B (en) Deep learning city function classification method based on majority voting
CN115115954A (en) Intelligent identification method for pine nematode disease area color-changing standing trees based on unmanned aerial vehicle remote sensing
CN112036246B (en) Construction method of remote sensing image classification model, remote sensing image classification method and system
CN114863274A (en) Surface green net thatch cover extraction method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant