CN115346009B - Geographic entity semantic modeling method based on hyperspectral data and oblique three-dimensional data - Google Patents

Geographic entity semantic modeling method based on hyperspectral data and oblique three-dimensional data

Info

Publication number
CN115346009B
CN115346009B
Authority
CN
China
Prior art keywords
hyperspectral
geographic
geographic entity
pixel element
remote sensing
Prior art date
Legal status
Active
Application number
CN202210860449.XA
Other languages
Chinese (zh)
Other versions
CN115346009A (en)
Inventor
刘俊伟
郭大海
王志伟
刘亚萍
赵雅新
Current Assignee
Shanghai Air Remote Information Technology Co ltd
Original Assignee
Shanghai Air Remote Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Air Remote Information Technology Co ltd
Publication of CN115346009A
Application granted
Publication of CN115346009B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/58 Extraction of image or video features relating to hyperspectral data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Remote Sensing (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a geographic entity semantic modeling method based on hyperspectral data and oblique three-dimensional data, which comprises the following steps: S1, under a preset atmospheric environment, collecting the near-ground hyperspectra R_li under typical geographic entity classifications, performing intra-class spectrum fusion on the collected near-ground hyperspectra, carrying out oblique photography, and also collecting the remote sensing hyperspectral R_p of the same geographic area corresponding to the near-ground hyperspectra; S2, extracting the optimal band combination of each geographic entity class from the fusion spectrum R_ci of each classification, and forming the near-ground combined hyperspectral R_i of the optimal band combination through band cutting and splicing; S3, scanning the remote sensing hyperspectral pixel element by pixel element to form a semantic system, and finally either automatically modeling the oblique photographic data of each scanned pixel element or, as an alternative, associating the oblique photographic data with the pixel element center points in the remote sensing image to form a mapping from the oblique three-dimensional monomer model to the geographic entity classification, thereby completing efficient and accurate three-dimensional semantic modeling of geographic entities based on hyperspectral data.

Description

Geographic entity semantic modeling method based on hyperspectral data and oblique three-dimensional data
Technical Field
The invention provides a three-dimensional semantic modeling method for the natural resources industry in the field of physical and human geography, relates to semantic annotation using hyperspectral data, in particular to a geographic entity semantic modeling method based on hyperspectral data and oblique three-dimensional data, and belongs to the field of geographic landscape semantic indexing.
Background
Hyperspectral data form a three-dimensional cube combining a two-dimensional image with a one-dimensional spectrum of each ground object. In the prior art, remote sensing technology is mainly applied to classification studies of crop growth, disaster resistance and soil composition, where the required optical and chemical parameters are mostly calculated by inversion; the problem of semantic classification of geographic entities using hyperspectral data has received little attention.
For hyperspectral data processing, the prior art usually collects a single geographic class, such as a crop, so that the collected image can easily be processed as a whole. However, when the study area is expanded to a larger geographic region, it is difficult to handle the various geographic entities presented in the remote sensing image, and feature band extraction can therefore not be performed on the image as a whole. If RNN, CNN, SVM or other improved algorithms are adopted, different samples are needed for training, the extraction accuracy is below 71%, and the algorithmic load increases greatly.
In addition, remote sensing hyperspectral data are limited by the atmospheric environment and require complex correction algorithms, which raises the cost of data processing and lowers the final efficiency of semantic annotation. At the same time, many correction algorithms exist, and it cannot be determined which algorithm is more suitable for a given remote sensing image acquisition. Therefore, faced with complex and changeable acquisition environments containing unknown and uncontrollable factors, a hyperspectral acquisition method that bypasses the atmospheric constraints needs to be considered.
Using an unmanned aerial vehicle greatly reduces the influence of the atmospheric environment: under the flight-height limit of 120 m, and with most cloud layers above roughly 200 m, acquiring ground object images with a UAV avoids atmospheric effects on the images. In the prior art, three-dimensional reconstruction of geographic entities is realized by multi-angle shooting with UAV oblique photography. However, prior-art oblique photography only covers five directions (nadir plus front, back, left and right) and lacks the photography of the four corner angles of the eight neighborhood, so the amount of information is incomplete.
Disclosure of Invention
In view of the incompleteness of the oblique-photography three-dimensional semantic modeling information of geographic entities in the prior art, the invention adopts a new concept of oblique photography under semantic annotation: on the premise of performing semantic annotation by hyperspectral means, the annotation is fused with oblique photography data. The main aspects are as follows: collecting a remote sensing image of a first geographic area; collecting the hyperspectra of a second geographic region and extracting the characteristic spectral segments of the hyperspectrum of each geographic entity type; scanning the geographic entity type of each pixel in the remote sensing image by image pixel scanning; and scanning with a UAV based on eight-neighborhood (nine-direction) oblique photography to form a data set in which semantic annotation and images are fused into oblique three-dimensional monomers.
Based on the above consideration, the invention provides a geographic entity semantic modeling method based on hyperspectral data and oblique three-dimensional data, which comprises the following steps:
S1, under a preset atmospheric environment, collecting the near-ground hyperspectral R_li under each typical geographic entity classification, where i denotes the serial number of a geographic entity classification; performing intra-class spectrum fusion on the collected near-ground hyperspectra to obtain a fusion spectrum R_ci; and simultaneously collecting the remote sensing hyperspectral R_p of the same geographic area corresponding to the near-ground hyperspectra;
S2, extracting the optimal band combination of each geographic entity class from the fusion spectrum R_ci of each classification, and forming the near-ground combined hyperspectral R_i of the optimal band combination through band cutting and splicing;
S3, scanning the remote sensing hyperspectral pixel element by pixel element, comparing the hyperspectrum of each scanned pixel element with the optimal combined band spectrum of each geographic entity class to determine the geographic entity class to which the currently scanned pixel element belongs, and mapping the identified type to the geographic entity classification field to form a semantic system, so that the labeling of the geographic entity semantics of the remote sensing image in the acquired remote sensing hyperspectral is completed when the scanning is finished;
Collecting the near-ground hyperspectral R_li under a typical geographic entity class in S1 further comprises the following specific step P1-1:
an unmanned aerial vehicle carrying an oblique photographing device scans pixel elements one by one over the area covered by the remote sensing hyperspectral to obtain a plurality of oblique images; the coordinates of the unmanned aerial vehicle in the real geographic coordinate system E when each pixel element is scanned are recorded, and the spatial coordinates of each pixel in the oblique image of every angle and the corresponding ground projection coordinates are calculated;
after step S3 is completed, the method further comprises the following steps:
P1-2, according to the geographic entity semantic system completed in step S3, establishing a mapping relation between the ground projection coordinates corresponding to each pixel element and the geographic entity classifications in the semantic system;
P1-3, forming a three-dimensional model of the set of oblique three-dimensional monomers from the oblique photographic data by automatic modeling; the ground projection coordinates of all pixels in the three-dimensional model under the same coordinate system E are then the ground projection coordinates of the pixel elements in P1-2, so the same mapping relation as in P1-2 exists between the ground projection coordinates of all pixels in the three-dimensional model and the geographic entity classifications in the semantic system;
P1-4, selecting any point in the three-dimensional model of P1-3, obtaining the geographic entity classification in the semantic system corresponding to that point, and displaying the classification information at the spatial position of the model near that point; the classification information can be dismissed at any time.
Preferably, the specific step P1-3 is replaced by: displaying, in the remote sensing image dimension of the remote sensing hyperspectral, the center point of each pixel element scanned by the unmanned aerial vehicle in P1-1; clicking any center point then displays the oblique images taken at that center point, replacing automatic modeling; in this case the semantic system is still used, and clicking any point in any oblique image maps and displays, through the ground projection coordinates of that point, the corresponding geographic entity classification information, which can likewise be dismissed at any time;
that is, the multi-view model consisting of the plurality of (for example nine) oblique images of each scanned pixel element can also be regarded as a three-dimensional model obtained in a snapshot manner, which directly saves the several months that modeling would otherwise require.
Preferably, all vehicles on roads in the oblique photographs are deleted before the replacement of specific step P1-3 is performed. Preferably, the specific deletion method is to fill all pixels at the coordinates where the road is located with a preset color pixel value.
Preferably, in S3, after the labeling of the geographical entity semantics of the remote sensing image in the acquired remote sensing hyperspectrum is completed, the method further includes:
applying the optimal band combinations of the various geographic entity classifications extracted in S2 to the remote sensing hyperspectral that has been semantically annotated in S3, and obtaining the remote sensing combined hyperspectral R'_i corresponding to the remote sensing hyperspectral according to the method of S2; using the near-ground combined hyperspectral R_i to correct the remote sensing combined hyperspectral R'_i, obtaining the corrected remote sensing combined hyperspectral R_cor = f(R'_i, R_i), where f(·) is the correction operation; preferably f(·) corrects the peak value and/or integral intensity of the corresponding characteristic bands of R'_i to those of R_i, specifically by fitting each band's spectral peak with a Gaussian function, thereby achieving the correction.
In this way, semantic labeling of each geographic entity is completed using the near-ground hyperspectral data, the semantically annotated remote sensing hyperspectral is corrected, and the efficiency and simplicity of semantic labeling are improved.
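A minimal sketch of the correction f(·) follows, under the assumption that each characteristic band of R'_i is fitted with a Gaussian and its peak is rescaled to the peak of the corresponding band of R_i; the function names, the initial-guess heuristics and the use of scipy.optimize.curve_fit are illustrative assumptions, not part of the patented method.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(w, a, mu, sigma, base):
    # Single Gaussian peak on a constant baseline.
    return a * np.exp(-(w - mu) ** 2 / (2.0 * sigma ** 2)) + base

def correct_band(wavelengths, remote_band, near_band):
    """Fit one optimal band of R'_i with a Gaussian and rescale its peak
    height to the peak of the corresponding band of R_i (assumed rule)."""
    p0 = [remote_band.max() - remote_band.min(),          # amplitude guess
          wavelengths[np.argmax(remote_band)],            # peak position guess
          (wavelengths[-1] - wavelengths[0]) / 6.0,       # width guess
          remote_band.min()]                              # baseline guess
    (a, mu, sigma, base), _ = curve_fit(gaussian, wavelengths, remote_band,
                                        p0=p0, maxfev=5000)
    target_peak = near_band.max() - near_band.min()       # peak height of the R_i band
    scale = target_peak / a if a != 0 else 1.0
    return gaussian(wavelengths, a * scale, mu, sigma, base)

def correct_combined_spectrum(bands_remote, bands_near):
    """bands_*: lists of (wavelengths, intensities) pairs, one per optimal band.
    Returns the corrected remote sensing combined hyperspectral R_cor band by band."""
    return [correct_band(w, r, n) for (w, r), (_, n) in zip(bands_remote, bands_near)]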
Wherein S1 specifically includes:
S1-1, preparing a corresponding number kN of unmanned aerial vehicles, k = 1, 2, 3, 4, 5, according to the number N of geographic entity types to be classified, and carrying out hyperspectral acquisition at a plurality of acquisition points of each type of geographic entity with the plurality of unmanned aerial vehicles along preset flight routes under preset meteorological conditions;
Step P1-1 specifically comprises:
P1-1, adopting an unmanned aerial vehicle carrying an oblique photographing device, setting a field of view, and constructing the minimum circumscribed square of the field of view; taking this minimum circumscribed square as the pixel element, scanning the remote sensing hyperspectral pixel element by pixel element in a prescribed scanning mode to obtain a plurality of oblique images; recording the coordinates of the unmanned aerial vehicle in the real geographic coordinate system E when each pixel element is scanned; and calculating the spatial coordinates of each pixel in the oblique image of every angle and the corresponding ground projection coordinates;
Preferably, the prescribed scanning mode is row scanning and/or column scanning. Preferably, for scanning along the same row and/or column, when scanning of the current pixel element is finished the scan shifts by one pixel-element side length and the next pixel element is scanned, as sketched below.
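The row/column scan over pixel elements can be sketched as follows; the square side length, the area size and the generator interface are illustrative assumptions.

def scan_pixel_elements(area_width_m, area_height_m, side_m, origin_e=(0.0, 0.0)):
    """Yield the centre coordinates, in the real geographic coordinate system E,
    of each pixel element, scanning row by row and shifting by one square side
    length after each element, as described for P1-1."""
    x0, y0 = origin_e
    n_cols = int(area_width_m // side_m)
    n_rows = int(area_height_m // side_m)
    for row in range(n_rows):
        for col in range(n_cols):
            cx = x0 + col * side_m + side_m / 2.0
            cy = y0 + row * side_m + side_m / 2.0
            yield (row, col), (cx, cy)

# Example: 4 m pixel elements over a 100 m x 100 m block; at each centre the
# UAV records the oblique images of that pixel element.
for index, centre in scan_pixel_elements(100.0, 100.0, 4.0):
    pass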
Preferably, the flying heights of all unmanned aerial vehicles are identical; for the various geographic entities, all UAVs start flying at the same time along preset routes, and all acquisition stops at the same time. This ensures that the various geographic entities are collected in approximately the same time period and prevents the collection work for some geographic entities from falling into two clearly different time intervals, in which case the hyperspectral data would not accurately reflect the state of a single time period.
S1-2, preprocessing the hyperspectra acquired at the plurality of acquisition points of each type of geographic entity to obtain the near-ground hyperspectral R_lij under the typical geographic entity classification, where j is the serial number of the acquisition point and each acquisition point acquires one hyperspectrum;
Preferably, the pretreatment comprises: at least one of first derivative, normalization, multiplicative scatter correction, standard normal variate transformation, and Savitzky-Golay smoothing.
S1-3, for the interior of each classification, carrying out superposition averaging of the hyperspectra acquired at the plurality of acquisition points,
R_ci = (1/K_i) · Σ_{j=1}^{K_i} R_lij,
where K_i is the number of acquisition points of the i-th class of geographic entity and the average of the spectra R_lij is taken at each wavelength to form R_ci; simultaneously, the remote sensing hyperspectral R_p of the same geographic area corresponding to the near-ground hyperspectra is acquired.
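A minimal sketch of this class-wise fusion, assuming the pre-processed spectra of one class are stacked as a NumPy array on a common wavelength grid (array names and shapes are illustrative):

import numpy as np

def fuse_class_spectra(class_spectra):
    """class_spectra: dict mapping class index i to an array of shape
    (K_i, n_wavelengths) holding the pre-processed near-ground spectra R_lij
    of the K_i acquisition points. Returns the fusion spectrum R_ci of each
    class as the per-wavelength mean."""
    return {i: spectra.mean(axis=0) for i, spectra in class_spectra.items()}

# Example with two classes and placeholder spectra of 176 wavelengths each.
rng = np.random.default_rng(0)
R_l = {1: rng.random((5000, 176)), 2: rng.random((5000, 176))}
R_c = fuse_class_spectra(R_l)    # R_c[1], R_c[2] correspond to R_c1, R_c2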
Wherein S2 specifically includes:
S2-1, extracting the optimal band combination of each geographic entity classification from the fusion spectrum R_ci of that classification, where q_i ∈ [3,9] is the number of bands in the optimal band combination of the i-th class of geographic entity and each combination is taken as an optimal band combination.
The optimal band combination is selected by the following method:
an optimal band discrimination index of the form
F_i = Σ S_u / Σ |R_uv| or F_i = Σ S_x / Σ |R_xyz|
is set, where u, v, x, y, z are hyperspectral band serial numbers of the i-th geographic entity, Q_i is the total number of hyperspectral bands of the i-th geographic entity, S is the standard deviation of each hyperspectral band calculated over the plurality of sampling points, and R_uv, R_xyz are the correlation coefficients between the bands with the corresponding subscript serial numbers; the summation is taken over all bands of every possible combination, and the three-band or four-band combination for which the index F_i is maximal is taken as the optimal band combination;
It can be seen that this way of identifying through optimal band combinations does not rely on an artificial intelligence algorithm; instead, the combination is regarded as a feature characterizing a specific geographic entity type and is used to identify that type. First, the optimal band combinations themselves differ between types; second, the integral intensity of each band also differs, and together these form the feature basis on which an entity type can be identified. For ground features whose composition is not overly complex, at most four bands are generally sufficient to distinguish them from the optimal band combinations of other target features, as in the selection sketch below.
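A sketch of the optimal band selection under the discrimination index above, assuming the per-class sampling-point spectra are available as a matrix of shape (number of sampling points, Q_i); exhaustive enumeration of all three- and four-band combinations is shown only for illustration and would normally be restricted for large Q_i.

import numpy as np
from itertools import combinations

def best_band_combination(samples, sizes=(3, 4)):
    """samples: array (n_sampling_points, Q_i) of hyperspectral band values for
    one geographic entity class. Returns the band-index tuple maximising the
    index sum(S) / sum(|R|) over the listed combination sizes."""
    stds = samples.std(axis=0)                  # S for every band
    corr = np.corrcoef(samples, rowvar=False)   # R_uv between every pair of bands
    best, best_score = None, -np.inf
    for k in sizes:
        for combo in combinations(range(samples.shape[1]), k):
            s_sum = stds[list(combo)].sum()
            r_sum = sum(abs(corr[u, v]) for u, v in combinations(combo, 2))
            score = s_sum / r_sum if r_sum > 0 else 0.0
            if score > best_score:
                best, best_score = combo, score
    return best, best_score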
S2-2, band cutting and splicing: the near-ground combined hyperspectral R_i is formed by combining the optimal bands in the order of their wavelength values. The cutting deletes all band data other than the optimal bands; the splicing, at each splice point where the optimal band segments are joined end to end in sequence, forcibly defines the intensity value at the splice point as the baseline value, while the size and range of the wavelength values spanned by each band's spectral line after splicing remain defined unchanged and the peak value at every wavelength other than the splice points is unchanged, as in the sketch below.
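A minimal sketch of the band cutting and splicing, assuming each optimal band is given as a wavelength window over the fused spectrum and the baseline value is known; the segment handling details are illustrative assumptions.

import numpy as np

def cut_and_splice(wavelengths, spectrum, band_windows, baseline=0.0):
    """wavelengths, spectrum: 1-D arrays of the fused spectrum R_ci.
    band_windows: list of (w_min, w_max) tuples of the optimal bands, in
    wavelength order. Returns the wavelength and intensity segments of the
    combined hyperspectral R_i; each segment keeps its original wavelength
    span, and the first point of every spliced segment is forced to the
    baseline value at the splice point."""
    seg_w, seg_s = [], []
    for k, (lo, hi) in enumerate(band_windows):
        mask = (wavelengths >= lo) & (wavelengths <= hi)
        w, s = wavelengths[mask], spectrum[mask].copy()
        if k > 0 and s.size:          # splice point: force the joining value to baseline
            s[0] = baseline
        seg_w.append(w)
        seg_s.append(s)
    return seg_w, seg_s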
In S3, scanning the remote sensing hyperspectral pixel element by pixel element and comparing the hyperspectrum of each scanned pixel element with the optimal combined band spectrum of each geographic entity class to determine the geographic entity class of the currently scanned pixel element specifically comprises:
S3-1, defining the pixel element size p as a square region whose side is a positive integer M ∈ [1,4] times the resolution res ∈ [0.3 m, 1 m], i.e. p = M·res × M·res, and establishing a geographic entity type database comprising geographic entity classification fields and geographic entity basic parameter fields, where the geographic entity classification fields contain the geographic entity code exclusive to each entity classification;
S3-2, scanning the remote sensing image in the remote sensing hyperspectral with the pixel element p in the prescribed scanning mode, and retrieving the corresponding hyperspectrum R_t^p within each pixel element during the scanning process, where t is the serial number of the currently scanned pixel element;
Preferably, the prescribed scanning mode is image row scanning and/or column scanning. Preferably, when scanning the same row and/or column, the scan shifts by one pixel-element side length when scanning of the current pixel element is finished and the next pixel element is scanned;
S3-3, comparing the similarity between the spectral lines of R_t^p at the bands corresponding to each geographic entity's combined hyperspectral R_i and the spectral lines of the corresponding R_i; when the highest similarity is greater than a preset value of 80-90%, the currently scanned pixel element p is identified as the corresponding geographic entity type;
When the highest similarity is smaller than the preset value, the following analysis is performed on the pixel elements within the eight-neighborhood of the current pixel element p that carry a complete image:
If already-identified geographic entity types exist in the eight neighborhood, the spectral lines of R_t^p at the bands of the near-ground combined hyperspectra corresponding to the identified geographic entities are compared for similarity with the combined hyperspectral lines of those identified geographic entity classifications; if undetermined geographic entity types exist in the eight neighborhood, identification of the geographic entity types of those undetermined neighbors is continued first;
The similarity between the spectral lines of R_t^p at the bands of the near-ground combined hyperspectra of the geographic entities represented by all the neighbors whose geographic entity type has been determined and the spectral lines of those near-ground combined hyperspectra is then calculated; the geographic entity type represented by the neighbor with the greatest similarity is taken as the geographic entity type identified for the current pixel element p, and neighbor pixel elements whose geographic entity type has been identified in this way are regarded as already scanned and are not scanned again in the subsequent scanning process;
If the eight neighborhood also contains at least one neighbor whose similarity, calculated during identification of its represented geographic entity type, is smaller than the preset value, that neighbor is not counted as a current similarity comparison object; only neighbors whose similarity is greater than the preset value are used as similarity comparison objects;
If none of the eight neighbors counts as a current similarity comparison object, the geographic entity type of the current pixel element p is defined as the geographic entity type identified for the previous or the next pixel element,
the similarity comparison method comprises the following steps:
S3-3-1, processing R_t^p according to the step of S2-2 to obtain the combined hyperspectral R_t,i^p corresponding to R_t^p, where i indicates that R_t^p is combined in the same way as the R_i corresponding to the i-th geographic entity type;
S3-3-2, comparing the integral intensity value of each band in the optimal combined bands of R_t,i^p with the integral intensity value of the corresponding band of R_i, and defining the reciprocal of the standard deviation of these comparisons as the similarity; a minimal sketch of this similarity computation is given below.
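The sketch below assumes the integral intensity of each band is obtained by trapezoidal integration and that the similarity is the reciprocal of the standard deviation of the band-wise differences; the exact normalisation is an assumption.

import numpy as np

def band_integral_intensities(seg_wavelengths, seg_intensities):
    # Integral intensity of each spliced band (trapezoidal integration).
    return np.array([np.trapz(s, w) for w, s in zip(seg_wavelengths, seg_intensities)])

def similarity(pixel_segments, class_segments):
    """pixel_segments / class_segments: (wavelength segments, intensity segments)
    of the pixel's combined spectrum and of the class combined hyperspectral R_i.
    Returns the reciprocal of the standard deviation between the band integral
    intensities (assumed definition)."""
    a = band_integral_intensities(*pixel_segments)
    b = band_integral_intensities(*class_segments)
    sd = float(np.std(a - b))
    return float("inf") if sd == 0.0 else 1.0 / sd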
Mapping the identified type to the geographic entity classification field to form a semantic system specifically comprises:
S3-4, assigning the corresponding geographic entity code to the classification field identified for each scanned pixel element,
S3-5, assigning the corresponding geographic entity basic parameter fields to the space-time coordinates of the center of each pixel element; the geographic entity basic parameter fields assigned to the space-time coordinates of each scanned pixel element and the classification fields assigned the geographic entity codes form the semantic system, as sketched below.
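A sketch of one record of such a semantic system, keyed by the space-time coordinates of the pixel-element centre; all field names and the example code value are hypothetical placeholders.

from dataclasses import dataclass, field

@dataclass
class SemanticRecord:
    # Geographic entity basic parameter fields, tied to the space-time
    # coordinates of the scanned pixel-element centre.
    centre_xyz: tuple        # centre coordinates in the real geographic system E
    timestamp: float         # acquisition time of the scanned pixel element
    # Classification field carrying the exclusive geographic entity code.
    entity_code: str         # e.g. a hypothetical code such as "09"
    entity_name: str = ""    # human-readable class name, e.g. "rice"
    extra: dict = field(default_factory=dict)   # further basic parameters

# The semantic system is then the collection of such records,
# one per scanned pixel element.
semantic_system: list = []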
Preferably, after step S3-5, the method further comprises:
P1-2, according to the geographic entity semantic system completed in S3-5, establishing a mapping relation between the ground projection coordinates corresponding to each pixel element and the geographic entity classifications in the semantic system;
P1-3, forming a three-dimensional model of the set of oblique three-dimensional monomers from the oblique photographic data by automatic modeling; the ground projection coordinates of all pixels in the three-dimensional model under the same coordinate system E are then the ground projection coordinates of the pixel elements in P1-2, so the same mapping relation as in P1-2 exists between the ground projection coordinates of all pixels in the three-dimensional model and the geographic entity classifications in the semantic system;
P1-4, selecting any point in the three-dimensional model of P1-3, obtaining the geographic entity classification in the semantic system corresponding to that point, and displaying the classification information at the spatial position of the model near that point; the classification information can be dismissed at any time.
Preferably, the specific step P1-3 is replaced by: displaying, in the remote sensing image dimension of the remote sensing hyperspectral, the center point of each pixel element scanned by the unmanned aerial vehicle in P1-1, and displaying the oblique images taken at a center point by clicking that center point, replacing automatic modeling.
Preferably, all vehicles on roads in the oblique photographs are deleted before the replacement of specific step P1-3 is performed.
Preferably, the specific deletion method is to fill all pixels at the coordinates where the road is located with a preset color pixel value.
Advantageous effects
1. Machine learning algorithms are avoided: the combined hyperspectral under the optimal combination of 3-4 hyperspectral bands is used as the identification feature to classify and divide the geographic entity data, which simplifies the semantic labeling algorithm and improves labeling efficiency;
2. The idea of combining remote sensing images with pixel-element scanning of oblique photography is adopted: entity classifications within pixel elements are identified from the combined hyperspectral and corrected with the eight neighborhood, so that the extent of each geographic classification is accurately extracted from the remote sensing hyperspectral, a semantic system between pixel-element classification and geographic coding is established, and semantic annotation is completed; at the same time, compared with the five-direction oblique photography of the prior art, the acquisition of oblique images over the complete eight neighborhood is added, which avoids possible blind angles and supplements the neighborhood information that modeled features would otherwise lack.
Drawings
FIG. 1 shows the true color image dimension of the remote sensing hyperspectral of a typical exemplary area, in which a schematic diagram of the oblique-photography pixel-element scanning method based on the minimum circumscribed square of the field of view and of the data structure of the oblique photography database is given,
figure 2 is a schematic diagram of a near-ground combined hyperspectral acquisition process for twelve types of geographic entities,
FIG. 3 is a schematic view of the specific cutting, splicing and combining process of the near-ground combined hyperspectral, in which FIG. 3a shows the bands to be cut, FIG. 3b shows the cut state, and FIG. 3c and FIG. 3d show the splicing,
FIG. 4a shows the boxed area of the exemplary region of FIG. 1; FIG. 4b shows the scan of the area near the junction of rice and non-main crop, enlarged on the right of FIG. 4b; FIG. 4c shows the hyperspectral R_1^p retrieved in a pixel element when the scan has passed the position of pixel element 1, from which the spectral-peak similarity with the corresponding bands of the near-ground combined hyperspectra is calculated to determine the rice type; FIG. 4d relates to pixel element 2, which crosses the boundary between rice and non-main crop and whose similarity with the rice-representative combined hyperspectral R_8 of FIG. 4d is the largest among all near-ground combined hyperspectra.
Figure 5 is a flow chart of a semantic hierarchy forming process,
figure 6 is a map of the ground projection coordinates and the geographic entity class in the oblique photography database,
FIG. 7 is a diagram showing the visual effect of ground-object classification, in which different colors are formed after semantic annotation once all thirteen classes of geographic entities have been classified.
Detailed Description
Example 1
FIG. 1 shows the true color image dimension of a typical remote sensing hyperspectral of a typical area, which is divided into thirteen categories: urban roads, rural roads, greenhouses, buildings, rivers, ponds, woodland, grassland, rice, corn, non-main crops, bare land and other ground features. Rivers and ponds are both regarded as water bodies, giving ten classes of ground entities to be collected in total; 12-36 unmanned aerial vehicles carrying hyperspectral equipment are arranged and are respectively responsible for the hyperspectral collection of these ten classes, each collecting 1000-10000 hyperspectra within 1 hour, i.e. at an acquisition frequency of 0.36-3.6 acquisitions per second.
The flying heights of all unmanned aerial vehicles are identical, in the range 100-120 m; for the various geographic entities, all UAVs start flying at the same time along preset routes, and all acquisition stops 1 hour after the start. The hyperspectra collected at the 5000 acquisition points of each type of geographic entity are preprocessed with first-derivative pretreatment, normalization, multiplicative scatter correction and Savitzky-Golay smoothing, as shown in FIG. 2, obtaining the near-ground hyperspectra {R_l1, R_l2, R_l3, R_l4, R_l5, R_l6, R_l7, R_l8, R_l9, R_l10, R_l11, R_l12} under the typical geographic entity classifications, each element comprising the 5000 acquisition points, with each acquisition point acquiring one hyperspectrum;
at the same time of collecting the hyperspectral near-ground, as shown in fig. 1, an unmanned aerial vehicle with an oblique photographing device is adopted, a minimum external square of the visual field range is constructed through the visual field range set by changing the depression angle of other oblique photographing devices in eight neighborhood of the central overlooking photographing device, the minimum external square is taken as a pixel element, the pixel elements with the minimum external square as the pixel element are scanned one by one in a scanning mode of combining the right side to the left side and the upper side and the lower side of the remote sensing hyperspectral according to the visual angle of fig. 1, and each pixel is scanned and then the next pixel element is scanned by translating one square side length.
In practical applications, the oblique photographing device mounted on the unmanned aerial vehicle may be a nine-direction, nine-lens camera. FIG. 1 takes the nine-lens camera as an example: the nine-direction field of view is set by changing the depression angles of the other oblique photographing devices in the eight neighborhood of the central top-view photographing device, and the minimum circumscribed square of the nine-direction field of view is constructed; the collected oblique photographic data accordingly correspond to nine-direction oblique photographic data arranged as a nine-grid. Of course, besides the unmanned aerial vehicle equipped with the nine-direction oblique photographing device described above, an unmanned aerial vehicle equipped with a five-direction oblique photographing device may also be employed.
Each pixel element scan yields 8 oblique photographic images and 1 top-view image. As shown in FIG. 1, take the enlarged right-side view of the first pixel element scanned in the upper right corner of the image as an example. There are 9 angle ranges of oblique photography of the circumscribed square. For the image obtained by the photographing device in each angle range, the coordinates of the UAV in the real geographic coordinate system E during scanning of each pixel element (the UAV being considered to be at the center of the middle top-view photographing range) are recorded, and the spatial coordinates of each pixel in the 9 angle oblique images and the corresponding ground projection coordinates (hereinafter simply coordinates) are calculated. A dedicated oblique photography database is built to store these data, as shown in FIG. 1: sub-libraries a-i respectively store, for each scanned pixel element, the spatial coordinates of the corresponding angle, the corresponding ground projection coordinates, and an additional image storage unit. For example, sub-library e holds, for each scanned pixel element, the top-view image together with the spatial coordinates of its pixels and the corresponding ground projection coordinates. Sub-library e and the other sub-libraries are thus in effect multi-dimensional arrays: for coordinate storage, the first dimension is the data elements in which the pixel elements are arranged in sequence (for example, data element 1 and data element 2 in FIG. 1 correspond to the two scanned pixel elements in the figure), and the second dimension is the coordinates stored within each first-dimension data element, which may be arranged in the order in which the images within the pixel element are extracted, e.g. a two-dimensional array storing the coordinates of the position of each pixel in the image (the structural analysis of data element 1 in FIG. 1 is schematic, and the three dots denote omitted entries). A sketch of this database structure follows.
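The record layout, type names and the use of the letters a-i below follow the description above but are otherwise illustrative assumptions.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Coordinate = Tuple[float, float, float]

@dataclass
class AngleRecord:
    image_path: str                        # additional image storage unit
    pixel_space_coords: List[Coordinate]   # spatial coordinates of each pixel
    ground_projection: List[Coordinate]    # corresponding ground projection coordinates

@dataclass
class PixelElementRecord:
    element_index: int                     # data element 1, 2, ... in scan order
    centre_in_E: Coordinate                # UAV position in the real geographic system E
    # Sub-libraries a-i: eight oblique angles plus the central top view (e).
    angles: Dict[str, AngleRecord] = field(default_factory=dict)

# The database is then a list of PixelElementRecord entries, i.e. effectively
# the multi-dimensional array described above.
oblique_db: List[PixelElementRecord] = []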
For each of the twelve classifications, the hyperspectra acquired at the 5000 acquisition points are superposed and averaged to obtain {R_c1, R_c2, R_c3, R_c4, R_c5, R_c6, R_c7, R_c8, R_c9, R_c10, R_c11, R_c12}, each element containing only one fused hyperspectral datum;
at the same time, the remote sensing hyperspectral R_p of the same geographic area corresponding to the near-ground hyperspectra is acquired.
Example 2
From the fusion spectra {R_c1, R_c2, R_c3, R_c4, R_c5, R_c6, R_c7, R_c8, R_c9, R_c10, R_c11, R_c12} of the geographic entity classifications, the optimal band combination of each geographic entity class is extracted, where q_i ∈ [3,9] is the number of bands in the optimal band combination of the i-th geographic entity and each combination is taken as an optimal band combination.
The optimal band combination is selected by the following method: an optimal band discrimination index of the form
F_i = Σ S_u / Σ |R_uv| or F_i = Σ S_x / Σ |R_xyz|
is set, where u, v, x, y, z are hyperspectral band serial numbers of the i-th geographic entity, Q_i = 7 is the total number of hyperspectral bands of the i-th class of geographic entity, S is the standard deviation of each hyperspectral band calculated over the multiple sampling points, and R_uv, R_xyz are the correlation coefficients between the bands with the corresponding subscript serial numbers; the summation is taken over all bands of every possible combination, and the combination, i.e. the three-band, four-band, up to n-band combination, for which the index is maximal is taken as the optimal band combination. The optimal combined bands of the twelve geographic entity classifications are thus obtained: if, for one of the geographic entity types, such as house, three bands are found to maximize F_i, the optimal combination of that type consists of those three bands; for rice, four bands are adopted as the corresponding optimal combination.
As shown in FIG. 3, FIG. 3a is one of the fused hyperspectra R_ci of a geographic entity type. Its optimal combined bands have three spectral peaks, numbered 1, 2 and 3. The spectral data between peak 2 and peak 3 are cut away, giving the cut state of FIG. 3b, and peak 2 and peak 3 are then spliced in the order of the wavelength values of the optimal bands, i.e. in the wavelength order of peak 1, peak 2 and peak 3. Note that the leftmost data point of the band of peak 3, circled in FIG. 3c, is greater than the baseline value, so after splicing this data point is forcibly defined as the baseline value and coincides with the rightmost baseline value of the band of peak 2, producing the spliced state of FIG. 3d. After splicing, peak 1 and peak 2 remain in place while peak 3 undergoes a shift; apart from the leftmost value of its band being forcibly defined as the baseline value, the size and range of the wavelength values spanned by the spectral line of peak 3 remain defined unchanged and are therefore no longer the wavelength range indicated by the wavelength axis.
For the other geographic entity types the same method is applied, finally forming the near-ground combined hyperspectra {R_1, R_2, R_3, R_4, R_5, R_6, R_7, R_8, R_9, R_10, R_11, R_12} (as shown in FIG. 2).
Example 3
This embodiment describes the method of classifying the geographic entities of the remote sensing image based on pixel-element scanning. As shown in FIG. 4a, the area marked by the box in the true color image of the remote sensing hyperspectral of the exemplary region of FIG. 1 is taken as the embodiment of pixel scanning; the pixel element size is set to 4 m x 4 m and the resolution is 1 m for the rice area and the adjoining non-main-crop area boxed in FIG. 4a. The scan of the area near the junction of rice and non-main crop in FIG. 4b is shown enlarged on the right of FIG. 4b, and proceeds from right to left and from top to bottom. As shown in FIG. 4b, when the scan has passed the position of pixel element 1 in the figure, the hyperspectral R_1^p within that pixel element, as in FIG. 4c, is retrieved, the spectral-peak similarity of the corresponding bands with the near-ground combined hyperspectra {R_1, R_2, R_3, R_4, R_5, R_6, R_7, R_8, R_9, R_10, R_11, R_12} is calculated, and the rice type is determined. In fact, pixel element 1 lies entirely within the rice field.
However, in FIG. 4b, pixel element 1 is only the initial scanning position; when the scan moves to the next pixel element 2, pixel element 2 spans the boundary between rice and non-main crop. The similarity of pixel element 2 with the rice-representative combined hyperspectral R_8 shown in FIG. 4d is the largest among all near-ground combined hyperspectra but is less than the preset value of 80%, so the eight neighborhood of pixel element 2 shown in FIG. 4b is analyzed: pixel elements 4 and 7 lie entirely within the rice area; pixel elements 6 and 9 lie entirely in the non-main-crop area and have the lowest similarity; only pixel elements 5 and 8 are in a similar boundary situation to pixel element 2 itself. Calculation shows that the similarity between pixel element 2 and pixel element 8 is relatively large, so the type described by pixel element 8 is taken; since the similarity between pixel element 8 and R_8 has already been calculated to exceed 80%, pixel element 8 is of the rice type, and the type of pixel element 2 is therefore defined as rice.
If the pixel element is instead defined at 1 m x 1 m resolution, the original pixel element is divided into four along the scanning direction: the leftmost part lies completely in the non-main crop, while the other 3/4 lies essentially within the rice range, so the two types, rice and non-main crop, are distinguished more finely within the area of pixel element 2. It can be seen that the smaller the pixel element, the finer the recognition. A sketch of the eight-neighborhood fallback logic used above is given below.
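The callable interface, the threshold handling and the treatment of undetermined neighbours in this sketch are illustrative assumptions.

def classify_pixel(pixel_spec, class_specs, neighbours, similarity, threshold):
    """pixel_spec: combined spectrum of the scanned pixel element.
    class_specs: dict class_id -> near-ground combined hyperspectral R_i.
    neighbours: list of (neighbour_spec, neighbour_class or None) for the
    eight-neighborhood pixel elements that carry a complete image.
    similarity: callable returning a score comparable with the preset threshold."""
    scores = {cid: similarity(pixel_spec, spec) for cid, spec in class_specs.items()}
    best_class = max(scores, key=scores.get)
    if scores[best_class] >= threshold:
        return best_class                    # direct identification
    # Fall back to the eight neighborhood: keep only neighbours whose own
    # similarity to their identified class exceeds the threshold.
    candidates = [(similarity(pixel_spec, spec), cls)
                  for spec, cls in neighbours
                  if cls is not None and similarity(spec, class_specs[cls]) >= threshold]
    if candidates:
        return max(candidates, key=lambda c: c[0])[1]   # type of the most similar neighbour
    # Otherwise the caller assigns the type of the previous or next pixel element.
    return None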
As shown in FIG. 5, the corresponding geographic entity code is assigned to the classification field identified for each scanned pixel element, the corresponding geographic entity basic parameter fields are assigned to the space-time coordinates of the center of each pixel element, and the geographic entity basic parameter fields assigned to the space-time coordinates of each scanned pixel element together with the classification fields assigned the geographic entity codes form the semantic system; the semantic labeling of the geographic entity types of the remote sensing image of Embodiment 1 is thus completed.
Establishing a mapping relation between the ground projection coordinates corresponding to each pixel element and geographic entity classification in the semantic system according to the completed geographic entity semantic system;
As shown in FIG. 6, taking sub-library e of the oblique photography database in FIG. 1 as an example, one of the scanned pixel elements is retrieved, each ground projection coordinate is mapped into the geographic entity basic parameter fields of the semantic system and the corresponding coordinates are found; through the association pointer these coordinates are associated to the corresponding geographic entity classification in the classification field, thereby establishing the mapping relation. In other words, the geographic entity classification to which a coordinate belongs can be found from the ground projection coordinate in the oblique image, as sketched below.
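Nearest-centre matching and the dictionary record layout in this sketch are illustrative assumptions standing in for the association pointers of FIG. 6.

def classify_ground_point(ground_xy, semantic_records):
    """semantic_records: iterable of dicts with keys 'centre_xy' (pixel-element
    centre in E), 'entity_code' and 'entity_name' (classification field).
    Returns the classification of the pixel element whose centre is nearest to
    the given ground projection coordinate."""
    best = min(semantic_records,
               key=lambda r: (r['centre_xy'][0] - ground_xy[0]) ** 2
                             + (r['centre_xy'][1] - ground_xy[1]) ** 2)
    return best['entity_code'], best['entity_name']

# Example lookup with two hypothetical records.
records = [{'centre_xy': (10.0, 20.0), 'entity_code': '09', 'entity_name': 'rice'},
           {'centre_xy': (14.0, 20.0), 'entity_code': '11', 'entity_name': 'non-main crop'}]
print(classify_ground_point((11.2, 19.5), records))     # ('09', 'rice')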
The oblique photographic data are formed into a three-dimensional model of the set of oblique three-dimensional monomers by automatic modeling; the ground projection coordinates of all pixels in the three-dimensional model under the same coordinate system E are then the ground projection coordinates of the pixel elements in P1-2, so the same mapping relation as described in FIG. 6 exists between the ground projection coordinates of all pixels in the three-dimensional model and the geographic entity classifications in the semantic system;
selecting any point in the three-dimensional model, the geographic entity classification in the semantic system corresponding to that point is obtained and the classification information is displayed at the spatial position of the model near that point; the classification information can be dismissed at any time.
Preferably, the pixels at the coordinates where roads are located in all oblique photographic images are filled with preset color pixel values, and the specific step of automatic modeling is then replaced by: displaying the center point of each pixel element scanned by the UAV in Embodiment 1 in the remote sensing image dimension of the remote sensing hyperspectral, so that clicking any center point displays the oblique images taken at that center point (for example, the images in sub-libraries a-i of FIG. 1), replacing the automatic modeling.
Thereafter the method further includes:
applying the extracted optimal band combinations of the twelve geographic entity classifications to the remote sensing hyperspectral that has been semantically annotated in S3, obtaining the remote sensing combined hyperspectral R'_i corresponding to the remote sensing hyperspectral according to the method of S2, and using the near-ground combined hyperspectral R_i to correct the remote sensing combined hyperspectral R'_i, obtaining the corrected remote sensing combined hyperspectral R_cor = f(R'_i, R_i), where f(·) is the correction operation; f(·) corrects the peak value and/or integral intensity of the corresponding characteristic bands of R'_i to those of R_i, specifically by fitting each band's spectral peak with a Gaussian function, thereby achieving the correction.
Following the above method, the semantic annotation of the ten geographic entity types is obtained; FIG. 7 shows the visual ground-object classification effect, in which different colors are formed after semantic annotation once all thirteen categories of geographic entities have been classified.

Claims (11)

1. A geographic entity three-dimensional semantic modeling method based on hyperspectral data and oblique photography, characterized by comprising the following steps:
S1, under a preset atmospheric environment, collecting the near-ground hyperspectral R_li under each typical geographic entity classification, where i denotes the serial number of a geographic entity classification; performing intra-class spectrum fusion on the collected near-ground hyperspectra to obtain a fusion spectrum R_ci; and simultaneously collecting the remote sensing hyperspectral R_p of the same geographic area corresponding to the near-ground hyperspectra;
S2, extracting the optimal band combination of each geographic entity class from the fusion spectrum R_ci of each classification, and forming the near-ground combined hyperspectral R_i of the optimal band combination through band cutting and splicing;
S3, scanning the remote sensing hyperspectral pixel element by pixel element, comparing the hyperspectrum of each scanned pixel element with the optimal combined band spectrum of each geographic entity class to determine the geographic entity class to which the currently scanned pixel element belongs, and mapping the identified type to the geographic entity classification field to form a semantic system, so that the labeling of the geographic entity semantics of the remote sensing image in the acquired remote sensing hyperspectral is completed when the scanning is finished; wherein
collecting the near-ground hyperspectral R_li under a typical geographic entity class in S1 further comprises the following specific step P1-1:
an unmanned aerial vehicle carrying a nine-direction oblique photographing device scans pixel elements one by one over the area covered by the remote sensing hyperspectral to obtain a plurality of oblique images; the coordinates of the unmanned aerial vehicle in the real geographic coordinate system E when each pixel element is scanned are recorded, and the spatial coordinates of each pixel in the oblique image of every angle and the corresponding ground projection coordinates are calculated;
After step S3 is completed, the method further comprises the following steps:
P1-2, according to the geographic entity semantic system completed in step S3, establishing a mapping relation between the ground projection coordinates corresponding to each pixel element and the geographic entity classifications in the semantic system;
P1-3, forming a three-dimensional model of the set of oblique three-dimensional monomers from the nine-direction oblique photographic data by automatic modeling; the ground projection coordinates of all pixels in the three-dimensional model under the same coordinate system E are then the ground projection coordinates of the pixel elements in P1-2, so the same mapping relation as in P1-2 exists between the ground projection coordinates of all pixels in the three-dimensional model and the geographic entity classifications in the semantic system;
P1-4, selecting any point in the three-dimensional model of P1-3, obtaining the geographic entity classification in the semantic system corresponding to that point, and displaying the classification information at the spatial position of the model near that point, where the classification information can be dismissed at any time; wherein
S1 specifically comprises: S1-1, preparing a corresponding number kN of unmanned aerial vehicles, k = 1, 2, 3, 4, 5, according to the number N of geographic entity types to be classified, and carrying out hyperspectral acquisition at a plurality of acquisition points of each type of geographic entity with the plurality of unmanned aerial vehicles along preset flight routes under preset meteorological conditions;
Step P1-1 specifically comprises:
P1-1, adopting an unmanned aerial vehicle carrying a nine-direction oblique photographing device, setting a nine-direction field of view, and constructing the minimum circumscribed square of the nine-direction field of view; taking this minimum circumscribed square as the pixel element, scanning the remote sensing hyperspectral pixel element by pixel element in a prescribed scanning mode to obtain a plurality of oblique images; recording the coordinates of the unmanned aerial vehicle in the real geographic coordinate system E when each pixel element is scanned; and calculating the spatial coordinates of each pixel in the oblique image of every angle and the corresponding ground projection coordinates;
S1-2, preprocessing the hyperspectra acquired at the plurality of acquisition points of each type of geographic entity to obtain the near-ground hyperspectral R_lij under the typical geographic entity classification, where j is the serial number of the acquisition point and each acquisition point acquires one hyperspectrum;
the pretreatment comprises: at least one of first derivative, normalization, multiplicative scatter correction, standard normal variate transformation, and Savitzky-Golay smoothing;
S1-3, for the interior of each classification, carrying out superposition averaging of the hyperspectra acquired at the plurality of acquisition points,
R_ci = (1/K_i) · Σ_{j=1}^{K_i} R_lij,
where K_i is the number of acquisition points of the i-th class of geographic entity and the average of the spectra R_lij is taken at each wavelength to form R_ci; simultaneously, the remote sensing hyperspectral R_p of the same geographic area corresponding to the near-ground hyperspectra is acquired;
S2 specifically comprises: S2-1, extracting the optimal band combination of each geographic entity classification from the fusion spectrum R_ci of that classification, where q_i ∈ [3,9] is the number of bands in the optimal band combination of the i-th class of geographic entity and each combination is taken as an optimal band combination;
the optimal band combination is selected by the following method: an optimal band discrimination index of the form
F_i = Σ S_u / Σ |R_uv| or F_i = Σ S_x / Σ |R_xyz|
is set, where u, v, x, y, z are hyperspectral band serial numbers of the i-th geographic entity, S is the standard deviation of each hyperspectral band calculated over the plurality of sampling points, and R_uv, R_xyz are the correlation coefficients between the bands with the corresponding subscript serial numbers; the summation is taken over all bands of every possible combination, and the three-band or four-band combination for which the index F_i is maximal is taken as the optimal band combination;
S2-2, band cutting and splicing: the near-ground combined hyperspectral R_i is formed by combining the optimal bands in the order of their wavelength values; the cutting deletes all band data other than the optimal bands, and the splicing, at each splice point where the optimal band segments are joined end to end in sequence, forcibly defines the intensity value at the splice point as the baseline value, while the size and range of the wavelength values spanned by each band's spectral line after splicing remain defined unchanged and the peak value at every wavelength other than the splice points is unchanged;
In S3, scanning the remote sensing hyperspectral pixel element by pixel element, comparing the hyperspectrum of each scanned pixel element with the optimal combined band spectrum of each geographic entity class to determine the geographic entity class to which the currently scanned pixel element belongs, and mapping the identified type to the geographic entity classification field to form a semantic system, so that the labeling of the geographic entity semantics of the remote sensing image in the acquired remote sensing hyperspectral is completed when the scanning is finished, wherein scanning the remote sensing hyperspectral pixel element by pixel element and comparing the hyperspectrum of each scanned pixel element with the optimal combined band spectrum of each geographic entity class to determine the geographic entity class of the currently scanned pixel element specifically comprises:
S3-1, defining the pixel element size p as a square region whose side is a positive integer M ∈ [1,4] times the resolution res ∈ [0.3 m, 1 m], i.e. p = M·res × M·res, and establishing a geographic entity type database comprising geographic entity classification fields and geographic entity basic parameter fields, where the geographic entity classification fields contain the geographic entity code exclusive to each entity classification;
S3-2, scanning the remote sensing image in the remote sensing hyperspectral with the pixel element p in the prescribed scanning mode, and retrieving the corresponding hyperspectrum R_t^p within each pixel element during the scanning process, where t is the serial number of the currently scanned pixel element;
s3-3 will
Figure QLYQS_10
Corresponding to various geographic entities R i Spectral lines corresponding to the wave bands of the wave bands and corresponding various geographic entities R i The similarity comparison is carried out on the spectral lines of the pixels, and when the highest similarity is compared and is greater than 80-90% of a preset value, the pixel element p of the current scanning is identified as the corresponding geographic entity type;
and when the highest similarity is smaller than the preset value, performing the following analysis on the pixel elements with a complete image within the eight-neighborhood range of the current pixel element p:
if geographic entity types have already been identified in the eight-neighborhood, comparing the spectral lines of P_t at the bands of the near-ground combined hyperspectra corresponding to the identified geographic entities with the combined hyperspectral lines of those identified geographic entity classifications for similarity; if a pixel element of undetermined geographic entity type exists in the eight-neighborhood, continuing to identify the geographic entity type to which that undetermined neighborhood pixel element belongs;
calculating the similarity between the spectral lines of P_t at the bands of the near-ground combined hyperspectra of all geographic entities represented by the neighborhood pixel elements whose geographic entity types have been determined and the spectral lines of the near-ground combined hyperspectrum of the geographic entity represented by each such neighborhood pixel element, taking the geographic entity type represented by the neighborhood pixel element corresponding to the greatest similarity as the geographic entity type identified for the current pixel element p, and regarding each neighborhood pixel element whose geographic entity type has been identified as already scanned, so that it is not scanned and identified again in the subsequent scanning process;
if the eight-neighborhood also contains at least one neighborhood pixel element for which the similarity calculated while identifying its represented geographic entity type is smaller than the preset value, that pixel element is not counted among the current similarity comparison objects, and only the neighborhood pixel elements whose similarity is greater than the preset value are used as similarity comparison objects;
if none of the eight neighborhood pixel elements is counted among the current similarity comparison objects, defining the geographic entity type of the current pixel element p as the geographic entity type identified for the previous pixel element or the next pixel element;
the similarity comparison method comprises the following steps:
S3-3-1: subjecting P_t to the step of S2-2 to obtain the combined hyperspectrum corresponding to P_t (denoted P_t^i), wherein the superscript i indicates that the combined hyperspectrum of the pixel element p is formed in the same combining manner as the near-ground combined hyperspectrum R_i corresponding to the i-th geographic entity type;
S3-3-2: comparing the integrated intensity value of each band of the optimal band combination in P_t^i with the integrated intensity value of the corresponding band of R_i, and defining the reciprocal of the resulting standard deviation as the similarity;
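A sketch of the per-class similarity of S3-3-2, assuming the standard deviation is taken over the per-band differences between the integrated intensities of the pixel's combined spectrum and those of R_i (the claim only states that the reciprocal of a standard deviation is defined as the similarity); the small epsilon guards against division by zero.

```python
import numpy as np

def band_integrated_intensity(values, wavelengths):
    """Integrated intensity of one band segment (trapezoidal integration)."""
    return np.trapz(values, wavelengths)

def similarity(pixel_band_integrals, class_band_integrals, eps=1e-12):
    """Reciprocal of the standard deviation of the per-band differences
    between the pixel's combined hyperspectrum and the class spectrum R_i."""
    diffs = (np.asarray(pixel_band_integrals, float)
             - np.asarray(class_band_integrals, float))
    return 1.0 / (np.std(diffs) + eps)
```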
wherein mapping the identified type to the geographic entity classification field to form a semantic system specifically comprises:
S3-4: assigning the corresponding geographic entity code to the classification field identified for each scanned pixel element;
S3-5: assigning the space-time coordinates of the center of each pixel element to the corresponding geographic entity basic parameter fields; the geographic entity basic parameter fields carrying the space-time coordinates of each scanned pixel element and the classification fields carrying the geographic entity codes together form the semantic system.
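The scan-and-classify flow of S3-2/S3-3, including the threshold test and the eight-neighborhood fallback, might be organized as below. The grid layout, the helper names and the simplified handling of undetermined neighbors (which the claim identifies recursively) are assumptions; the similarity function is the one sketched after S3-3-2.

```python
def classify_pixels(pixel_spectra, class_profiles, similarity_fn, threshold):
    """pixel_spectra  : {(row, col): {class_id: band-integral vector of the pixel}}
       class_profiles : {class_id: band-integral vector of R_i}
       threshold      : preset similarity value (claim: a preset value of 80-90 %)"""
    labels = {}
    for (r, c), spectra_by_class in pixel_spectra.items():
        # direct comparison against every geographic entity class
        scores = {cid: similarity_fn(spectra_by_class[cid], prof)
                  for cid, prof in class_profiles.items()}
        best_cid = max(scores, key=scores.get)
        if scores[best_cid] >= threshold:
            labels[(r, c)] = best_cid
            continue
        # eight-neighborhood fallback: reuse classes already identified nearby
        # (undetermined neighbors would be identified recursively in the claim)
        neigh_scores = {}
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                n_cid = labels.get((r + dr, c + dc))
                if n_cid is None:
                    continue
                s = similarity_fn(spectra_by_class[n_cid], class_profiles[n_cid])
                if s >= threshold:            # below-threshold neighbors are not counted
                    neigh_scores[n_cid] = max(neigh_scores.get(n_cid, 0.0), s)
        if neigh_scores:
            labels[(r, c)] = max(neigh_scores, key=neigh_scores.get)
        else:
            # no usable neighbor: inherit the previous pixel element's type
            labels[(r, c)] = labels.get((r, c - 1))
    return labels
```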
2. The method according to claim 1, wherein the specific step P1-3 is replaced by: the center point of each pixel element scanned by the unmanned aerial vehicle in P1-1 is displayed in the remote sensing image dimension of the remote sensing hyperspectrum, and clicking any center point displays the nine-direction oblique images shot at that center point, instead of automatic modeling; at this time, using the semantic system, clicking any point in any oblique image obtains the corresponding geographic entity classification information through the ground projection coordinates of that point and displays it by mapping, and the classification information can be made to disappear at any time.
3. The method of claim 2, wherein all vehicles on the road in the nine-direction oblique photographs are deleted before the replacement of the specific step P1-3.
4. The method according to claim 3, wherein the specific deleting method is to fill all pixels at the coordinates where the road is located with preset color pixel values.
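A sketch of the deletion in claims 3-4: every pixel whose coordinates lie on the road is overwritten with a preset color value, which removes the vehicles along with the road surface; the mask source and the color are assumptions.

```python
import numpy as np

def blank_road_pixels(image, road_mask, fill_color=(128, 128, 128)):
    """image     : H x W x 3 array of one oblique photograph
       road_mask : H x W boolean array, True at the coordinates of the road"""
    out = image.copy()
    out[road_mask] = fill_color          # preset color pixel value
    return out
```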
5. The method according to any one of claims 1-4, wherein, after the labeling of the geographic entity semantics of the remote sensing image in the acquired remote sensing hyperspectrum is completed in S3, the method further comprises:
applying the optimal band combinations of the various geographic entity classifications proposed in S2 to the remote sensing hyperspectrum semantically annotated in S3, obtaining the remote sensing combined hyperspectrum R'_i corresponding to the remote sensing hyperspectrum according to the method of S2, and correcting the remote sensing combined hyperspectrum R'_i with the near-ground combined hyperspectrum R_i to obtain the corrected remote sensing combined hyperspectrum R_cor = f(R'_i, R_i), where f(·) is the correction operation that corrects the peak and/or integral intensity of the corresponding characteristic bands of R'_i towards R_i, specifically by fitting each band spectral peak with a Gaussian function to achieve the correction.
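One way to realize the correction R_cor = f(R'_i, R_i): fit each optimal band peak of both spectra with a Gaussian and rescale the remote sensing band so that its fitted peak matches the near-ground one. The single-Gaussian model, the peak-only correction and the SciPy-based fitting are assumptions; the claim only requires that the peak and/or integral intensity of the characteristic bands of R'_i be corrected towards R_i via Gaussian fitting.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(wl, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

def correct_band(wl, remote_vals, ground_vals):
    """Fit one band of R'_i and of R_i with a Gaussian and rescale the
    remote sensing band so its fitted peak matches the near-ground peak."""
    width0 = (wl[-1] - wl[0]) / 4.0
    amp_r, _, _ = curve_fit(gaussian, wl, remote_vals,
                            p0=[remote_vals.max(), wl[np.argmax(remote_vals)], width0])[0]
    amp_g, _, _ = curve_fit(gaussian, wl, ground_vals,
                            p0=[ground_vals.max(), wl[np.argmax(ground_vals)], width0])[0]
    return remote_vals * (amp_g / amp_r)

def correct_combined_spectrum(bands):
    """bands: iterable of (wl, remote_vals, ground_vals), one tuple per
       optimal band; returns the corrected combined hyperspectrum R_cor."""
    return [correct_band(wl, r, g) for wl, r, g in bands]
```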
6. The method according to claim 5, further comprising, after step S3-5:
P1-2: establishing, according to the geographic entity semantic system completed in S3-5, a mapping relation between the ground projection coordinates corresponding to each pixel element and the geographic entity classification in the semantic system;
P1-3: forming the nine-direction oblique photographic data into a three-dimensional model of an oblique three-dimensional monomer set through automatic modeling; at this time, the ground projection coordinates of all pixels in the three-dimensional model under the same E are the ground projection coordinates of the respective pixel elements in P1-2, so that the ground projection coordinates of all pixels in the three-dimensional model have the same mapping relation with the geographic entity classifications in the semantic system as in P1-2;
P1-4: selecting any point in the three-dimensional model of P1-3, acquiring the geographic entity classification in the semantic system corresponding to that point, and displaying the classification information at the spatial position of the model near that point, where the classification information can be made to disappear at any time.
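A sketch of the P1-2/P1-4 lookup: the mapping keys the ground projection coordinates of each pixel element center to its semantic classification, and an arbitrarily picked model point is snapped to the nearest mapped coordinate; the snapping rule and the function names are assumptions.

```python
def build_projection_mapping(pixel_centers, labels):
    """pixel_centers : {(row, col): (E, N) ground projection of the center}
       labels        : {(row, col): geographic entity code assigned in S3-4}"""
    return {pixel_centers[key]: code for key, code in labels.items()}

def classify_model_point(point_en, mapping):
    """Return the semantic classification for an arbitrary picked 3-D model
       point by snapping its ground projection to the nearest mapped center."""
    nearest = min(mapping, key=lambda en: (en[0] - point_en[0]) ** 2 +
                                          (en[1] - point_en[1]) ** 2)
    return mapping[nearest]
```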
7. The method according to claim 5 or 6, wherein the flying heights of all unmanned aerial vehicles in S1-1 are consistent; for the various geographic entities, all unmanned aerial vehicles start flying at the same time according to a predetermined route, and all acquisition is stopped at the same time point;
the prescribed scanning mode in P1-1 is a line scanning and/or column scanning mode;
the prescribed scanning mode in S3-2 is an image line scanning and/or column scanning mode.
8. The method of claim 7, wherein, in P1-1, when the scanning of the current pixel element in the same row and/or column is finished, the scanning is translated by the side length of one pixel element and continues with the next pixel element; and, in S3-2, when the scanning of the current pixel element in the same row and/or column is finished, the scanning is translated by the side length of one pixel element and continues with the next pixel element.
9. The method of claim 7, wherein the specific step P1-3 is replaced by: the center point of each pixel element scanned by the unmanned aerial vehicle in P1-1 is displayed in the remote sensing image dimension of the remote sensing hyperspectrum, and clicking any center point displays the nine-direction oblique images shot at that center point, instead of automatic modeling; at this time, using the semantic system, clicking any point in any oblique image obtains the corresponding geographic entity classification information through the ground projection coordinates of that point and displays it by mapping, and the classification information can be made to disappear at any time.
10. The method of claim 9, wherein all vehicles on the road in the nine-direction oblique photographs are deleted before the replacement of the specific step P1-3.
11. The method of claim 10, wherein the deleting is performed by filling all pixels at the coordinates where the road is located with preset color pixel values.
CN202210860449.XA 2022-05-18 2022-07-21 Geographic entity semantic modeling method based on hyperspectral data and oblique three-dimensional data Active CN115346009B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2022105430316 2022-05-18
CN202210543031 2022-05-18

Publications (2)

Publication Number Publication Date
CN115346009A CN115346009A (en) 2022-11-15
CN115346009B true CN115346009B (en) 2023-04-28

Family

ID=83949240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210860449.XA Active CN115346009B (en) 2022-05-18 2022-07-21 Geographic entity semantic modeling method based on hyperspectral data and oblique three-dimensional data

Country Status (1)

Country Link
CN (1) CN115346009B (en)


Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9619711B2 (en) * 2014-09-29 2017-04-11 Digitalglobe, Inc. Multi-spectral image labeling with radiometric attribute vectors of image space representation components
WO2018081929A1 (en) * 2016-11-01 2018-05-11 深圳大学 Hyperspectral remote sensing image feature extraction and classification method and system thereof
JP6392476B1 (en) * 2018-03-19 2018-09-19 大輝 中矢 Biological tissue analysis apparatus and biological tissue analysis program
CN110852300A (en) * 2019-11-19 2020-02-28 中煤航测遥感集团有限公司 Ground feature classification method, map drawing device and electronic equipment
CN114089787A (en) * 2021-09-29 2022-02-25 航天时代飞鸿技术有限公司 Ground three-dimensional semantic map based on multi-machine cooperative flight and construction method thereof
CN114219958A (en) * 2021-11-18 2022-03-22 中铁第四勘察设计院集团有限公司 Method, device, equipment and storage medium for classifying multi-view remote sensing images
CN114494899A (en) * 2021-12-24 2022-05-13 上海普适导航科技股份有限公司 Rapid classification method for characteristic ground objects of remote sensing image

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106096663A (en) * 2016-06-24 2016-11-09 Changchun Institute of Technology Classifier construction method for hyperspectral remote sensing images based on sparse heterogeneous grouping

Also Published As

Publication number Publication date
CN115346009A (en) 2022-11-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant