CN115346009A - Geographic entity semantic modeling method based on hyperspectral data and oblique three-dimensional data - Google Patents


Info

Publication number: CN115346009A
Application number: CN202210860449.XA
Authority: CN (China)
Prior art keywords: geographic entity, hyperspectral, pixel element, classification
Legal status: Granted
Other languages: Chinese (zh)
Other versions: CN115346009B (en)
Inventors: 刘俊伟 (Liu Junwei), 郭大海 (Guo Dahai), 王志伟 (Wang Zhiwei), 刘亚萍 (Liu Yaping), 赵雅新 (Zhao Yaxin)
Current Assignee: Shanghai Air Remote Information Technology Co., Ltd.
Original Assignee: Shanghai Air Remote Information Technology Co., Ltd.
Application filed by Shanghai Air Remote Information Technology Co., Ltd.
Publication of CN115346009A (application); publication of CN115346009B (grant)
Current legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 - Geographic models
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 - Geographical information databases
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/58 - Extraction of image or video features relating to hyperspectral data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/10 - Terrestrial scenes
    • G06V 20/17 - Terrestrial scenes taken from planes or by drones
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention relates to a geographic entity semantic modeling method based on hyperspectral data and oblique three-dimensional data, which comprises the following steps. S1: select a preset atmospheric environment, collect near-ground hyperspectra R_li under typical geographic entity classifications, perform intra-class spectrum fusion on the collected near-ground hyperspectra, perform oblique photography, and simultaneously collect the remote sensing hyperspectrum R_p of the same geographic area corresponding to the near-ground hyperspectra. S2: from the fused spectrum R_ci under each geographic entity classification, extract the optimal band combination of each classification, and form the near-ground combined hyperspectrum R_i of the optimal band combination by band cutting and splicing. S3: scan the pixel elements of the remote sensing hyperspectral image one by one to form a semantic system; finally, either automatically model the oblique photography data of each scanned pixel element, or use a substitute mode associated with the pixel-element center point in the remote sensing image, to form a mapping from oblique three-dimensional monomer models to geographic entity classifications, thereby completing efficient and accurate hyperspectral-based three-dimensional semantic modeling of geographic entities.

Description

Geographic entity semantic modeling method based on hyperspectral data and oblique three-dimensional data
Technical Field
The invention provides a three-dimensional semantic modeling method for the natural resources industry in the fields of physical and human geography. It relates to semantic labeling using hyperspectra, in particular to a geographic entity semantic modeling method based on hyperspectral data and oblique three-dimensional data, and belongs to the field of geographic landscape semantic indexing.
Background
Hyperspectral data form a three-dimensional cube combining two-dimensional ground-object images with one-dimensional spectra. The prior art uses remote sensing to classify ground objects, in particular crop growth, disaster resistance, and soil composition, mostly through inversion methods that compute the required optical and chemical parameters, and pays little attention to using hyperspectral data for semantic classification of geographic entities.
For hyperspectral data processing, the prior art usually collects a single geographic class, such as crops, so the collected images can easily be processed as a whole. When the study range is expanded to a larger geographic area, however, the various geographic entities are difficult to separate on remote sensing images, so characteristic bands cannot be extracted globally. Moreover, RNN, CNN, SVM, and other improved algorithms require different samples for training; their extraction precision is below 71%, and they greatly increase the computational burden.
In addition, remote sensing hyperspectra are constrained by the atmospheric environment and require complex correction algorithms, which raises the data processing cost and lowers the final semantic annotation efficiency. Correction algorithms are also diverse, and the specific conditions at the time of remote sensing image acquisition cannot always be determined, so unknown and uncontrollable factors must be considered in complex and changeable acquisition environments. A hyperspectral acquisition method that avoids atmospheric constraints is therefore needed.
Using an unmanned aerial vehicle greatly reduces atmospheric factors: given the 120 m altitude limit, and since most cloud cover lies above 200 m, acquiring ground-object images with a UAV avoids atmospheric effects on the images. Prior-art oblique photography with UAVs reconstructs three-dimensional geographic entities through multi-angle shooting. However, prior-art oblique photography covers only five directions (nadir, front, back, left, and right) and lacks angular photography at the four corners of the eight-neighborhood, so the captured information is incomplete.
Disclosure of Invention
Given that the three-dimensional semantic modeling information from prior-art oblique photography of geographic entities is incomplete, the invention considers a new idea of oblique photography under semantic annotation, namely fusing oblique photography data with semantic annotation by hyperspectral means, which mainly comprises the following aspects: first, acquire a remote sensing image of a first geographic area; second, collect the hyperspectra of a second geographic area and extract the hyperspectral characteristic bands of each geographic entity type; third, scan the geographic entity type to which the bands of each pixel in the remote sensing image belong by image pixel scanning, and scan with a UAV based on eight-neighborhood oblique photography, fusing semantic annotation with images to form a data set of oblique three-dimensional monomers.
Based on these considerations, the invention provides a geographic entity semantic modeling method based on hyperspectral data and oblique three-dimensional data, which comprises the following steps:

S1: select a preset atmospheric environment and collect near-ground hyperspectra R_li under typical geographic entity classifications, where i indexes the different geographic entity classifications; perform intra-class spectrum fusion on the collected near-ground hyperspectra to obtain fused spectra R_ci; simultaneously collect the remote sensing hyperspectrum R_p of the same geographic area corresponding to the near-ground hyperspectra;

S2: from the fused spectrum R_ci under each geographic entity classification, extract the optimal band combination of each classification, and form the near-ground combined hyperspectrum R_i of the optimal band combination through band cutting and splicing;

S3: scan the pixel elements of the remote sensing hyperspectrum one by one, compare the hyperspectrum of each scanned pixel element with the optimal combined-band spectrum of each geographic entity classification, determine the geographic entity classification to which the currently scanned pixel element belongs, and map the identified type to the geographic entity classification field to form a semantic system, so that geographic entity semantic annotation of the remote sensing image in the acquired remote sensing hyperspectrum is completed when scanning finishes;
collecting near-surface hyperspectral R under typical geographic entity classification in S1 li The method also comprises the following specific steps of P1-1:
scan the pixel elements of the area covered by the remote sensing hyperspectrum one by one with a UAV carrying an oblique photography device to obtain a plurality of oblique images; record the coordinate of the UAV in the real geographic coordinate system E when each pixel element is scanned, and calculate the spatial coordinate of each pixel element in the oblique image at each angle together with the corresponding ground projection coordinate;
After step S3 is completed, the method further includes the following steps:

P1-2: according to the geographic entity semantic system completed in S3, establish a mapping relation between the ground projection coordinate corresponding to each pixel element and the geographic entity classification in the semantic system;

P1-3: form the oblique photography data into a three-dimensional model of the oblique three-dimensional monomer set through automatic modeling. The ground projection coordinates of all pixels in the three-dimensional model under the same coordinate system E are then the ground projection coordinates of the pixel elements in P1-2, so the same mapping relation as in P1-2 holds between the ground projection coordinates of all pixels in the three-dimensional model and the geographic entity classifications in the semantic system;

P1-4: select any point in the three-dimensional model of P1-3, obtain the geographic entity classification in the semantic system corresponding to that point, and display the classification information at the model's spatial position near the point; the classification information can optionally be dismissed at any time.
Preferably, the specific step P1-3 is replaced by: display the center point of each pixel element scanned by the UAV in P1-1 in the remote sensing image dimension of the remote sensing hyperspectrum; clicking any center point displays the oblique images shot at that center point, replacing automatic modeling. The semantic system is then used as follows: clicking any point in any oblique image retrieves and displays the corresponding geographic entity classification information through the ground projection coordinate of that point, and the classification information can likewise be dismissed at any time;
that is, the multiple (e.g., nine) oblique images of each scanned pixel element themselves serve as a multi-view model, which can be defined as a three-dimensional model obtained by photography; this directly yields a method that saves the several months otherwise spent on modeling.
Preferably, before the replacement of specific step P1-3 is performed, all vehicles on the roads in the oblique photography are deleted. Preferably, the specific deletion method is to fill all pixels at the coordinates where the road is located with a preset color pixel value.
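A minimal sketch of that deletion, assuming the road coordinates are available as a boolean mask and using a hypothetical grey fill value (the patent does not specify the preset color):

```python
import numpy as np

def erase_road_vehicles(image, road_mask, fill_rgb=(128, 128, 128)):
    """Delete vehicles by filling every pixel on road coordinates with a
    preset color: painting the whole road leaves no vehicle pixels behind.
    `fill_rgb` is an assumed preset value."""
    out = image.copy()
    out[road_mask] = fill_rgb          # overwrite all road pixels at once
    return out
```

Because the whole road surface is overwritten, no vehicle segmentation step is needed; the trade-off is that lane markings and road texture are lost as well.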
Preferably, in S3, after the geographic entity semantic labeling of the remote sensing image in the acquired remote sensing hyperspectrum is completed, the method further includes the following step:

apply the optimal band combinations of the various geographic entity classifications provided by S2 to the semantically labeled remote sensing hyperspectrum from S3, obtaining the remote sensing combined hyperspectrum R'_i corresponding to the remote sensing hyperspectrum according to the method of S2; use the near-ground combined hyperspectrum R_i to correct the remote sensing combined hyperspectrum R'_i, obtaining the corrected remote sensing combined hyperspectrum R_cor = f(R'_i, R_i), where f(·) is a correction operation. Preferably, f(·) corrects the peak value and/or integrated intensity of the characteristic bands of R'_i to those of R_i, the correction being achieved by fitting each band's spectral peak with a Gaussian function.
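One plausible sketch of such a correction f(·), fitting each band peak with a Gaussian via Caruana's log-parabola method and rescaling the remote band's peak amplitude to the near-ground reference; the exact correction rule is not spelled out in the text, so this rescaling choice is an assumption:

```python
import numpy as np

def fit_gaussian(wavelengths, intensities):
    """Estimate Gaussian peak parameters (amplitude, center, sigma) by
    Caruana's method: a parabola fitted to the log of the intensities."""
    x0 = wavelengths.mean()
    x = wavelengths - x0                      # center x for numerical stability
    b2, b1, b0 = np.polyfit(x, np.log(np.clip(intensities, 1e-12, None)), 2)
    sigma = np.sqrt(-1.0 / (2.0 * b2))
    center = x0 - b1 / (2.0 * b2)
    amplitude = np.exp(b0 - b1 ** 2 / (4.0 * b2))
    return amplitude, center, sigma

def correct_band(wavelengths, remote_band, ground_band):
    """Rescale a remote-sensing band so its fitted Gaussian peak amplitude
    matches the near-ground reference peak (one plausible choice of f)."""
    a_remote, _, _ = fit_gaussian(wavelengths, remote_band)
    a_ground, _, _ = fit_gaussian(wavelengths, ground_band)
    return remote_band * (a_ground / a_remote)
```

Caruana's method is exact when the band really is Gaussian; for noisy or skewed peaks a nonlinear least-squares fit would be the more robust choice.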
Thus the near-ground hyperspectral data are used both to complete the semantic annotation of each geographic entity and to correct the semantic remote sensing hyperspectrum, improving the efficiency and simplicity of semantic annotation.
Wherein S1 specifically comprises:

S1-1: according to the number N of geographic entity types to be classified, allocate a corresponding number kN of unmanned aerial vehicles, where k = 1, 2, 3, 4, 5; under preset meteorological conditions, the UAVs perform hyperspectral acquisition at multiple collection points of each type of geographic entity along preset flight routes;
the method comprises the following specific steps of when a plurality of acquisition points perform hyperspectral acquisition:
p1-1, adopting an unmanned aerial vehicle carrying an oblique photography device, setting a visual field range, constructing a minimum external square shape of the visual field range, taking the minimum external square as a pixel element, scanning the remote sensing hyperspectrum one by one according to a specified scanning mode to obtain a plurality of oblique images, recording the coordinate of the unmanned aerial vehicle under a real geographic coordinate system E when each pixel element is scanned, and calculating the spatial coordinate of each pixel element in the oblique images at all angles and the corresponding ground projection coordinate;
preferably, the predetermined scanning mode is a line scanning mode and/or a column scanning mode. Preferably, for the same row and/or the same column scanning, when the current pixel element is scanned completely, the side length distance of one pixel element is shifted to continue the next pixel element scanning.
Preferably, the flying heights of all UAVs are consistent, and for all classes of geographic entities all UAVs start flying at the same time along the preset routes and stop all acquisition at the same time point. This ensures that the various geographic entities are collected within the same time period and prevents the acquisition of some geographic entities from falling into two different time periods of differing interval durations, which would keep the hyperspectral data from accurately reflecting the state of a single time period.

S1-2: preprocess the hyperspectra collected at the multiple collection points of each type of geographic entity to obtain the near-ground hyperspectra R_lij under typical geographic entity classifications, where j is the collection point index and each collection point collects one hyperspectrum;
preferably, the pre-treatment comprises: at least one of first derivative, normalization, multivariate scatter correction, standard normal variate transformation, savitzky-Golay smoothing.
S1-3, for each classified interior, carrying out superposition average processing on hyperspectral images collected by a plurality of collection points
Figure BDA0003758201740000041
Wherein, K i Collecting points for the ith type of geographic entity,
Figure BDA0003758201740000042
for each spectrum R lij The intensity at each wavelength of (A) to form R ci Simultaneously collecting remote sensing hyperspectral R of the same geographical area corresponding to the near-ground hyperspectral p
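The preprocessing options and the intra-class superposition averaging can be sketched in NumPy; shown here are the SNV transform, a from-scratch Savitzky-Golay filter (edge samples are left unsmoothed in this sketch), and the class average:

```python
import numpy as np

def snv(spectrum):
    """Standard normal variate transform: zero mean, unit variance per spectrum."""
    return (spectrum - spectrum.mean()) / spectrum.std()

def savgol(spectrum, window=11, order=3):
    """Savitzky-Golay smoothing: fit an `order`-degree polynomial to each
    `window`-point neighbourhood and take its value at the centre point."""
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, order + 1, increasing=True)
    weights = np.linalg.pinv(A)[0]      # least-squares fit evaluated at x = 0
    out = spectrum.astype(float).copy()
    out[half:-half] = np.convolve(spectrum, weights[::-1], mode="valid")
    return out

def class_fusion(samples):
    """Superposition average over the K_i collection-point spectra of one
    class: R_ci = (1 / K_i) * sum_j R_lij."""
    return np.asarray(samples, dtype=float).mean(axis=0)
```

`scipy.signal.savgol_filter` gives the same interior result; the hand-rolled version is shown only to make the least-squares construction explicit.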
Wherein S2 specifically comprises:

S2-1: from the fused spectrum R_ci under each geographic entity classification, extract the optimal band combination B_i of each geographic entity classification, where q_i ∈ [3, 9] is the number of bands in the optimal combination of the i-th geographic entity, and each B_i is expressed as an optimal band combination.

The optimal band combination B_i is obtained as follows. Set the optimal band discrimination index

OIF(C) = ( Σ_{u ∈ C} S_u ) / ( Σ_{u < v; u, v ∈ C} |R_uv| )

where C is a candidate combination of three or four bands, u, v, x, y, z denote hyperspectral band serial numbers of the i-th geographic entity, Q_i is the total number of hyperspectral bands of the i-th geographic entity, S_u is the standard deviation of band u computed over the multiple sampling points, and R_uv is the correlation coefficient between the indexed bands. The sums are taken over all possible three-band and four-band combinations drawn from the Q_i bands; the combination that maximizes the index, i.e. the best three-band or four-band combination, is taken as the optimal band combination;
it can be seen that we do not aim at artificial intelligence algorithms by means of optimal band combination, but rather consider them as a kind of feature that characterizes a specific geographical entity type for identifying the entity type. The first best band combination mode of different geographic entity types is different, and the second integral intensity of each band is also different, so that the method becomes a characteristic basis for identifying the entity type. For most ground features with less complex components, a maximum of four bands is generally sufficient to distinguish from the best band combinations of other different target ground features.
S2-2: by band cutting and splicing, form the near-ground combined hyperspectrum R_i in ascending order of the optimal bands' wavelength values. Cutting deletes all band data other than the optimal bands; splicing connects the optimal band regions head to tail in sequence, with the intensity value at each splice point defined as the baseline value. After splicing, the size and range of the wavelength values spanned by each band's spectral line are defined to remain unchanged, and the peak value at every wavelength other than the splice points is unchanged.
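The cutting-and-splicing step can be sketched as follows (an illustrative reading of S2-2, with the optimal bands given as wavelength windows and the splice points forced onto a baseline):

```python
import numpy as np

def cut_and_splice(wavelengths, intensities, band_windows, baseline=0.0):
    """Cut the optimal band windows out of a full spectrum and splice them
    head to tail in ascending wavelength order; `baseline` is the value
    forced at each splice point."""
    segs_wl, segs_it = [], []
    for lo, hi in sorted(band_windows):
        mask = (wavelengths >= lo) & (wavelengths <= hi)
        wl, it = wavelengths[mask].copy(), intensities[mask].copy()
        it[0] = it[-1] = baseline        # splice points sit on the baseline
        segs_wl.append(wl)
        segs_it.append(it)
    return np.concatenate(segs_wl), np.concatenate(segs_it)
```

The wavelength axis is kept as-is per band, matching the requirement that each band's spanned wavelength range stays unchanged after splicing.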
In S3, scanning the pixel elements of the remote sensing hyperspectrum one by one and comparing the hyperspectrum of each scanned pixel element with the optimal combined-band spectrum of each geographic entity classification, so as to determine the geographic entity classification to which the currently scanned pixel element belongs, specifically comprises:

S3-1: define the pixel element size p as a square region of side length equal to the resolution res ∈ [0.3 m, 1 m] multiplied by a positive integer M ∈ [1, 4], i.e. p = M·res × M·res; establish a geographic entity type database comprising a geographic entity classification field and a geographic entity basic parameter field, the classification field containing a geographic entity code exclusive to each entity classification;
S3-2: scan the remote sensing image in the remote sensing hyperspectrum with pixel element p in the specified scanning mode, and during scanning retrieve the corresponding hyperspectrum R_p^t within each pixel element, where t is the serial number of the currently scanned pixel element;
Preferably, the specified scanning mode is an image row scanning and/or column scanning mode. Preferably, within the same row and/or column, when scanning of the current pixel element finishes, shift by one pixel-element side length and continue scanning the next pixel element;
S3-3: compare the similarity between the spectral lines of R_p^t at the bands corresponding to each class of geographic entity R_i and the spectral lines of the corresponding class R_i; when the highest similarity found exceeds a preset value of 80-90%, identify the currently scanned pixel element p as the corresponding geographic entity type;
when the highest similarity is below the preset value, the pixel elements with complete image pixels within the eight-neighborhood of the current pixel element p are analyzed as follows:
if a geographic entity type has already been identified within the eight-neighborhood, compare the similarity between the spectral lines of R_p^t at the bands of the near-ground combined hyperspectrum of the identified geographic entity and the combined hyperspectral lines of that identified classification; if no geographic entity type has been determined within the eight-neighborhood, continue identifying the geographic entity types of the undetermined neighbors;

once all neighbor types have been determined, compute the similarity between the spectral lines of R_p^t at the bands of the near-ground combined hyperspectra of the geographic entities represented by the neighbors and the near-ground combined hyperspectral lines of those entities, and take the geographic entity type represented by the neighbor with the maximum similarity as the geographic entity type identified for the current pixel element p. Neighbor pixel elements whose geographic entity types were identified in this way are regarded as already scanned and are not scanned or identified again in the subsequent scanning process;

if at least one neighbor in the eight-neighborhood yields a similarity below the preset value while the geographic entity type it represents is being identified, it is excluded from the current similarity comparison, and only neighbors above the preset value are taken as similarity comparison objects;

if none of the eight neighbors counts as a current similarity comparison object, define the geographic entity type of the current pixel element p as the geographic entity type identified for the previous or the next pixel element.
The similarity comparison method comprises the following steps:

S3-3-1: apply step S2-2 to R_p^t to obtain the combined hyperspectrum R_pi^t corresponding to R_p^t, where i indicates that R_p^t is combined according to the formation of R_i for the i-th class of geographic entity;

S3-3-2: compare the integrated intensity value I^t_k of each band k among the optimal combined bands of R_pi^t with the integrated intensity value I_k of the corresponding band of R_i, and define the reciprocal of the standard deviation of their differences,

sim = 1 / sqrt( (1 / q_i) * Σ_{k=1}^{q_i} (I^t_k - I_k)^2 ),

as the similarity.
Mapping the identified type to the geographic entity classification field to form the semantic system specifically comprises:

S3-4: assign the corresponding geographic entity code to the classification field identified for each scanned pixel element;

S3-5: assign the spatio-temporal coordinate of the center of each pixel element to the corresponding geographic entity basic parameter field; the basic parameter fields with assigned spatio-temporal coordinates and the classification fields with assigned geographic entity codes of all scanned pixel elements together form the semantic system.
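The semantic-system record produced by S3-4 and S3-5 can be sketched as a simple data structure (field names and the code format are illustrative, not the patent's schema):

```python
from dataclasses import dataclass

@dataclass
class SemanticRecord:
    """One semantic-system entry per scanned pixel element."""
    entity_code: str     # geographic entity classification field (S3-4)
    center_x: float      # spatio-temporal coordinate of the element center (S3-5)
    center_y: float
    timestamp: float     # acquisition time of the pixel element

# One record per scanned pixel element; values here are invented examples.
records = [SemanticRecord("GE-07", 121.47, 31.23, 1658300000.0)]
```

In practice such records would live in the geographic entity type database, keyed by the pixel element's ground projection coordinate.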
Preferably, after step S3-5, the method further comprises:

P1-2: according to the geographic entity semantic system completed in S3-5, establish a mapping relation between the ground projection coordinate corresponding to each pixel element and the geographic entity classification in the semantic system;

P1-3: form the oblique photography data into a three-dimensional model of the oblique three-dimensional monomer set through automatic modeling. The ground projection coordinates of all pixels in the three-dimensional model under the same coordinate system E are then the ground projection coordinates of the pixel elements in P1-2, so the same mapping relation as in P1-2 holds between the ground projection coordinates of all pixels in the three-dimensional model and the geographic entity classifications in the semantic system;

P1-4: select any point in the three-dimensional model of P1-3, obtain the geographic entity classification in the semantic system corresponding to that point, and display the classification information at the model's spatial position near the point; the classification information can optionally be dismissed at any time.
Preferably, the specific step P1-3 is replaced by: display the center point of each pixel element scanned by the UAV in P1-1 in the remote sensing image dimension of the remote sensing hyperspectrum; clicking any center point displays the oblique images shot at that center point, replacing automatic modeling.
Preferably, all vehicles on the road in oblique photography are deleted before the P1-3 specific step replacement is performed.
Preferably, the specific deletion method is to fill all pixels of the coordinates where the road is located with preset color pixel values.
Advantageous effects
1. A machine learning algorithm is avoided: the combined hyperspectrum under the optimal 3-4-band combination serves as the identification feature, the geographic entity data are classified and partitioned, the semantic annotation algorithm is simplified, and annotation efficiency is improved;

2. The idea of pixel element scanning combining remote sensing images with oblique photography is adopted: entity classifications within pixel elements are identified from the combined hyperspectrum, eight-neighborhood correction is applied, the extent of each geographic classification is accurately extracted from the remote sensing hyperspectrum, a semantic system linking pixel classification and geographic coding is established, and semantic annotation is completed. At the same time, compared with the five-direction oblique photography of the prior art, oblique image acquisition over the complete eight-neighborhood is added, which avoids possible blind angles and supplies the neighborhood information otherwise missing from the modeling features.
Drawings
FIG. 1 is a schematic diagram of the oblique photography pixel-element scanning method based on the minimum circumscribed square of the field-of-view range, together with the data structure of the oblique photography database,

FIG. 2 is a schematic diagram of the near-ground combined hyperspectral acquisition process for twelve classes of geographic entities,

FIG. 3 is a schematic diagram of the cutting and splicing process for the near-ground combined hyperspectrum, in which FIG. 3a shows the bands to cut, FIG. 3b the result after cutting, FIG. 3c the splicing manner, and FIG. 3d the result after splicing,

FIG. 4 is the area indicated by the box in a typical remote sensing true-color map of the demonstration area of FIG. 1,

FIG. 5 is a flow chart of the semantic system forming process,

FIG. 6 is the mapping between ground projection coordinates and geographic entity classifications in the oblique photography database,

FIG. 7 is a visual ground-object classification effect diagram, rendered in different colors, formed after all thirteen classes of geographic entities are classified and semantically labeled.
Detailed Description
Example 1
FIG. 1 shows the true-color image dimension of the hyperspectral remote sensing of typical ground objects in the demonstration area, divided into thirteen categories: urban roads, rural roads, greenhouses, buildings, rivers, ponds, woodlands, grasslands, paddy rice, corn, non-main crops, bare land and other ground objects. Rivers and ponds are treated together as water bodies, leaving twelve classes. 12-36 unmanned aerial vehicles carrying hyperspectral equipment are arranged to collect the hyperspectra of the twelve classes of ground entities respectively; 1000-10000 hyperspectra are collected within 1 hour, at an acquisition frequency of 0.36-3.6 spectra per second.
The flying heights of all unmanned aerial vehicles are kept consistent within the range of 100-120 m. For all classes of geographic entities, all unmanned aerial vehicles start flying at the same time along the predetermined routes and stop all acquisition 1 hour after starting. The hyperspectra collected at the 5000 collection points of each class of geographic entity are subjected to first-derivative preprocessing, normalization, multivariate scattering correction and Savitzky-Golay smoothing, and, as shown in FIG. 2, the near-surface hyperspectra under the typical geographic entity classifications {R_l1, R_l2, R_l3, R_l4, R_l5, R_l6, R_l7, R_l8, R_l9, R_l10, R_l11, R_l12} are obtained, where each element comprises the hyperspectral samples collected at 5000 collection points, one hyperspectrum per collection point;
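The preprocessing chain named above (first derivative, normalization, multivariate scattering correction, Savitzky-Golay smoothing) can be sketched as follows. This is a minimal illustration only; the ordering of the steps, the window length and the polynomial order are assumptions, not values taken from the patent:

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess_spectra(spectra, window=11, poly=3):
    """Preprocess a (n_samples, n_bands) array of reflectance spectra.

    Chain: Savitzky-Golay smoothing -> multiplicative scatter correction
    -> first derivative -> per-spectrum min-max normalization (one
    plausible ordering of the steps named in the text).
    """
    s = savgol_filter(spectra, window_length=window, polyorder=poly, axis=1)

    # Multiplicative scatter correction: regress each spectrum against
    # the mean spectrum and remove the fitted offset and slope.
    mean = s.mean(axis=0)
    corrected = np.empty_like(s)
    for k, row in enumerate(s):
        slope, offset = np.polyfit(mean, row, 1)
        corrected[k] = (row - offset) / slope

    # First derivative along the wavelength axis.
    deriv = np.gradient(corrected, axis=1)

    # Min-max normalization per spectrum.
    lo = deriv.min(axis=1, keepdims=True)
    hi = deriv.max(axis=1, keepdims=True)
    return (deriv - lo) / (hi - lo + 1e-12)
```

Each of the four steps is independent, so a different ordering (e.g., smoothing last) would be an equally valid reading of the text.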
When the near-ground hyperspectra are collected, as shown in FIG. 1, an unmanned aerial vehicle carrying an oblique photographing device is used. A field-of-view range is set by changing the depression angles of the oblique photographing devices in the eight neighborhoods of the central downward-looking photographing device, and the minimum circumscribed square of this field-of-view range is constructed and taken as the pixel element. Under the viewing angle of FIG. 1, the remote sensing hyperspectrum is scanned pixel element by pixel element, row by row, from right to left and from top to bottom; after one pixel element is scanned, the device translates by one square side length to scan the next pixel element.
In practical applications, the oblique photographing device mounted on the unmanned aerial vehicle may be a nine-lens camera covering nine directions; FIG. 1 illustrates this case, in which the field-of-view range in nine directions is set by changing the depression angles of the oblique photographing devices in the eight neighborhoods of the central downward-looking device, and the minimum circumscribed square of this nine-direction field of view is constructed; the acquired oblique photographing data accordingly correspond to nine-direction oblique photographs arranged in a nine-square grid. Of course, besides the above unmanned aerial vehicle equipped with a nine-direction oblique photographing device, an unmanned aerial vehicle equipped with a five-direction oblique photographing device may also be used.
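The right-to-left, top-to-bottom scan with a one-side-length translation per step can be sketched as a simple generator. The planar-grid simplification and all names are illustrative assumptions, not taken from the patent:

```python
def scan_pixel_elements(width_m, height_m, side_m):
    """Yield (row, col, x0, y0) for each square pixel element of an
    area width_m x height_m, scanning right-to-left within a row and
    top-to-bottom across rows, stepping by one side length each time
    (the translation described in the text). (x0, y0) is the lower-left
    corner of the pixel element in a local planar frame."""
    cols = int(width_m // side_m)
    rows = int(height_m // side_m)
    for r in range(rows):
        for c in range(cols - 1, -1, -1):  # right to left within the row
            x0 = c * side_m
            y0 = height_m - (r + 1) * side_m  # top row scanned first
            yield r, c, x0, y0
```

For an 8 m x 8 m area with a 4 m pixel element, the generator visits four elements, starting at the top-right square.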
Each pixel-element scan obtains 8 oblique photographs plus 1 downward-looking photograph. Take the enlarged image at the upper right of FIG. 1, showing the first pixel-element scan, as an example. There are 9 angle ranges of oblique photography over the circumscribed square of the nine-square grid. For the image captured by the photographing device in each angle range, the coordinates of the unmanned aerial vehicle (i.e., the center of the central downward-looking photographing range) under the real geographic coordinate system E at the moment each pixel element is scanned are recorded, and the spatial coordinates of each pixel in the 9-angle oblique images and the corresponding ground projection coordinates (hereinafter simply "coordinates") are calculated. A dedicated oblique photography database is built from these data: for each of the sub-libraries a-i it stores, for every scanned pixel element, the spatial coordinates, the corresponding ground projection coordinates and an additional image storage unit. It will be appreciated that each sub-library, e.g. the downward-looking images represented by e, includes the spatial coordinates of the downward-looking image and its pixels for every scanned pixel element, together with the corresponding ground projection coordinates; thus e and the other sub-libraries are in fact multi-dimensional arrays. For coordinate storage, the first dimension consists of data elements arranged in the scanning order of the pixel elements (e.g., data element 1 and data element 2 in FIG. 1 correspond to the two scanned pixel elements in that figure), and the second dimension consists of the coordinates stored within each first-dimension data element, which may be arranged in the extraction order of the image within the pixel element; for example, a two-dimensional array stores the coordinates corresponding to the position of each pixel in the image (as shown in the structural analysis of data element 1 in FIG. 1, where the three dots denote omission).
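One plausible in-memory shape for the dedicated oblique photography database described above, nine direction sub-libraries a-i, each a scan-ordered list of data elements holding spatial coordinates, ground projection coordinates and an image reference, is sketched below; the record layout and field names are assumptions introduced for illustration:

```python
from dataclasses import dataclass

@dataclass
class DataElement:
    """One scanned pixel element within a direction sub-library (a-i)."""
    drone_xyz: tuple   # drone position in the real geographic frame E
    pixel_coords: list # per-pixel spatial coordinates (2-D grid, scan order)
    ground_proj: list  # per-pixel ground projection coordinates
    image_path: str = ""  # reference into the additional image storage unit

# Nine sub-libraries keyed 'a'..'i' (e = central downward-looking view);
# each sub-library is a list ordered by the pixel-element scan sequence,
# so sub-library + list index + pixel grid index form the multi-dimensional
# array described in the text.
oblique_db = {k: [] for k in "abcdefghi"}
```

A lookup then walks sub-library, scan index, and pixel index in that order, mirroring the first/second dimensions described above.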
For each of the twelve classes, the hyperspectra collected at the 5000 collection points are subjected to superposition averaging to obtain {R_c1, R_c2, R_c3, R_c4, R_c5, R_c6, R_c7, R_c8, R_c9, R_c10, R_c11, R_c12}, where each element contains only one fused hyperspectrum.
Simultaneously, the remote sensing hyperspectrum R_p of the same geographic area corresponding to the near-ground hyperspectra is also collected.
Example 2
From the fused spectra {R_c1, R_c2, ..., R_c12} under the geographic entity classifications, the optimal band combination of each classification, B_i = {b_i1, b_i2, ..., b_iq_i}, is extracted, where q_i ∈ [3,9] is the number of bands in the optimal band combination of the i-th class of geographic entity and each B_i represents one optimal band combination.
The optimal band combination B_i is obtained as follows. An optimal-band discrimination index of the optimum-index-factor form is set:
F_i = (Σ_u S_u) / (Σ_{u<v} |R_uv|)   or   F_i = (Σ_x S_x) / (Σ_{x<y<z} |R_xyz|),
where u, v, x, y and z are hyperspectral band serial numbers of the i-th class of geographic entity, Q_i = 7 is the total number of hyperspectral bands of the i-th class of geographic entity, S is the standard deviation calculated from the hyperspectral bands of the plurality of sampling points, and R_uv, R_xyz are the correlation coefficients between the bands indexed by the respective subscripts; the summation symbols run over all admissible index combinations of the respective order. Taking the maximum of F_i, the corresponding band combinations, i.e., three-band, four-band, up to n-band combinations, are the optimal band combinations. The optimal combined bands {B_1, B_2, ..., B_12} of the twelve geographic entity classifications are thus obtained.
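Assuming the standard optimum-index-factor (OIF) reading of the discrimination index, per-band standard deviations divided by inter-band correlation magnitudes, an exhaustive search over band combinations can be sketched as follows; the function name and the brute-force search are illustrative assumptions:

```python
import itertools
import numpy as np

def best_band_combination(spectra, n_bands=3):
    """Pick the band combination maximizing an OIF-style index
    F = sum(stddev of each band) / sum(|corr| over band pairs),
    one common reading of the discrimination index in the text.

    spectra: (n_samples, n_total_bands) array of sample spectra.
    Returns (band index tuple, index value).
    """
    S = spectra.std(axis=0)                 # per-band standard deviation
    R = np.corrcoef(spectra, rowvar=False)  # band-by-band correlation matrix
    best, best_f = None, -np.inf
    for combo in itertools.combinations(range(spectra.shape[1]), n_bands):
        num = S[list(combo)].sum()
        den = sum(abs(R[u, v]) for u, v in itertools.combinations(combo, 2))
        f = num / (den + 1e-12)
        if f > best_f:
            best, best_f = combo, f
    return best, best_f
```

With q_i ∈ [3,9] and a few hundred total bands the exhaustive search becomes expensive; in practice a greedy or heuristic selection would likely replace the full enumeration.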
If, for one of the classes of geographic entities, such as a house, the extremal value of F_i is attained by three bands, the optimal band combination of that geographic entity type contains three bands; for rice, the corresponding optimal combination contains four bands.
FIG. 3a shows one fused hyperspectrum R_ci among the geographic entity types. Three spectral peaks, numbered 1, 2 and 3, lie in the optimal combined bands. The spectral data between peak 2 and peak 3 are cut away, giving the cut state shown in FIG. 3b; the peaks are then spliced in the wavelength order of the optimal bands, i.e., peak 1, peak 2, peak 3. Note that the leftmost data point of the peak-3 band circled in FIG. 3c is larger than the baseline value, so after splicing this data point is forcibly set to the baseline value, coinciding with the baseline value at the right end of the peak-2 band, which yields the spliced state of FIG. 3d. Splicing shifts peak 3 toward peaks 1 and 2; apart from the leftmost value of its band being set to the baseline, the size and range of the wavelength values spanned by the peak-3 spectral line remain as originally defined, and are therefore no longer the wavelength range indicated by the wavelength axis.
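The cut-and-splice operation, deleting the data between the optimal bands, concatenating the retained segments in wavelength order, and forcing each splice point onto the baseline, can be sketched as follows; the function name and the exclusive-stop index convention are assumptions:

```python
import numpy as np

def cut_and_splice(intensity, band_ranges, baseline=0.0):
    """Cut the listed band ranges out of a spectrum and splice them
    end-to-end in wavelength order. The first point of every segment
    after the first is forced to the baseline value, mirroring the
    splice rule described for peak 3 in FIG. 3c/3d.

    intensity: 1-D intensity array.
    band_ranges: [(start, stop), ...] with stop exclusive, already
    sorted by wavelength.
    """
    segments = []
    for k, (a, b) in enumerate(band_ranges):
        seg = intensity[a:b].astype(float).copy()
        if k > 0:
            seg[0] = baseline  # force the splice point onto the baseline
        segments.append(seg)
    return np.concatenate(segments)
```

Note that only the intensity at the splice point is modified; all other samples keep their original values, matching the statement that peak values away from the splice are unchanged.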
For the other geographic entity types the same method is applied, finally giving the near-ground combined hyperspectra {R_1, R_2, R_3, R_4, R_5, R_6, R_7, R_8, R_9, R_10, R_11, R_12}, as shown in FIG. 2.
Example 3
This embodiment describes a method for classifying geographic entities based on pixel-element scanning of remote sensing images. FIG. 4a shows the demonstration region of FIG. 1, namely the area indicated by the box in the typical true-color map, as one embodiment of pixel-element scanning. The pixel-element size is set to 4 m x 4 m with a resolution of 1 m, covering the rice region and an adjacent area of miscellaneous non-main crops in FIG. 4a. The right side of FIG. 4b shows the scan of the area near the boundary between the rice and the non-main crops, with scanning performed from right to left and from top to bottom. As shown in FIG. 4b, when a pixel element passes the position of element 1 in the figure, the hyperspectrum within that pixel element, shown in FIG. 4c, is called and compared with the near-ground combined hyperspectra {R_1, R_2, ..., R_12} by calculating the similarity of the spectral peaks of the corresponding bands, and the rice type is determined. Indeed, pixel element 1 lies completely within the rice area.
However, pixel element 1 in FIG. 4b is only the current starting scanning position. When the scan reaches the next pixel element 2, that element crosses the boundary between the rice and the non-main crops: the similarity of the combined hyperspectrum of pixel element 2 in FIG. 4d to R_8, which represents rice, is the largest among all near-ground combined hyperspectra but below the preset value of 80%. The eight neighborhoods of pixel element 2 shown in FIG. 4b are therefore examined: pixel elements 4 and 7 lie completely within the rice area, pixel elements 6 and 9 lie entirely within the non-main-crop area and have the lowest similarity, and only pixel elements 5 and 8 straddle the boundary like element 2 itself. If calculation shows the similarity to pixel element 8 to be comparatively large, the type of pixel element 8 is taken; since the similarity of pixel element 8 to R_8 was already calculated to exceed 80% and it is classified as rice, the type of pixel element 2 is likewise defined as rice.
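The eight-neighborhood fallback described above, accept the top match when it clears the preset similarity threshold, otherwise borrow the best-matching class among already-identified neighbors, can be sketched as follows. This is a simplified illustration that assumes the similarity values have already been computed:

```python
def classify_with_neighborhood(similarity, neighbor_labels, threshold=0.8):
    """Decide a pixel element's geographic entity class.

    similarity: dict class_name -> similarity of this pixel element
        against that class's near-ground combined hyperspectrum.
    neighbor_labels: classes already identified in the 8-neighborhood.
    Returns the top class if it clears the threshold; otherwise the
    best-matching class among identified neighbors; None when no
    neighbor qualifies (defer to the previous/next pixel element).
    """
    top = max(similarity, key=similarity.get)
    if similarity[top] >= threshold:
        return top
    # Fall back to the best-matching class among identified neighbors.
    candidates = [c for c in neighbor_labels if c in similarity]
    if not candidates:
        return None
    return max(candidates, key=lambda c: similarity[c])
```

In the FIG. 4 example, pixel element 2's top similarity (to rice) is below 0.8, but rice is present among the identified neighbors, so the fallback still returns rice.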
If pixel elements of 1 m x 1 m, equal to the resolution, are used instead, one original pixel element is divided into four along the scanning direction: the leftmost quarter lies completely within the non-main crops while the remaining 3/4 lies substantially within the rice range, so the two types are distinguished more finely within the area of pixel element 2. It can be seen that the smaller the pixel-element size, the finer the recognition.
As shown in FIG. 5, the classification field identified for each scanned pixel element is assigned the corresponding geographic entity code, and the spatio-temporal coordinate of the center of each pixel element is assigned to the corresponding geographic entity basic parameter field; the geographic entity basic parameter fields carrying the spatio-temporal coordinates of every scanned pixel element, together with the classification fields carrying the geographic entity codes, form the semantic system, thereby completing the semantic labeling of the geographic entity types of the remote sensing image of embodiment 1.
According to the completed geographic entity semantic system, the mapping relation between the ground projection coordinates corresponding to each pixel element and the geographic entity classifications in the semantic system is established;
as shown in FIG. 6, taking sub-library e of the oblique photography database in FIG. 1 as an example, one of the scanned pixel elements is called; each ground projection coordinate is mapped to the geographic entity basic parameter field in the semantic system to find the corresponding coordinate, and these coordinates are associated, by pointing, with the corresponding geographic entity classification in the classification field, thereby establishing the mapping relation. That is, the geographic entity classification to which a coordinate belongs can be found from the ground projection coordinates in an oblique image.
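The mapping from ground projection coordinates to geographic entity classifications can be sketched as a quantized lookup table; the grid quantization step and all names are assumptions introduced for illustration:

```python
def build_coord_class_map(pixel_records, grid=1.0):
    """Map quantized ground-projection coordinates to geographic entity
    codes, so that any point in an oblique image (or the 3-D model)
    can be resolved to its classification.

    pixel_records: iterable of ((x, y), entity_code) pairs, e.g. the
    ground projection coordinate of each pixel and the code assigned
    to its pixel element."""
    table = {}
    for (x, y), code in pixel_records:
        key = (round(x / grid), round(y / grid))
        table[key] = code
    return table

def lookup(table, x, y, grid=1.0):
    """Return the entity code at ground coordinate (x, y), or None."""
    return table.get((round(x / grid), round(y / grid)))
```

Quantizing to the scan resolution keeps the table small while letting any pixel of the model query its class in constant time.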
The oblique photography data are formed, through automatic modeling, into a three-dimensional model of an oblique three-dimensional monomer set; the ground projection coordinates corresponding to all pixels in the three-dimensional model under the same coordinate system E are the ground projection coordinates of each pixel element in P1-2, so the same mapping relation as described for FIG. 6 exists between the ground projection coordinates of all pixels in the three-dimensional model and the geographic entity classifications in the semantic system;
any point in the three-dimensional model is then selected, the geographic entity classification in the semantic system corresponding to that point is obtained, the classification information is displayed at the spatial position of the model near that point, and the displayed classification information can be dismissed at any time.
Preferably, all pixels at the coordinates where roads are located in all oblique photographic images are filled with a preset color pixel value. The specific steps of the automatic modeling may then be replaced as follows: the center point of each pixel element scanned by the unmanned aerial vehicle in embodiment 1 is displayed in the remote sensing image dimension of the remote sensing hyperspectrum, and clicking any center point displays the oblique image captured at that point (such as an image in the a-i sub-libraries in FIG. 1), replacing automatic modeling.
Also included thereafter are:
applying the optimal band combinations of the twelve classes of geographic entity classification to the semantically labeled remote sensing hyperspectrum in S3 and obtaining, by the method of S2, the remote sensing combined hyperspectrum R'_i corresponding to the remote sensing hyperspectrum; the near-ground combined hyperspectrum R_i is used to correct the remote sensing combined hyperspectrum R'_i, giving the corrected remote sensing combined hyperspectrum R_cor = f(R'_i, R_i), where f(·) is the correction operation that corrects the peak value and/or integrated intensity of each characteristic band of R'_i toward that of R_i; the correction is specifically realized by fitting each band's spectral peak with a Gaussian function.
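One plausible form of the Gaussian-fit correction f(R'_i, R_i), fit a Gaussian to the band peak in both spectra and rescale the remote peak so its fitted amplitude matches the near-ground one, is sketched below. The amplitude-only rescaling is an assumption; the patent leaves the exact form of f(·) open:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma):
    """Gaussian peak model used for both spectra."""
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def correct_band_peak(wl, remote, ground):
    """Fit a Gaussian to the remote-sensing band spectrum and to the
    near-ground band spectrum, then rescale the remote spectrum so its
    fitted peak amplitude matches the near-ground one.

    wl: wavelengths of the band; remote, ground: intensity arrays."""
    p0_r = [remote.max(), wl[np.argmax(remote)], (wl[-1] - wl[0]) / 6]
    pr, _ = curve_fit(gauss, wl, remote, p0=p0_r)
    p0_g = [ground.max(), wl[np.argmax(ground)], (wl[-1] - wl[0]) / 6]
    pg, _ = curve_fit(gauss, wl, ground, p0=p0_g)
    return remote * (pg[0] / pr[0])  # scale amplitude toward ground truth
```

A variant correcting integrated intensity instead would scale by the ratio of the fitted areas (amplitude times sigma) rather than the amplitudes alone.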
According to the above method, semantic labels for the twelve classes of geographic entity types are obtained; FIG. 7 is the visual ground-object classification effect diagram, rendered in different colors, formed after semantic labeling once all thirteen classes of geographic entities are classified.

Claims (12)

1. The geographic entity semantic modeling method based on the hyperspectral data and the oblique three-dimensional data is characterized by comprising the following steps of:
S1, selecting a preset atmospheric environment, collecting the near-ground hyperspectra R_li under typical geographic entity classifications, where i denotes the serial numbers of the different geographic entity classifications, and performing intra-classification spectrum fusion processing on the collected near-ground hyperspectra to obtain the fused spectra R_ci; simultaneously, the remote sensing hyperspectrum R_p of the same geographic area corresponding to the near-ground hyperspectra is also collected;
S2, extracting, from the fused spectrum R_ci under each geographic entity classification, the optimal band combination of that classification, and forming, by band cutting and splicing, the near-ground combined hyperspectrum R_i of the optimal band combination;
S3, scanning the pixel elements of the remote sensing hyperspectrum one by one, comparing the hyperspectrum of each scanned pixel element with the spectrum of the optimal combined bands of each geographic entity classification to determine the geographic entity classification to which the currently scanned pixel element belongs, and mapping the identified type to the geographic entity classification field to form a semantic system, so that labeling of the geographic entity semantics of the remote sensing image in the acquired remote sensing hyperspectrum is completed when the scanning finishes; wherein,
collecting the near-ground hyperspectra R_li under typical geographic entity classifications in S1 further comprises the specific step P1-1: adopting an unmanned aerial vehicle carrying an oblique photographing device to scan, pixel element by pixel element, the area where the remote sensing hyperspectrum is located to obtain a plurality of oblique images, recording the coordinates of the unmanned aerial vehicle under the real geographic coordinate system E when each pixel element is scanned, and calculating the spatial coordinates of each pixel element in the oblique images of each angle and the corresponding ground projection coordinates;
after step S3 is completed, the following steps are also included:
p1-2, establishing a mapping relation between the ground projection coordinate corresponding to each pixel element and the geographic entity classification in the semantic system according to the geographic entity semantic system finished in S3;
the P1-3 forms the oblique photography data into a three-dimensional model of an oblique three-dimensional monomer set through automatic modeling, and at the moment, the ground projection coordinates corresponding to all pixels in the three-dimensional model under the same E are the ground projection coordinates of each pixel element in the P1-2, so that the same mapping relation as in the P1-2 exists between the ground projection coordinates of all pixels in the three-dimensional model and the geographic entity classification in a semantic system;
and P1-4, selecting any point in the three-dimensional model of P1-3, acquiring the geographic entity classification in the semantic system corresponding to that point, and displaying the classification information at the spatial position of the model near that point, the displayed classification information being dismissible at any time.
2. The method according to claim 1, wherein the specific steps of P1-3 are replaced by: the center point of each pixel element scanned by the unmanned aerial vehicle in P1-1 is displayed in the remote sensing image dimension of the remote sensing hyperspectrum, and clicking any center point displays the oblique image captured at that point, replacing automatic modeling; at this time, using the semantic system, clicking any point in any oblique image maps, through the ground projection coordinate of that point, to the corresponding geographic entity classification information, which is displayed and can be dismissed at any time.
3. The method of claim 2, wherein all vehicles on the road in the oblique photography are deleted before performing the P1-3 specific step replacement.
4. The method according to claim 3, wherein the specific deletion method is to fill all pixels of the coordinates where the road is located with a preset color pixel value.
5. The method according to any one of claims 1 to 4, wherein S3, after the labeling of the geographic entity semantics of the remote sensing image in the acquired remote sensing hyperspectrum, further comprises:
applying the optimal band combinations of the various geographic entity classifications provided by S2 to the semantically labeled remote sensing hyperspectrum in S3, and obtaining, by the method of S2, the remote sensing combined hyperspectrum R'_i corresponding to the remote sensing hyperspectrum; using the near-ground combined hyperspectrum R_i to correct the remote sensing combined hyperspectrum R'_i, obtaining the corrected remote sensing combined hyperspectrum R_cor = f(R'_i, R_i), where f(·) is the correction operation that corrects the peak value and/or integrated intensity of each characteristic band of R'_i toward that of R_i, the correction being realized by fitting each band's spectral peak with a Gaussian function.
6. The geographic entity semantic modeling method based on hyperspectral data and oblique three-dimensional data is characterized by comprising the following steps:
S1-1, according to the number N of geographic entity types to be classified, allocating a corresponding number kN of unmanned aerial vehicles, where k = 1, 2, 3, 4 or 5, the plurality of unmanned aerial vehicles performing hyperspectral acquisition at a plurality of collection points of the various types of geographic entities along predetermined flight routes under preset meteorological conditions;
the method comprises the following specific steps of when a plurality of acquisition points perform hyperspectral acquisition:
p1-1, adopting an unmanned aerial vehicle carrying an oblique photography device, setting a visual field range, constructing a minimum external square of the visual field range, taking the minimum external square as a pixel element, scanning the remote sensing hyperspectrum one by one according to a specified scanning mode to obtain a plurality of oblique images, recording the coordinate of the unmanned aerial vehicle under a real geographic coordinate system E when each pixel element is scanned, and calculating the spatial coordinate of each pixel element in the oblique images at various angles and the corresponding ground projection coordinate;
s1-2, preprocessing hyperspectral collected by a plurality of collection points of each type of geographic entity to obtain near-ground hyperspectral R under typical geographic entity classification lij J is the number of acquisition points, and each acquisition point acquires a hyperspectral image;
the pretreatment comprises the following steps: at least one of first derivative, normalization, multivariate scatter correction, standard normal variate transformation, savitzky-Golay smoothing.
S1-3, performing superposition averaging on the hyperspectra acquired at the plurality of collection points within each classification:
R_ci = (1/K_i) · Σ_{j=1}^{K_i} R_lij,
where K_i is the number of collection points of the i-th class of geographic entity and R_lij is the intensity of each spectrum at each wavelength, thereby forming R_ci; simultaneously, the remote sensing hyperspectrum R_p of the same geographic area corresponding to the near-ground hyperspectra is also collected;
S2-1, extracting, from the fused spectrum R_ci under each geographic entity classification, the optimal band combination B_i = {b_i1, b_i2, ..., b_iq_i} of each classification, where q_i ∈ [3,9] is the number of bands in the optimal band combination of the i-th class of geographic entity and each B_i represents one optimal band combination; wherein obtaining the optimal band combination B_i comprises: setting an optimal-band discrimination index of the optimum-index-factor form
F_i = (Σ_u S_u) / (Σ_{u<v} |R_uv|)   or   F_i = (Σ_x S_x) / (Σ_{x<y<z} |R_xyz|),
where u, v, x, y and z are hyperspectral band serial numbers of the i-th class of geographic entity, Q_i is the total number of hyperspectral bands of the i-th class of geographic entity, S is the standard deviation calculated from the hyperspectral bands of the plurality of sampling points, and R_uv, R_xyz are the correlation coefficients between the bands indexed by the respective subscripts, the summation symbols running over all admissible index combinations of the respective order; and taking the band combination that maximizes F_i, i.e., a three-band or four-band combination, as the optimal band combination;
S2-2, forming, by band cutting and splicing, the near-ground combined hyperspectrum R_i arranged in order of the wavelength values of the optimal bands, wherein the cutting deletes all band data except the optimal bands, and the splicing connects the optimal band regions head-to-tail in sequence at splice points, the intensity value at each splice point being forcibly defined as the baseline value; after splicing, the size and range of the wavelength values spanned by each band's spectral line remain defined and unchanged, and the peak value at every wavelength other than the splice points is unchanged;
S3, scanning the pixel elements of the remote sensing hyperspectrum one by one and comparing the hyperspectrum of each scanned pixel element with the optimal combined-band spectrum of each geographic entity classification to determine the geographic entity classification to which the currently scanned pixel element belongs, and mapping the identified type to the geographic entity classification field to form a semantic system, so that the geographic entity semantic labeling of the remote sensing image in the acquired remote sensing hyperspectrum is completed when the scanning finishes; wherein scanning the pixel elements one by one and determining the geographic entity classification of the currently scanned pixel element specifically comprises:
S3-1, defining the pixel element size p as a square region whose side is an integer multiple M ∈ [1,4] of the resolution res ∈ [0.3 m, 1 m], i.e., p = M·res × M·res, and establishing a geographic entity type database comprising a geographic entity classification field and a geographic entity basic parameter field, the classification field containing the geographic entity codes exclusive to the respective entity classifications;
S3-2, scanning the remote sensing image in the remote sensing hyperspectrum with the pixel element p in the specified scanning mode, and calling, during scanning, the corresponding hyperspectrum R_t^p within each pixel element, where t is the serial number of the currently scanned pixel element;
S3-3, comparing the similarity of the spectral lines of R_t^p at the bands corresponding to each class of geographic entity R_i with the spectral lines of the respective R_i; when the highest similarity exceeds the preset value of 80-90%, identifying the currently scanned pixel element p as the corresponding geographic entity type;
when the highest similarity is smaller than the preset value, analyzing, within the eight neighborhoods of the current pixel element p, the pixel elements whose images are complete, as follows:
if geographic entity types have already been identified in the eight neighborhoods, comparing the similarity of the spectral lines of R_t^p at the bands corresponding to the near-ground combined hyperspectra of the identified geographic entity classifications with the combined hyperspectral lines of those classifications; if there are neighborhoods whose geographic entity type has not been determined, first continuing to identify the geographic entity types of those undetermined neighborhoods;
calculating, once all the neighborhood types are determined, the similarity between the spectral lines of R_t^p at the corresponding bands and the near-ground combined hyperspectral lines of the geographic entities represented by the neighborhoods, taking the geographic entity type represented by the neighborhood of maximum similarity as the type identified for the current pixel element p, and treating the neighborhood pixel elements whose types were identified in this step as already scanned, so that they are not scanned and identified again in the subsequent scanning;
if at least one of the eight neighborhoods yielded, while its own type was being identified, a similarity smaller than the preset value, excluding it from the current similarity comparison and taking only the neighborhoods whose similarity exceeds the preset value as comparison objects;
if none of the eight neighborhoods qualifies as a comparison object, defining the geographic entity type of the current pixel element p as the geographic entity type identified for the previous or the next pixel element;
the similarity comparison method comprises the following steps:
S3-3-1, performing the step of S2-2 on R_t^p to obtain the corresponding combined hyperspectrum R_{t,i}^p, where i indicates that R_t^p is combined and formed according to the formation of the R_i corresponding to the i-th class of geographic entity;
S3-3-2, comparing the integrated intensity value of each band in the optimal combined bands of R_{t,i}^p with the integrated intensity value of the corresponding band of R_i, and defining the reciprocal of the standard deviation between the two sets of integrated intensity values, 1/S, as the similarity;
the mapping of the identified type to the geographic entity classification field to form a semantic system specifically comprises:
S3-4, assigning the corresponding geographic entity code to the classification field identified for each scanned pixel element,
and S3-5, assigning the spatio-temporal coordinate of the center of each pixel element to the corresponding geographic entity basic parameter field; the geographic entity basic parameter fields carrying the spatio-temporal coordinates of every scanned pixel element, together with the classification fields carrying the geographic entity codes, form the semantic system.
7. The method of claim 6, further comprising, after step S3-5:
P1-2: according to the geographic entity semantic system completed in S3-5, establishing a mapping relation between the ground projection coordinate corresponding to each pixel element and the geographic entity classification in the semantic system;
P1-3: forming, through automatic modeling, the oblique photography data into a three-dimensional model of an oblique three-dimensional monomer set; because the ground projection coordinates of all pixels in the three-dimensional model under the same E are the ground projection coordinates of the pixel elements in P1-2, the same mapping relation as in P1-2 exists between the ground projection coordinates of all pixels in the three-dimensional model and the geographic entity classifications in the semantic system;
P1-4: selecting any point in the three-dimensional model of P1-3, acquiring the geographic entity classification in the semantic system corresponding to that point, and displaying the classification information at the spatial position of the model near that point, wherein the displayed classification information can be dismissed at any time.
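An illustrative, non-claimed sketch of the lookup performed in P1-2 and P1-4: ground projection coordinates of pixel elements are registered against their classifications, and a point picked in the 3D model is answered through that mapping. Snapping the queried point to the nearest registered pixel-element centre is an assumption; the patent only requires that the same mapping relation as in P1-2 is used.

```python
class SemanticMap:
    """P1-2/P1-4 sketch: map ground-projection coordinates of pixel elements
    to geographic-entity classifications, then answer point queries from the
    oblique 3D model.  Nearest-neighbour snapping is an assumption."""

    def __init__(self):
        self._table = {}  # (x, y) ground projection -> classification

    def add(self, ground_xy, classification):
        # P1-2: register one pixel element's mapping relation.
        self._table[ground_xy] = classification

    def classify(self, query_xy):
        # P1-4: snap the picked point to the closest registered centre.
        best = min(
            self._table,
            key=lambda xy: (xy[0] - query_xy[0]) ** 2 + (xy[1] - query_xy[1]) ** 2,
        )
        return self._table[best]
```

Displaying and dismissing the returned classification at the model position is left to the rendering layer.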
8. The method according to claim 6 or 7, wherein the flying heights of all the drones in S1-1 are identical, and, for all kinds of geographic entities, all the drones start flying along the predetermined routes at the same time and stop all acquisition at the same time point;
the predetermined scanning mode in P1-1 is a row-scanning and/or column-scanning mode;
the predetermined scanning mode in S3-2 is a row-scanning and/or column-scanning mode over the image.
9. The method of claim 8, wherein, when scanning the same row and/or the same column in P1-1, upon finishing the scan of the current pixel element, the scan position is shifted by the side length of one pixel element to continue scanning the next pixel element; in S3-2, when scanning the same row and/or the same column, upon finishing the scan of the current pixel element, the scan position is likewise shifted by the side length of one pixel element to continue scanning the next pixel element.
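A small sketch of the row-scan stepping in claims 8-9: consecutive pixel elements are offset by exactly one pixel-element side length, so they tile the row without gap or overlap. Function and parameter names are illustrative.

```python
def scan_positions(origin_x, origin_y, n_elements, side_length):
    """Row-scan sketch for claims 8-9: starting from the row origin, each
    successive pixel element is shifted by one pixel-element side length."""
    return [(origin_x + k * side_length, origin_y) for k in range(n_elements)]
```

A column scan is the same stepping applied to the y coordinate instead.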
10. The method of claim 8, wherein the specific steps of P1-3 are replaced with: displaying, in the remote sensing image dimension of the remote sensing hyperspectral image, the centre point of each pixel element scanned by the unmanned aerial vehicle in P1-1; clicking any centre point displays the oblique image shot at that centre point, which replaces the automatic modeling; at this time, the semantic system is still used: clicking any point in any oblique image maps, through the ground projection coordinate of that point, to the corresponding geographic entity classification information and displays it, and the displayed classification information can be dismissed at any time.
11. The method of claim 10, wherein all vehicles on the roads in the oblique photography are deleted before the replaced P1-3 steps are performed.
12. The method of claim 11, wherein the deleting is performed by filling all pixels at the coordinates where the road is located with a preset color pixel value.
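An illustrative sketch of the vehicle-deletion step in claims 11-12: every pixel lying on the road is overwritten with one preset color value. How the road mask is obtained, and the particular fill color, are assumptions; the claim only requires that all road pixels receive a preset pixel value.

```python
import numpy as np

def delete_vehicles(image, road_mask, fill_rgb=(128, 128, 128)):
    """Claims 11-12 sketch: remove vehicles from oblique photography by
    filling all pixels at road coordinates with a preset color pixel value.
    `road_mask` is a boolean array marking road pixels (an assumption about
    how the road coordinates are represented)."""
    out = image.copy()
    out[road_mask] = fill_rgb  # overwrite every road pixel uniformly
    return out
```

Because vehicles sit on the road, overwriting the road region removes them along with it.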
CN202210860449.XA 2022-05-18 2022-07-21 Geographic entity semantic modeling method based on hyperspectral data and oblique three-dimensional data Active CN115346009B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210543031 2022-05-18
CN2022105430316 2022-05-18

Publications (2)

Publication Number Publication Date
CN115346009A true CN115346009A (en) 2022-11-15
CN115346009B CN115346009B (en) 2023-04-28

Family

ID=83949240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210860449.XA Active CN115346009B (en) 2022-05-18 2022-07-21 Geographic entity semantic modeling method based on hyperspectral data and oblique three-dimensional data

Country Status (1)

Country Link
CN (1) CN115346009B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160093056A1 (en) * 2014-09-29 2016-03-31 Digitalglobe, Inc. Multi-spectral image labeling with radiometric attribute vectors of image space representation components
CN106096663A (en) * 2016-06-24 2016-11-09 长春工程学院 A kind of grader construction method of the target in hyperspectral remotely sensed image being grouped based on sparse isomery
WO2018081929A1 (en) * 2016-11-01 2018-05-11 深圳大学 Hyperspectral remote sensing image feature extraction and classification method and system thereof
CN110852300A (en) * 2019-11-19 2020-02-28 中煤航测遥感集团有限公司 Ground feature classification method, map drawing device and electronic equipment
US20210072219A1 (en) * 2018-03-19 2021-03-11 Daiki NAKAYA Biological tissue analyzing device, biological tissue analyzing program, and biological tissue analyzing method
CN114089787A (en) * 2021-09-29 2022-02-25 航天时代飞鸿技术有限公司 Ground three-dimensional semantic map based on multi-machine cooperative flight and construction method thereof
CN114219958A (en) * 2021-11-18 2022-03-22 中铁第四勘察设计院集团有限公司 Method, device, equipment and storage medium for classifying multi-view remote sensing images
CN114494899A (en) * 2021-12-24 2022-05-13 上海普适导航科技股份有限公司 Rapid classification method for characteristic ground objects of remote sensing image


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Li Yu; Li Yiran; Wang Guanghui; Shi Xue: "Hyperspectral image classification method based on a weighted exponential function model" *
Qin Jinchun: "Research on vegetation classification and extraction technology for hyperspectral remote sensing images" *

Also Published As

Publication number Publication date
CN115346009B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
Jin et al. Stem–leaf segmentation and phenotypic trait extraction of individual maize using terrestrial LiDAR data
Sun et al. Individual tree crown segmentation and crown width extraction from a heightmap derived from aerial laser scanning data using a deep learning framework
Hall et al. Characterising and mapping vineyard canopy using high-spatial-resolution aerial multispectral images
EP3022686B1 (en) Automatic generation of multi-scale descriptors from overhead imagery through manipulation of alpha-tree data structures
CN113128405A (en) Plant identification and model construction method combining semantic segmentation and point cloud processing
Jurado et al. Remote sensing image fusion on 3D scenarios: A review of applications for agriculture and forestry
CN113112504A (en) Plant point cloud data segmentation method and system
US11170215B1 (en) System and method for discriminating and demarcating targets of interest in a physical scene
CN115641412B (en) Three-dimensional semantic map generation method based on hyperspectral data
Li et al. A comparison of deep learning methods for airborne lidar point clouds classification
Risse et al. Software to convert terrestrial LiDAR scans of natural environments into photorealistic meshes
CN111918547B (en) Crown recognition device, crown recognition method, program, and recording medium
d’Andrimont et al. Monitoring crop phenology with street-level imagery using computer vision
Murray et al. The novel use of proximal photogrammetry and terrestrial LiDAR to quantify the structural complexity of orchard trees
Zhou et al. Individual tree crown segmentation based on aerial image using superpixel and topological features
CN113033386B (en) High-resolution remote sensing image-based transmission line channel hidden danger identification method and system
Saeed et al. Cotton plant part 3D segmentation and architectural trait extraction using point voxel convolutional neural networks
CN116758223A (en) Three-dimensional multispectral point cloud reconstruction method, system and equipment based on double-angle multispectral image
CN116994029A (en) Fusion classification method and system for multi-source data
CN115346009A (en) Geographic entity semantic modeling method based on hyperspectral data and inclined three-dimensional data
Sertel et al. Vineyard parcel identification from Worldview-2 images using object-based classification model
Fan et al. An improved Deeplab based model for extracting cultivated land information from high definition remote sensing images
Tang et al. Accuracy test of point-based and object-based urban building feature classification and extraction applying airborne LiDAR data
Nazeri Evaluation of multi-platform LiDAR-based leaf area index estimates over row crops
Li et al. Construction of a small sample dataset and identification of Pitaya trees (Selenicereus) based on UAV image on close-range acquisition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant