CN103090868A - Method for preparing typical ground object reference picture by using multispectral image classification


Info

Publication number
CN103090868A (application CN201210594861.8; granted as CN103090868B)
Authority
CN (China)
Legal status
Granted
Other languages
Chinese (zh)
Other versions
CN103090868B (en)
Inventor
张天序
边小勇
颜露新
李姣
高慧杰
彭凡
张力
周刚
朱虎
Current Assignee
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Application filed by Huazhong University of Science and Technology
Priority to CN201210594861.8A (granted as patent CN103090868B)
Publication of CN103090868A
Application granted; publication of CN103090868B
Legal status: Active

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a method for preparing a reference picture of typical ground objects by means of multispectral image classification. A class of interest is extracted from a multispectral image and used as a landmark, and a reference picture containing the landmark is prepared for indirectly locating and identifying a target. The method comprises the following steps: selecting the class of interest from the multispectral image, extracting its spectral-spatial texture features, and classifying the multispectral image according to the extracted features; on the basis of the classification, extracting the complete class of interest and, according to a landmark selection rule, selecting from it the class used as the landmark; calculating the infrared radiation of the landmark under given atmospheric parameters with an infrared radiation formula, according to the landmark material type; and mapping the infrared radiation to gray values to prepare a reference picture comprising the landmark shape, the landmark gray level and a background gray level. The method is comparatively accurate and automated, and can improve the performance of matching and identification algorithms based on the reference picture.

Description

Preparation method of typical ground object reference image by utilizing multispectral image classification
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a reference image preparation method using multispectral image classification, which prepares reference images of typical ground objects and is suitable for target identification in aircraft navigation and guidance.
Background
With the development of infrared imaging guidance technology, research on target and background radiation characteristics and their infrared imaging characteristics has received increasing attention from researchers at home and abroad, and is of great significance to aircraft navigation and guidance research. Under different time phases, climates, illumination and long-distance imaging conditions, the optical images acquired by an imaging sensor vary widely; in particular, when the target features are not salient enough or the target is occluded, direct navigation positioning and identification is often unreliable. However, in a large-field high-resolution image at a certain distance, features with salient characteristics often appear in the region surrounding the target. Such a feature, called a typical ground object, such as a river, bridge, road or building group, has a certain size, geometric shape, visibility and radiation characteristic; using it as a ground mark (landmark for short) to indirectly position the target helps solve the problem of target identification in complex environments. It is therefore important to prepare a ground-object reference map containing landmarks.
Guo Wenpu et al. proposed a visible-light forward-looking reference image preparation method for scene matching in the paper "Forward-looking reference image generation method based on 3D visualization GIS". Its core steps are to use satellite stereo image pairs along the planned track, generate digital elevation model data through three-dimensional reconstruction, superimpose orthoimages, and finally generate a forward-looking reference image sequence for matching against the forward-looking real-time image. However, the target reference image prepared this way cannot meet navigation and guidance requirements when the target features are not salient, and it does not consider constraints on the target gray levels under actual imaging conditions. Chinese patent 200910273308.2 discloses a scheme that simulates the flight scene on a preset airway with a digital surface model, determines planar landmarks used for navigation along the airway, produces multi-viewpoint multi-scale optical feature maps of the landmarks, calculates the relevant parameters of each landmark, and prepares a reference map with parameters. The reference image prepared by this method does not consider the gray assignment of landmarks under actual imaging, relies mainly on manual participation, and lacks automation. In addition, Liu Jing et al. proposed a multi-feature forward-looking reference map generation method in the paper "Research on forward-looking infrared scene matching guidance reference map generation" (Microcomputer Information, Vol. 23, No. 30, 2007), which takes visible-light images as the data source, extracts geometric views of the target and background, calculates temperature and radiation features separately, synthesizes them with atmospheric effects, and finally obtains forward-looking infrared scene-matching multi-viewpoint multi-scale reference maps for different viewpoints.
This method obtains the background and target geometric views separately on the basis of image segmentation and multi-image extraction, and models the target with Vega infrared simulation to generate the reference image; it requires many input parameters, has a large computation load, its accuracy is closely tied to methods such as image segmentation and target modeling, and its automation and real-time performance are insufficient.
The above methods basically prepare the ground-object reference image containing the landmark/target offline with manual participation, which is subjective and inefficient; they assign an empirical gray value to the landmark/target without considering its infrared radiation characteristics under actual imaging conditions, so the prepared reference image does not predict the infrared characteristics of the landmark/target, which directly affects the target matching and identification algorithm. For reference images containing the target, the problem of navigation and positioning when the target appears weak in long-distance imaging is not considered, so the prepared reference image cannot meet the requirement of accurate navigation when the target is far away.
Disclosure of Invention
The invention aims to provide a reference image preparation method using multispectral image classification. Multispectral images are rapidly classified using multiple kinds of feature information, relatively complete classes of interest are extracted, and some of them are selected as landmarks whose infrared characteristics are predicted. By preparing a ground-object reference image containing the landmarks, the problem of indirect target positioning and identification in a complex environment is addressed for the case where target features are not salient during aircraft navigation and guidance, providing support for forward-looking target positioning and identification of the aircraft.
The invention discloses a method for preparing a reference image by utilizing multispectral image classification, which comprises the following steps of:
(1) selecting a class of interest from the multispectral image, extracting its spectral-spatial texture features, and classifying the multispectral image according to the extracted features;
(2) on the basis of the classification, optimizing the classes of interest using ground-object shape, mathematical morphology and ground-object spatial relationships, extracting more complete classes of interest, and selecting among them the classes serving as landmarks according to a landmark selection criterion;
(3) calculating the radiation amount of a landmark from its material characteristics and radiation characteristic parameters with an infrared radiation formula;
(4) performing gray mapping on the radiation amount values and preparing a reference map accordingly.
As an improvement of the invention, the spectral-spatial texture features are formed by separately extracting and normalizing the spectral features and the spatial texture features of the class of interest and then concatenating them.
In the extraction of the spatial texture features of the class of interest, the rotation-invariant uniform local binary pattern value of the center pixel on each image data cube is first calculated on the multispectral image, and the pattern occurrence frequencies obtained from the mapping between pattern values and occurrence frequencies are used as the spatial texture features.
As an improvement of the invention, the infrared radiation formula is as follows:
L_d = (L_direct + L_thermal) · τ_path + L_b
where L_d is the radiation at the detector surface during imaging, L_direct is the solar radiation reflected by the object, L_thermal is the object's self-emission, L_b is the path radiation of atmospheric transmission, and τ_path is the average atmospheric transmittance.
As a refinement of the present invention, mapping the infrared radiation amount to gray values specifically means mapping the maximum radiation amount and 0 to gray values 255 and 0 respectively, mapping intermediate values linearly, and normalizing to [0, 255].
As an improvement of the present invention, the prepared reference picture is a bottom-view reference picture, specifically: the gray-mapped landmarks are extracted and superimposed on a background image of the original size to prepare the bottom-view reference image.
As an improvement of the invention, the bottom-view reference image is subjected to perspective transformation to obtain a forward-looking reference image.
As a refinement of the present invention, after the landmark is determined, the spatial position constraint relationship between the object and the landmark in the multispectral image can be calculated for object matching.
As an improvement of the present invention, before selecting and classifying an interested class from the multispectral image, the multispectral image to be processed may be preprocessed to obtain an input image suitable for feature extraction.
As an improvement of the present invention, as an optional operation, the multispectral image is also preprocessed;
as an improvement of the invention, the invention also determines the interested class used as the plane landmark in the field of view along the flight path, and calculates the space position constraint relation between the object and the landmark in the multispectral image.
According to the invention, a reference map containing landmark infrared characteristic prediction is prepared through rapid classification and infrared radiation calculation, providing important navigation reference data for a navigation system based on matching identification and improving matching and identification performance based on the reference map. The method is accurate, efficient and automated, can solve the problem that the target cannot be directly identified under long-distance imaging, and provides support for forward-looking target identification based on the matching reference image.
Drawings
FIG. 1 is a general flow diagram of a method according to an embodiment of the invention;
FIG. 2 is a multi-spectral image including rivers, roads, building complexes according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating the spectrum-space combination features rich in discrimination for extracting the interested class according to the embodiment of the present invention;
FIG. 4 is a diagram illustrating the result of classifying rivers, roads and building groups according to an embodiment of the present invention;
FIG. 5 is a diagram showing the result of extracting rivers, roads and building groups selected as landmarks on the basis of FIG. 4;
FIG. 6 is a diagram illustrating a relative relationship between a landmark and an object according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of solar angle calculation using infrared radiation according to an embodiment of the present invention;
FIG. 8 is a graph showing the temperature change and the self-radiation amount of a typical material at different observation times according to an embodiment of the present invention;
FIG. 9 is a bottom view reference after IR radiation calculation and gray scale mapping according to an embodiment of the present invention;
FIG. 10 is a spatial constraint relationship diagram of a set of landmark special points and a target centroid according to an embodiment of the present invention;
fig. 11 is a reference diagram of the multi-scale multi-view front view after perspective transformation of fig. 9.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention more clear, the present invention will be further described in detail with reference to the accompanying drawings and specific embodiments. The following examples are illustrative only and are not to be construed as limiting the present invention, and all modifications which do not depart from the spirit of the present invention are intended to be included within the scope of the present invention.
Under certain imaging distance, flight height and proper flight angle, the visibility of the ground features such as rivers, roads, building groups and the like can be ensured, so that the invention is considered under the condition that the ground features are visible. The invention is described in further detail below using the multispectral image shown in fig. 2 as an example.
As shown in fig. 1, a specific flow of reference map preparation in this embodiment is as follows:
(1) Selecting a class of interest in the multispectral image, extracting its spectral-spatial texture features, and classifying with these features.
(1.1) Spectral-spatial texture feature extraction. As shown in FIGs. 2(a)-(c), in the high-resolution multispectral image, features with a certain significance, visibility and contrast in the local region of interest on the airway are selected as classes of interest by analyzing the features around the target in the scene. The spectral-spatial texture features of the selected classes of interest, or of a part of them, are extracted as training samples. Specifically:
First, on the input image, the rotation-invariant uniform local binary pattern (LBP^riu2) value of the center pixel is computed on each image data cube in a sliding-window fashion, and the rotation-invariant uniform histogram feature of the pixel (i.e., the spatial texture feature) is calculated from the mapping between pattern values and pattern occurrence frequencies; this process is repeated to compute the feature for all predefined training samples.
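The LBP^riu2 computation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patent's implementation: the circular neighborhood (P = 8, R = 1) is approximated by the 8-connected neighbors without sub-pixel interpolation, and the 10-bin occurrence-frequency histogram matches the "10 compact spatial histogram features" mentioned later.

```python
import numpy as np

def lbp_riu2(img):
    """Rotation-invariant uniform LBP codes (P=8, R=1) for one 2-D band.

    Approximation: the circle is sampled at the 8-connected neighbors."""
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    center = img[1:-1, 1:-1]
    # neighbors listed in circular order around the center pixel
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    bits = np.stack([(img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx] >= center)
                     .astype(int) for dy, dx in offs])
    # U = number of 0/1 transitions when walking once around the circle
    U = np.abs(bits - np.roll(bits, 1, axis=0)).sum(axis=0)
    s = bits.sum(axis=0)  # number of '1' bits
    # uniform patterns (U <= 2) keep their bit count 0..8; the rest map to P+1 = 9
    return np.where(U <= 2, s, 9)

def lbp_hist(codes):
    """Normalized 10-bin occurrence-frequency histogram (the texture feature)."""
    h = np.bincount(codes.ravel(), minlength=10).astype(float)
    return h / h.sum()
```

On a flat patch every neighbor equals the center, so all codes are 8 and the histogram concentrates in bin 8.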
Then the spectral features of the class of interest are calculated; the spectral and spatial texture features are each normalized and concatenated into a comprehensive feature vector. The training sample set is denoted TR = {x_i, i = 1, ..., l}, TR ⊂ R^d. The calculation of spectral features is well known in the art and can be performed in a variety of ways in this embodiment.
Repeating the calculation on the test image yields the test sample set, denoted TS = {x_j, j = 1, ..., u}, TS ⊂ R^d, where l, d and u are the number of training samples, the feature dimension and the number of test samples, respectively. The high-dimensional LBP^riu2 features are usually reduced in dimension to make the classes more separable; the new feature dimension is denoted d'.
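The normalize-then-concatenate construction above can be sketched as follows; per-block unit L2 normalization is an assumption here (the patent only states that each part is normalized before splicing):

```python
import numpy as np

def spectral_spatial_feature(spectral, texture):
    """Concatenate separately normalized spectral and spatial-texture features."""
    spectral = np.asarray(spectral, dtype=float)
    texture = np.asarray(texture, dtype=float)

    def l2norm(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v

    return np.concatenate([l2norm(spectral), l2norm(texture)])
```

With 3 spectral features and a 10-bin texture histogram, the comprehensive vector has dimension d = 13.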
Fig. 3 shows a schematic diagram of extracted spectral-spatial features in some typical information classes, which includes 3 spectral features and 10 compact spatial histogram features, and it can be seen that the classifier introducing the spatial features will improve the classification performance.
And (1.2) classifying.
In this embodiment, classification is performed with a kernel-based Support Vector Data Description (SVDD); other methods, such as a one-class support vector machine (OCSVM), may also be used.
First, the training samples of the class of interest obtained in step (1.1) are randomly divided in a 5:1:4 ratio into a training subset, a cross-validation subset and a test subset, and a kernel-based SVDD model is trained to rapidly classify the test image. Second, since one-class classification only needs samples of the class of interest, cross-validation is performed on the training and cross-validation subsets to obtain the optimal kernel parameters (C, γ), which are used to train the final kernel SVDD model. Finally, the region of interest of the image is rapidly classified with the kernel SVDD model. The kernel SVDD model can accurately identify the class of interest and reject outlier classes, and accuracy analysis can be performed against ground-truth class data.
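The OCSVM alternative named above is available off the shelf; with an RBF kernel it is closely related to kernel SVDD. A hedged sketch with scikit-learn and hypothetical 13-dimensional feature vectors (not the patent's data or parameter values):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# hypothetical spectral-spatial feature vectors of the class of interest
train = rng.normal(loc=0.0, scale=0.1, size=(200, 13))

# RBF one-class SVM; nu and gamma play the role of the (C, gamma) search
clf = OneClassSVM(kernel="rbf", nu=0.1, gamma=1.0).fit(train)

inlier = clf.predict([[0.0] * 13])    # near the class center -> +1
outlier = clf.predict([[5.0] * 13])   # far from all training samples -> -1
```

In practice the (nu, gamma) pair would be chosen by the cross-validation step described above.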
In order to verify the effectiveness of the spatial structure feature extraction method, the embodiment simultaneously extracts 5 texture features (contrast, entropy, angular second moment, inverse difference moment and correlation) of data samples (training samples and test samples) based on a gray level co-occurrence matrix (GLCM) for comparison. Fig. 4 (a) - (c) are the classification results of the feature extraction method in this embodiment on three experimental images, respectively. As can be seen from fig. 4, the feature extraction method in this embodiment is effective in spatial classification, especially when the interested classes are randomly distributed.
Before feature extraction and classification, as an optional step, the multispectral image to be processed may be preprocessed to obtain an input image suitable for feature extraction. For example, data bit width conversion (16-bit to 8-bit), data normalization, and dimension reduction are performed.
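The 16-bit to 8-bit conversion mentioned above is commonly done with a linear stretch. A sketch under assumptions: the 2%/98% clip percentiles are illustrative values, not from the patent.

```python
import numpy as np

def to_8bit(img16, p_low=2, p_high=98):
    """Percentile-based linear stretch from 16-bit to 8-bit."""
    img16 = np.asarray(img16, dtype=float)
    lo, hi = np.percentile(img16, [p_low, p_high])
    if hi <= lo:
        return np.zeros(img16.shape, dtype=np.uint8)
    out = (img16 - lo) / (hi - lo)          # clip tails, stretch the rest
    return (np.clip(out, 0, 1) * 255).round().astype(np.uint8)
```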
(2) And (5) extracting the ground objects. On the basis of classification, the interested class is optimized based on knowledge such as ground feature shape information, mathematical morphology processing and spatial relationship, and a relatively complete interested class is extracted.
Through the analysis of interesting classes such as rivers, roads and building groups in the scene, the interesting classes such as rivers, roads and building groups can be known to have different shape information, for example, the rivers are in a curve shape, the roads are in a straight shape, the building groups are approximate to rectangles, and the interesting classes such as the rivers, the roads and the building groups have different length-width ratios and eccentricity; in the space geometrical relationship, two kinds of ground objects can appear simultaneously, for example, a building group is often adjacent to a river, and roads are often found around an airport. Therefore, the initial interested class (target class) is extracted by comprehensively utilizing the shape information of the ground objects and the spatial relation knowledge, and the mathematical morphology method is used for enhancing to obtain the more complete final interested class. Fig. 5 (a) - (c) are the results of river-like, road, building clusters of interest extracted from a multispectral image over a field of view of 1156 meters × 725 meters.
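The morphological enhancement and shape-based screening above can be sketched with SciPy. This is a simplified stand-in: closing fills small gaps, and components below an assumed area threshold are discarded; the patent's fuller rules (aspect ratio, eccentricity, spatial adjacency) would be applied per component in the same loop.

```python
import numpy as np
from scipy import ndimage

def refine_class_mask(mask, min_area=20, closing_iter=1):
    """Morphologically enhance a raw classification mask and keep
    connected components large enough to be plausible ground objects."""
    mask = ndimage.binary_closing(mask.astype(bool), iterations=closing_iter)
    labels, n = ndimage.label(mask)
    out = np.zeros_like(mask)
    for i in range(1, n + 1):
        comp = labels == i
        if comp.sum() >= min_area:   # area rule; shape rules would go here too
            out |= comp
    return out
```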
(3) Landmark selection. One or more objects of interest are selected as landmarks according to criteria such as the geometric size, shape, visibility and radiation characteristics of the class of interest.
The geometric size, shape, visibility and reflection/radiation characteristics of the extracted classes of interest are analyzed, the extracted objects of interest meeting the user-set values are screened, and one or more known objects of interest with relatively uniform material and salient optical imaging are selected as landmarks. This example uses the river, road and building groups extracted above as landmarks. FIGs. 6(a)-(c) show the relative relationship between the river, road and building-group landmarks and the target: the straight-line distances from the river and road landmark centroids to the target centroids are 236 meters and 154 meters respectively, the distances from the 3 building-group landmark centroids to the target centroids are 225 meters, 220 meters and 190 meters respectively, and the flight path directions are also shown.
For subsequent matching and identification with the reference map, the spatial position relationship between landmark and target is calculated; it comprises the landmark special points, the target centroid coordinates, and the landmark-target relative position, obtained by subtracting the landmark position from the target position. The landmark-target spatial relationship information is written into a parameter file bound to the reference map so that it can be called directly during matching. In this embodiment the parameter file is a text file containing:
Reference picture resolution: R meters
Flight height: H meters
Flight entrance angle: α degrees
Landmark visible pitch angle: (β1, β2), unit: degrees
Range of distances at which the landmark appears: (D_i1, D_i2), unit: meters
Landmark-to-target position deviation: (X_LT, Y_LT)
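A minimal sketch of writing and reading such a text parameter file; the field names follow the list above and the values are the example ones given later in this embodiment (the exact file layout is an assumption):

```python
# hypothetical parameter file contents bound to one reference map
PARAMS = {
    "Reference picture resolution (meters)": "3",
    "Flight height (meters)": "4000",
    "Flight entrance angle (degrees)": "180",
    "Landmark visible pitch angle (degrees)": "0-40",
    "Range of landmark occurrences (meters)": "0-10000",
    "Landmark-to-target position deviation (meters)": "59-228",
}

def dump_params(params):
    """Serialize as 'key: value' lines, one field per line."""
    return "\n".join(f"{k}: {v}" for k, v in params.items())

def load_params(text):
    """Parse the text file back into a dict for use during matching."""
    return dict(line.split(": ", 1) for line in text.splitlines() if line)
```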
(4) Calculating the infrared radiation of the landmark. The infrared radiation of the landmark under the given atmospheric parameters is calculated with an infrared radiation formula, according to the landmark material.
The radiation characteristics of typical materials such as water, soil and asphalt are mainly determined by the object's reflection of solar radiation, the object's self-emission, and the radiation of the atmospheric transmission path. The radiation intensity at an object's surface is closely related to its surface temperature, which varies with solar angle, season and altitude. In the daytime, surface radiation comes mainly from the sun and the atmosphere; at night, solar radiation weakens, the object's self-emission becomes more prominent, and the surface temperature drops. FIG. 8 compares the temperature change and self-emission amounts of typical materials at different observation times.
In order to obtain the object radiation amount under the actual imaging condition, the ground object radiation amount is calculated by using an infrared radiation formula according to an infrared radiation mechanism, a heat conduction mechanism and a radiation mechanism.
The infrared radiation calculation formula determined in this embodiment is as follows:
L_d = (L_direct + L_thermal) · τ_path + L_b
where L_d is the radiation at the detector surface during imaging; L_direct is the solar radiation reflected by the object, equal to ρ·E0·(CCF·τ_s + τ_d), where CCF is the cloud cover coefficient (1 in clear weather, 0 otherwise), E0 is the solar constant, and τ_s and τ_d are the atmospheric transmittances corresponding to direct solar radiation and scattered radiation, respectively, calculated as follows:
τ_s = (1 + 0.034·cos(2πn/365)) · P^m · cos θ
τ_d = (1/2) · sin h · (1 − P^m) / (1 − 1.4·ln P) · cos²(β/2)
where n is the day number of the year.
Further,
cos θ = cos β·sin h + sin β·cos h·cos γ,  cos γ = (sin h·sin φ − sin δ) / (cos h·cos φ),  δ = 23.45·sin(360·(284 + n)/365)
where h, θ, γ, δ, β and φ are the solar altitude angle, solar incidence angle, solar azimuth angle, solar declination angle, object inclination angle and local latitude, respectively; the relationship between these angles is shown in FIG. 7. m and P are the atmospheric mass and atmospheric transparency, which are related: the atmospheric mass can be calculated from the solar altitude angle h (a common approximation is m = 1/sin h). In this embodiment m is taken as 2, and the atmospheric transparency P corresponding to this value of m can be obtained from solar radiation observation data.
L_thermal is the object's self-emission, equal to ε·σ·T^4, where ε, σ and T are the emissivity of the object, the Stefan-Boltzmann constant and the absolute surface temperature of the object (in K), respectively.
L_b is the path radiation of atmospheric transmission; its computation is complex, but its influence is small.
τ_path is the average atmospheric transmittance; it can be obtained by looking up a predefined data table, interpolation and iterative integration.
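The radiation bookkeeping above can be sketched directly from the formulas. This is a hedged illustration: the solar-constant value E0 = 1367 W/m², the unit conventions, and the choice to expose each term as a separate function are assumptions, and the table lookup for τ_path is not reproduced.

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
E0 = 1367.0        # solar constant, W/m^2 (assumed value)

def declination_deg(n):
    """delta = 23.45 * sin(360 * (284 + n) / 365), n = day of the year."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + n) / 365.0))

def tau_s(n, P, m, cos_theta):
    """Direct-beam transmittance: (1 + 0.034 cos(2 pi n / 365)) P^m cos(theta)."""
    return (1 + 0.034 * math.cos(2 * math.pi * n / 365.0)) * P ** m * cos_theta

def tau_d(h_deg, P, m, beta_deg):
    """Scattered-radiation term: 0.5 sin(h) (1 - P^m)/(1 - 1.4 ln P) cos^2(beta/2)."""
    h, b = math.radians(h_deg), math.radians(beta_deg)
    return 0.5 * math.sin(h) * (1 - P ** m) / (1 - 1.4 * math.log(P)) \
        * math.cos(b / 2) ** 2

def self_emission(eps, T):
    """L_thermal = eps * sigma * T^4."""
    return eps * SIGMA * T ** 4

def detector_radiance(L_direct, L_thermal, tau_path, L_b):
    """L_d = (L_direct + L_thermal) * tau_path + L_b."""
    return (L_direct + L_thermal) * tau_path + L_b
```

For example, a blackbody (ε = 1) at 300 K self-emits about 459 W/m².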
Since the material of an object selected as a landmark is relatively uniform and the landmark is small relative to the background image, the object is treated as an isothermal body when calculating its self-emission. Alternatively, the object can be modeled and divided into a number of grid nodes, the temperature of each node predicted, and the self-emission then calculated.
The radiation amount calculated by the above formula accounts for all radiation attenuated by the atmosphere together with the detector's spectral response. Because it is calculated from the material and radiation characteristics of the ground objects, it reflects the actual imaging effect.
(5) Gray mapping: the infrared radiation amounts calculated in step (4) are mapped to gray values.
To obtain the final infrared reference image, gray mapping is performed on each infrared radiation amount: the maximum and minimum radiation amounts are mapped to gray values 255 and 0 respectively, intermediate values are mapped linearly, and the results are normalized to [0, 255]. Conversely, given the image gray values and the radiation amounts, the mapping between the two can be solved.
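The linear mapping just described is a one-liner in NumPy; this sketch follows the min/max convention stated in this step:

```python
import numpy as np

def radiance_to_gray(L):
    """Map radiation amounts linearly: min -> 0, max -> 255."""
    L = np.asarray(L, dtype=float)
    lo, hi = L.min(), L.max()
    if hi == lo:
        return np.zeros(L.shape, dtype=np.uint8)
    return np.round(255.0 * (L - lo) / (hi - lo)).astype(np.uint8)
```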
(6) Calculating the spatial relationship between landmark and target, and establishing a feature library of spatial constraint relationships between the landmarks and the target to be identified in the multispectral image.
In this embodiment, as shown in FIGs. 9(a)-(c), the centroids and maximum-curvature points of the river, road and building groups are selected to construct the landmark special-point set. The 3 river landmark points from left to right in FIG. 9(a) are the maximum-curvature points LM_1,1, LM_1,2 and the centroid LM_1,3, with coordinates (Y, X) of (365,435), (295,480) and (363,640) respectively; the target centroid T_1 is (422,412). The 3 road landmark points from left to right in FIG. 9(b) are the maximum-curvature point LC_1,1, the centroid LC_1,2 and the maximum-curvature point LC_1,3, with coordinates (312,182), (254,354) and (316,546) respectively; the target centroid T_2 is (103,386). The average position of the 3 river landmark special points is LM = (LM_1,1 + LM_1,2 + LM_1,3)/3, whose value is (341,518). Similarly, the average position of the road landmark points is LC, whose value is (294,361). In FIG. 9(c), the centroids of the 3 building-group landmarks from left to right are LB_1,1, LB_1,2 and LB_1,3, with coordinates (455,442), (125,521) and (508,650) respectively.
In this embodiment, the spatial geometric constraint relationship between each landmark and the target is shown in fig. 10. First, as shown in fig. 10 (a1)-(a3), the constraint relationship between the river landmark points and the target centroid is ΔLM1,i = [(Y_T1 − Y_LM1,i), (X_T1 − X_LM1,i)], i = 1, 2, 3, where (Y_LM1,i, X_LM1,i) is the coordinate of the i-th landmark point and (Y_T1, X_T1) is the target centroid coordinate. Secondly, as shown in fig. 10 (b1)-(b3), the constraint relationship between the road landmark points and the target centroid is ΔLC1,j = [(Y_T2 − Y_LC1,j), (X_T2 − X_LC1,j)], j = 1, 2, 3, where (Y_LC1,j, X_LC1,j) is the coordinate of the j-th landmark point and (Y_T2, X_T2) is the target centroid coordinate. The spatial position of the target in the forward view is obtained by combining the relative landmark-to-target positions established above with the landmark positions measured in the real-time image, so that the target is located indirectly. Finally, as shown in fig. 10 (c1)-(c3), the constraint relationship between the building-group points and the target centroid is ΔLB1,k = [(Y_T3 − Y_LB1,k), (X_T3 − X_LB1,k)], where k = 1, 2, 3 indexes the three building groups and (Y_LB1,k, X_LB1,k) is the centroid coordinate of the k-th landmark. Likewise, the position coordinates of the target in the forward view can be obtained.
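The landmark-to-target constraint relations and the indirect localization they enable can be sketched as follows (coordinates from this embodiment; the real-time landmark position is a hypothetical input):

```python
def offsets(target, landmarks):
    """ΔL_i = (Y_T - Y_Li, X_T - X_Li): target-centroid offset stored for each landmark point."""
    ty, tx = target
    return [(ty - ly, tx - lx) for (ly, lx) in landmarks]

def locate_target(landmark_rt, delta):
    """Indirect localization: add the stored offset to a landmark position
    measured in the real-time image."""
    return (landmark_rt[0] + delta[0], landmark_rt[1] + delta[1])

T1 = (422, 412)                               # river-scene target centroid
river = [(365, 435), (295, 480), (363, 640)]  # LM1,1..LM1,3
d = offsets(T1, river)                        # [(57, -23), (127, -68), (59, -228)]
# If LM1,1 is found at (370, 440) in the real-time image (hypothetical),
# the target is predicted at:
print(locate_target((370, 440), d[0]))        # (427, 417)
```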
In this embodiment, the parameters in the parameter file bound to the reference map corresponding to fig. 9 (a) are as follows:
Reference image resolution (meters): 3
Flying height (meters): 4000
Flight entrance angle (degrees): 180
Landmark visible pitch angle (degrees): 0 to 40
Range of landmark occurrence (meters): 0 to 10000
Landmark-to-target position deviation (meters): 59 to 228
(7) Preparation of the landmark reference map. An infrared reference image containing the landmark shape, the landmark gray level and the background gray level is prepared from the gray values obtained in step (5).
The landmarks are extracted after gray mapping and superposed on a background image of the original size to prepare a landmark downward-looking reference image, which mainly consists of the landmark shapes, the landmark gray levels and the background gray level. The background gray level can be assigned in a manner similar to the calculation above, or the background can be extracted from a real-time image. Using knowledge of the daytime and nighttime radiation characteristics of the ground objects used as landmarks, daytime and nighttime downward-looking reference maps of the landmarks can be prepared. Fig. 9 shows a landmark downward-looking reference map after infrared radiation calculation and gray mapping.
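The gray mapping used here, i.e. linearly mapping a radiation amount of 0 and the maximum radiation amount to gray values 0 and 255, can be sketched as follows (the sample radiation values are illustrative):

```python
def radiance_to_gray(L, L_max):
    """Linearly map a radiation amount in [0, L_max] to a gray value in [0, 255]."""
    if L_max <= 0:
        return 0
    L = min(max(L, 0.0), L_max)      # clamp to the valid range
    return round(255.0 * L / L_max)

# Illustrative radiation amounts; the background is simplified to 0
samples = [0.0, 12.5, 25.0, 50.0]
L_max = 50.0
print([radiance_to_gray(L, L_max) for L in samples])  # [0, 64, 128, 255]
```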
Here the background is treated in a simplified way, as a single object with a radiation amount of 0. Alternatively, the background class may be divided into regions, and the infrared radiation amount of each background region obtained through steps (1)-(2) and (4)-(5) above.
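Such a per-region calculation reduces to evaluating the method's radiation formula Ld = (Ldirect + Lthermal) × τpath + Lb for each region; a minimal sketch with illustrative component values (the region names and numbers are assumptions, not values from the embodiment):

```python
def apparent_radiance(L_direct, L_thermal, tau_path, L_b):
    """Ld = (Ldirect + Lthermal) * τpath + Lb: radiation amount at the detector surface."""
    return (L_direct + L_thermal) * tau_path + L_b

# Illustrative values for a simplified background region and a landmark region
regions = {
    "background": apparent_radiance(0.0, 0.0, 0.8, 1.5),   # emission simplified to 0
    "landmark":   apparent_radiance(3.2, 18.0, 0.8, 1.5),
}
print(regions)
```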
A forward-looking reference image may also be prepared from the downward-looking reference image: the forward-looking reference map can be regarded as the downward-looking reference map transformed under simulated aircraft imaging parameters. The aircraft track parameters include the flying height, imaging distance, azimuth angle and pitch angle, where the pitch angle is determined by the flying height and the imaging distance (sin θ equals the flying height divided by the imaging distance). The forward-looking reference map is obtained from the downward-looking reference map and the track parameters. Characteristic views of the landmarks at different scales and from different viewpoints are obtained by perspective transformation according to the flight path parameters; the forward-looking reference image obtained by infrared radiation calculation and the infrared real-time image then show maximum similarity in certain characteristics, which gives good matching performance.
The perspective transformation of the downward-looking infrared reference image is performed in this embodiment as follows:
Let p be the sensor viewpoint, h the imaging height, α the azimuth angle, θ the pitch angle, φ the sensor longitudinal field angle and ψ (a symbol assumed here) the sensor lateral field angle; the field of view covers ROW × COL pixels. T0(x0, y0) is the aiming point of the sensor optical axis and T1(x1, y1) is an imaged point; in the forward (real-time) view, T0 lies at (COL/2, ROW/2). Let the position of T1 in the forward view be (T1_ROW, T1_COL); T1_ROW and T1_COL are then calculated as follows:
OT0=h/tanθ
OM=OT0+(y1-y0)×cosα+(x1-x0)×sinα
tan(∠OMP)=h/OM
T1_ROW=ROW/2+(∠OMP-θ)*ROW/φ
T1_COL = COL/2 + ∠MPT1 × COL/ψ (ψ being the sensor lateral field angle; reconstructed by symmetry with the row formula), where ∠MPT1 is the lateral angle at the viewpoint corresponding to the offset (x1-x0)×cosα - (y1-y0)×sinα of T1 from the vertical plane through the optical axis
where OT0 is the distance between T0, the intersection of the optical-axis direction with the ground plane, and O, the projection of the imager onto the ground, and M is the intersection of the projection of T1 along the longitudinal direction of the optical axis with the line OT0. The position of a point in the forward view is thus determined by its positional deviation from the optical-axis aiming point in the geodetic coordinate system. The downward-looking reference map is thereby transformed into the forward-looking reference map, providing important reference data for matching identification based on the forward-looking reference map.
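The row computation above can be sketched as follows (angles in radians; the example evaluates the degenerate case where the point coincides with the aiming point T0, which must map to the image-center row):

```python
import math

def forward_view_row(h, theta, phi, alpha, ROW, p0, p1):
    """Row of ground point p1 = (x1, y1) in the forward view, given flying height h,
    pitch angle theta, longitudinal field angle phi and azimuth alpha.
    p0 = (x0, y0) is the optical-axis aiming point on the ground."""
    x0, y0 = p0
    x1, y1 = p1
    OT0 = h / math.tan(theta)                   # ground range to the aiming point
    OM = OT0 + (y1 - y0) * math.cos(alpha) + (x1 - x0) * math.sin(alpha)
    angle_OMP = math.atan(h / OM)               # depression angle seen from M
    return ROW / 2 + (angle_OMP - theta) * ROW / phi

row = forward_view_row(h=4000.0, theta=math.radians(30), phi=math.radians(10),
                       alpha=math.radians(180), ROW=512, p0=(0.0, 0.0), p1=(0.0, 0.0))
print(row)  # ≈ 256 (image-center row)
```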
FIG. 11 shows forward-looking reference images prepared at different flying heights, imaging distances and viewpoints. The perspective-transformation parameters of figs. 11 (a)-(b) correspond to a flying height of 4 km, imaging distances of 6 km and 10 km, and an azimuth of 180°; those of figs. 11 (c)-(d) and (e)-(f) correspond to a flying height of 5 km, imaging distances of 6 km and 10 km, and an azimuth of 0°. The coordinate positions of each landmark and the target in figs. 11 (a)-(f) follow from the coordinates in the downward-looking reference map and the parameters of the perspective transformation. The labels of the corresponding targets and landmark point coordinates are, in sequence:
T1′,LM1,1′-LM1,3′;T1″,LM1,1″-LM1,3″;
T2′,LC1,1′-LC1,3′;T2″,LC1,1″-LC1,3″;
T3′,LB1,1′-LB1,3′;T3″,LB1,1″-LB1,3″。
Error analysis based on matching identification is then carried out on the prepared reference map to verify its performance; for example, the landmark preparation and the size of the selected template are analyzed, and the process is improved to prepare a more accurate reference map.

Claims (9)

1. A method for preparing a reference image using multispectral image classification, which extracts a class of interest from a multispectral image as a landmark and prepares a reference image containing the landmark, for matching and for target positioning and identification, the method comprising the following steps:
selecting a ground object of interest from the multispectral image, extracting spectral-spatial texture features of the ground object of interest, and classifying the multispectral image according to the extracted spectral-spatial texture features;
optimizing the ground objects of interest according to the classification result, and selecting one or more of the optimized ground objects of interest as landmarks;
determining, according to the landmark material type, the infrared radiation amount of the landmark under the given atmospheric parameter conditions;
mapping the infrared radiation amount to gray values, and preparing, according to the gray values, a reference image containing the background gray level and the landmark shape and gray level.
2. The method according to claim 1, wherein the spectral-spatial texture features are obtained by extracting and normalizing, respectively, the spectral features and the spatial texture features of the class of interest.
3. The method according to claim 2, wherein, in extracting the spatial texture features of the class of interest, the rotation-invariant uniform local binary pattern value of the central pixel of each image data cube is first calculated on the multispectral image, the occurrence frequency of that pattern is obtained from the mapping between pattern values and pattern occurrence frequencies, and the rotation-invariant uniform histogram feature of the pixel, i.e. the spatial texture feature, is obtained from these values.
4. The method for preparing a reference map using multispectral image classification according to any one of claims 1 to 3, wherein the infrared radiation amount of the landmark is calculated by the following infrared radiation formula:
Ld = (Ldirect + Lthermal) × τpath + Lb
where Ld is the radiation amount at the detector surface during imaging, Ldirect is the solar radiation reflected by the object, Lthermal is the self-emission of the object, Lb is the path radiation of atmospheric transmission, and τpath is the average atmospheric transmittance.
5. The method for preparing a reference map using multispectral image classification according to any one of claims 1 to 4, wherein mapping the infrared radiation amount to gray values means that the maximum radiation amount and 0 are mapped to gray values 255 and 0, respectively, and intermediate values are mapped linearly, so that the radiation amounts are normalized to [0, 255].
6. The method for preparing a reference map using multispectral image classification according to any one of claims 1 to 5, wherein the prepared reference map is a downward-looking reference map, specifically: the landmarks are extracted after gray mapping and superposed on a background image of the original size to prepare the downward-looking reference image.
7. The method according to claim 6, wherein a forward-looking reference map is obtained by perspective transformation of the downward-looking reference map.
8. The method for preparing a reference map using multispectral image classification according to any one of claims 1 to 7, wherein, after the landmark is determined, the spatial position constraint relationship between the target and the landmark in the multispectral image is calculated for target matching.
9. The method for preparing a reference map using multispectral image classification according to any one of claims 1 to 8, wherein the multispectral image to be processed is preprocessed, before feature selection and classification, to obtain an input image suitable for feature extraction.
CN201210594861.8A 2012-12-31 2012-12-31 Method for preparing a typical ground object reference map using multispectral image classification Active CN103090868B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210594861.8A CN103090868B (en) 2012-12-31 2012-12-31 Method for preparing a typical ground object reference map using multispectral image classification

Publications (2)

Publication Number Publication Date
CN103090868A true CN103090868A (en) 2013-05-08
CN103090868B CN103090868B (en) 2016-01-20

Family

ID=48203734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210594861.8A Active CN103090868B (en) 2012-12-31 2012-12-31 Method for preparing a typical ground object reference map using multispectral image classification

Country Status (1)

Country Link
CN (1) CN103090868B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101241173A (en) * 2007-02-07 2008-08-13 南京理工大学 Infrared stereoscopic vision thermal image method and its system
CN101726297A (en) * 2009-12-18 2010-06-09 华中科技大学 Plane landmark selection and reference map preparation method for front-view navigation guidance

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU Jing et al.: "Research on Reference Map Generation for Forward-Looking Infrared Scene Matching Guidance", Microcomputer Information *
GUO Wenpu et al.: "Forward-Looking Reference Map Generation Method Based on 3D Visualization GIS", Computer Measurement & Control *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103743293B (en) * 2013-12-31 2015-07-22 华中科技大学 Reference diagram preparation method utilizing large-scale vegetation region forward-looking infrared guidance
CN103743293A (en) * 2013-12-31 2014-04-23 华中科技大学 Reference diagram preparation method utilizing large-scale vegetation region forward-looking infrared guidance
CN103760619A (en) * 2014-01-07 2014-04-30 中国神华能源股份有限公司 Method and device for monitoring coal field fire zone
CN103976742A (en) * 2014-05-13 2014-08-13 桂林电子科技大学 Measuring device and method aiming at local heat radiation quantity at tail end of finger of human body
CN108537770B (en) * 2018-01-11 2021-08-03 中国人民解放军战略支援部队航天工程大学 Method and system for inverting radiation characteristics of imaging target of non-calibration satellite
CN108537770A (en) * 2018-01-11 2018-09-14 中国人民解放军战略支援部队航天工程大学 It is a kind of without calibration satellite imagery infrared radiation characteristics inversion method and system
US11151222B1 (en) * 2018-05-09 2021-10-19 STWRD, Inc. Sky exposure and solar radiation for forms of life
CN110162812A (en) * 2018-05-24 2019-08-23 北京机电工程研究所 Target sample generation method based on infrared simulation
CN110162812B (en) * 2018-05-24 2022-11-11 北京机电工程研究所 Target sample generation method based on infrared simulation
CN109872295B (en) * 2019-02-20 2020-05-15 北京航空航天大学 Typical target material property extraction method and device based on spectral video data
CN109872295A (en) * 2019-02-20 2019-06-11 北京航空航天大学 Typical target material properties extracting method and device based on spectrum video data
CN111914869A (en) * 2019-05-08 2020-11-10 国际商业机器公司 Online utility-driven spatial reference data collection for classification
CN111914869B (en) * 2019-05-08 2024-09-24 国际商业机器公司 Online utility driven spatial reference data collection for classification
CN110765944A (en) * 2019-10-23 2020-02-07 长光禹辰信息技术与装备(青岛)有限公司 Target identification method, device, equipment and medium based on multi-source remote sensing image
CN112213264A (en) * 2020-09-22 2021-01-12 武汉工程大学 Airport reference map preparation method for scene matching guidance
CN112213264B (en) * 2020-09-22 2024-04-05 武汉工程大学 Airport reference map preparation method for scene matching guidance
CN113610839A (en) * 2021-08-26 2021-11-05 北京中星天视科技有限公司 Infrared target significance detection method and device, electronic equipment and medium

Also Published As

Publication number Publication date
CN103090868B (en) 2016-01-20

Similar Documents

Publication Publication Date Title
CN103090868B (en) Method for preparing a typical ground object reference map using multispectral image classification
Daneshmand et al. 3d scanning: A comprehensive survey
CN111553245A (en) Vegetation classification method based on machine learning algorithm and multi-source remote sensing data fusion
Zhang et al. Urban building density estimation from high-resolution imagery using multiple features and support vector regression
Miao et al. Object-based detection of Arctic sea ice and melt ponds using high spatial resolution aerial photographs
Jiao et al. Evaluation of four sky view factor algorithms using digital surface and elevation model data
Kassner et al. Analysis of the solar potential of roofs by using official lidar data
Wang et al. Automatic feature-based geometric fusion of multiview TomoSAR point clouds in urban area
US10432915B2 (en) Systems, methods, and devices for generating three-dimensional models
Wood et al. Measurement of three dimensional volcanic plume properties using multiple ground based infrared cameras
Zietara Creating Digital Elevation Model (DEM) based on ground points extracted from classified aerial images obtained from Unmanned Aerial Vehicle (UAV)
Matei et al. Image to lidar matching for geotagging in urban environments
Douté et al. Small-scale topographical characterization of the Martian surface with in-orbit imagery
Wang et al. Unmanned aerial vehicle and structure‐from‐motion photogrammetry for three‐dimensional documentation and digital rubbing of the Zuo River Valley rock paintings
Jurado et al. An efficient method for acquisition of spectral BRDFs in real-world scenarios
Bi et al. Multi-View Analysis of High-Resolution Geomorphic Features in Complex Mountains Based on UAV–LiDAR and SfM–MVS: A Case Study of the Northern Pit Rim Structure of the Mountains of Lufeng, China
Remetean et al. Philae locating and science support by robotic vision techniques
CN110580468B (en) Single wood structure parameter extraction method based on image matching point cloud
Shan et al. Feasibility of Accurate Point Cloud Model Reconstruction for Earthquake‐Damaged Structures Using UAV‐Based Photogrammetry
Bai et al. Application of unmanned aerial vehicle multi-vision image 3D modeling in geological disasters
Chen et al. Development of panoramic infrared images for surface temperature analysis of buildings and infrastructures
Zhu A pipeline of 3D scene reconstruction from point clouds
Han et al. Mapping road surface features using single-camera images acquired by a mobile mapping system
Liu et al. Linear target change detection from a single image based on three‐dimensional real scene
Hernández-López et al. Towards Unattended Maintenance of Photovoltaic Plants Using Drone Photogrammetry. Drones 2023, 7, 129

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant