CN117218003A - Quick splicing method based on aerial hyperspectral image - Google Patents

Quick splicing method based on aerial hyperspectral image

Info

Publication number
CN117218003A
CN117218003A (application CN202311238630.8A)
Authority
CN
China
Prior art keywords
image
hyperspectral
images
pseudo
atmospheric
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311238630.8A
Other languages
Chinese (zh)
Inventor
周健
聂聪
任海鹏
焦迎杰
王少奇
刘亮亮
谢泽阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Institute of Modern Control Technology
Original Assignee
Xian Institute of Modern Control Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Institute of Modern Control Technology filed Critical Xian Institute of Modern Control Technology
Priority to CN202311238630.8A priority Critical patent/CN117218003A/en
Publication of CN117218003A publication Critical patent/CN117218003A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of artificial intelligence and particularly relates to a rapid stitching method based on aerial hyperspectral images, which comprises the following steps: step 1: an image preprocessing step; step 2: a band selection step; step 3: a pseudo-color image stitching step; step 4: a hyperspectral image stitching step. The method stitches hyperspectral images acquired by an airborne spectrometer and can exploit the advantages of hyperspectral imagery across a variety of real scenes. The technical scheme defines the airborne direction and the overlapping area, which can be used as prior knowledge to reduce stitching time and computation. The scheme extracts the 3 most informative bands from the hyperspectral image to form a pseudo-color image, stitches these 3 bands, and then stitches the remaining bands according to the pseudo-color result. Stitching quality is guaranteed, the curse of dimensionality is avoided, and time and computation costs are saved.

Description

Quick splicing method based on aerial hyperspectral image
Technical Field
The invention belongs to the technical field of artificial intelligence, and particularly relates to a rapid splicing method based on aerial hyperspectral images.
Background
Hyperspectral images have high spectral resolution and rich spatial information, and are therefore widely applied in aviation, agriculture, the defense industry and other fields. An imaging spectrometer acquires hyperspectral images from different space platforms, but the small field of view of hyperspectral equipment means that the images must be stitched to cover larger scenes. For this reason, research on hyperspectral image stitching is needed; the existing image stitching process comprises the following 2 steps:
1. image registration
Image registration is the core of the stitching process: the overlapping region is computed, and then the overlapping regions of the images are aligned in space by translation, rotation, scaling, projection and the like. Methods that derive the overlap region by searching a feature region include MAD, SSD and NCC. Methods that obtain the overlap region by feature point matching include Harris corner detection, SIFT, SURF and the like.
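As an illustrative sketch (not part of the patent), the search-based overlap estimation mentioned above can be approximated with normalized cross-correlation (NCC) in Python/NumPy; the search range `max_shift` and the single-axis (horizontal) search are simplifying assumptions:

```python
import numpy as np

def ncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized grayscale patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_horizontal_overlap(img_a: np.ndarray, img_b: np.ndarray, max_shift: int = 200) -> int:
    """Search the overlap width (in pixels) that maximizes NCC between the shared strips."""
    h = min(img_a.shape[0], img_b.shape[0])
    best_w, best_score = 0, -1.0
    for w in range(1, max_shift):
        strip_a = img_a[:h, -w:]   # right strip of the left image
        strip_b = img_b[:h, :w]    # left strip of the right image
        score = ncc(strip_a.astype(np.float64), strip_b.astype(np.float64))
        if score > best_score:
            best_w, best_score = w, score
    return best_w
```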
2. Image fusion
Because directly fusing the acquired images leaves the stitching area uncoordinated due to brightness differences, image fusion must be applied. Existing algorithms can be divided into the pixel level and the feature-region level. Pixel-level fusion methods include weighted averaging and direct averaging of pixel gray levels. Feature-region-level fusion methods include narrowing the overlap region, the Fourier-transform threshold method, the wavelet-transform threshold method, the curvelet-transform threshold method and the like.
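As an illustrative sketch of pixel-level fusion (again not from the patent), a weighted average over an aligned overlap strip can be written as follows; the linear weight ramp is an assumption:

```python
import numpy as np

def weighted_average_blend(strip_left: np.ndarray, strip_right: np.ndarray) -> np.ndarray:
    """Blend two aligned overlap strips with a weight that ramps linearly across the overlap."""
    h, w = strip_left.shape
    alpha = np.linspace(1.0, 0.0, w)[None, :]   # weight of the left image: 1 at its side, 0 at the far side
    return alpha * strip_left.astype(np.float64) + (1.0 - alpha) * strip_right.astype(np.float64)
```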
Disclosure of Invention
First, the technical problem to be solved
The technical problem to be solved by the invention is as follows: existing methods cannot stitch hyperspectral images. The invention therefore provides a rapid stitching method based on aerial hyperspectral images, which stitches hyperspectral images acquired by an airborne spectrometer and can exploit the advantages of hyperspectral imagery across a variety of real scenes.
(II) technical scheme
In order to solve the technical problems, the invention provides a rapid splicing method based on aerial hyperspectral images, which comprises the following steps:
step 1: an image preprocessing step;
step 2: a band selection step;
step 3: a pseudo-color image stitching step;
step 4: and (3) a hyperspectral image stitching step.
In step 1, hyperspectral images acquired in the field by an airborne spectrometer are processed; because the collected hyperspectral images are disturbed by the atmosphere, light intensity and ground-object reflection, atmospheric correction and relative radiometric correction are applied to eliminate the influence of differences in the atmosphere and the environment.
In step 1, aerial hyperspectral images are disturbed by atmospheric molecules and ground-object reflection, which affects the subsequent stitching; fast atmospheric correction is therefore adopted to handle this interference; the received total radiance L is obtained from the linear response of the airborne spectrometer sensor;
L=iDN+w (1)
where i is the gain coefficient, DN is the output brightness value, and w is the drift coefficient; the atmospheric radiance L_a can be obtained from the Lambertian model formula;
where A is the atmospheric transmittance, B is the simulated atmospheric coefficient, e is the equilibrium scattering coefficient, P is the ground-object reflectance, and S is the atmospheric reflectance; the atmospheric radiance L_a is corrected by the PSF function to obtain the atmospherically corrected radiance L_z;
X=Fy (5)
where X is the atmospherically corrected hyperspectral image and y is the original hyperspectral image; the scale factor F, i.e. the true reflectance, is obtained from the total radiance L and the corrected radiance L_z, and a linear transformation of the hyperspectral image with the true reflectance F completes the fast atmospheric correction.
In step 1, to compensate for the influence of different shooting environments on the hyperspectral images, relative radiometric correction is performed using whiteboard data acquired in the field;
Y=kX (6)
where Y is the hyperspectral image after relative radiometric correction, k is the gain, and X is the atmospherically corrected hyperspectral image; b_1 is one set of whiteboard data and b_2 is another set of whiteboard data; the whiteboard is used to acquire purely reflective images, and the gain k is adjusted by comparing the whiteboard data of the images to be stitched, which corrects the differences between corresponding bands of the images to be stitched;
Interference and differences between sample images acquired at different times are eliminated by atmospheric correction and relative radiometric correction.
In step 2, band selection is performed;
to address the current inability to stitch hyperspectral images, step 2 proposes extracting a pseudo-color image formed from selected bands of the hyperspectral images to be stitched, stitching it first, and then stitching the remaining bands one by one following the same process;
the similarity between bands is determined by calculating the relative entropy of each pair of bands, and the 3 bands with the lowest similarity, i.e. the most representative ones, are selected;
where Y_i is the i-th band of the preprocessed hyperspectral image, Y_j is the j-th band of the preprocessed hyperspectral image, p(Y_i) and p(Y_j) denote the information content of the i-th and j-th bands respectively, and D_kl(Y_i) is the relative entropy of the i-th band of the preprocessed hyperspectral image;
as can be seen from formula (8), if the information of the two bands is identical, the relative entropy is 0; the greater the difference between the two bands, the larger the relative entropy; a larger relative entropy means a larger difference in the information contained and a more representative band; the pseudo-color image is therefore composed of the 3 bands whose sum of relative entropies is largest, which is the most representative and carries the richest image information;
by calculating the relative entropy of the preprocessed hyperspectral image, 3 bands are selected to form the pseudo-color image; as the relative entropy increases, the image carries richer information and is more representative.
In step 3, pseudo-color image stitching is performed;
the selected bands form a pseudo-color image; the overlapping area is determined based on prior knowledge, pseudo-color image registration is performed on the overlapping area with the SIFT algorithm, and pseudo-color image fusion is performed based on a Fourier-transform threshold method;
the known position information of the airborne camera at shooting time is used as prior knowledge, so the overlapping area can be obtained quickly; information about the imaged area can be obtained from the intrinsic parameters of the hyperspectral camera;
where θ is the diagonal field angle of the camera at shooting time, H is the height of the camera above the ground, f is the camera resolution, a is the length of the imaged area, and b is the width of the imaged area;
the position of the overlapping area C can be determined from the position of the airborne camera at shooting time and the obtained image information; the position of the overlapping area is obtained from the shooting center (x_0, y_0) of one image, the shooting center (x_1, y_1) of the adjacent image, and the length and width of the images;
where C_1, C_2, C_3 and C_4 are the vertex coordinates of the overlapping area C; the overlapping area can be determined from these vertex coordinates.
In step 3, image registration is performed on the determined overlapping area using the SIFT algorithm: Gaussian convolution is first applied to the determined area, and pixels with similar rates of change are selected as registration keypoints;
where x, y are the coordinates of a pixel; m and n are the coordinates of the pixel after the Gaussian transformation; b is the scale-change factor, with a default of 1.6 times the previous scale at each step; G is the Gaussian convolution;
the selected keypoints are registered along the airborne angle according to the previously determined airborne direction.
In step 3, after registration, Fourier-transform thresholding is applied to the registered area to complete image fusion;
F(u,v) = ∫∫ f(x,y) e^(-iwxy) dx dy (13)
where f(x,y) is the original image, F(u,v) is the Fourier-transformed image, and iw is the frequency variable; the low-frequency and high-frequency components can be filtered out by the Fourier transform, keeping only the mid-frequency components, i.e. the pixels whose color differences are too large are filtered out; the processed Fourier image is then restored by the inverse transform, completing the pseudo-color image stitching.
In step 4, hyperspectral image stitching is performed: all remaining bands are stitched one by one following the pseudo-color image stitching process.
(III) beneficial effects
Compared with the prior art, the technical scheme of the invention has the following advantages:
(1) The technical scheme of the invention defines the airborne direction and the overlapping area, which can be used as prior knowledge, reducing the stitching time and the amount of computation.
(2) The technical scheme extracts the 3 most informative bands from the hyperspectral image to form a pseudo-color image, stitches these 3 bands, and then stitches the remaining bands according to the pseudo-color result. Stitching quality is guaranteed, the curse of dimensionality is avoided, and time and computation costs are saved.
Drawings
Fig. 1 is a schematic diagram of a hyperspectral image.
Fig. 2 is an overall flowchart.
Fig. 3 is a preprocessing flow chart.
Fig. 4 is a schematic view of pretreatment visualization.
Fig. 5 is a schematic diagram of a pseudo color image.
Fig. 6 is a pseudo color image stitching flow chart.
Fig. 7 is a schematic diagram of an overlapping area.
Fig. 8 is a schematic view of region image registration.
Fig. 9 is a schematic diagram of pseudo color image stitching.
Detailed Description
To make the purpose, content and advantages of the present invention clearer, embodiments of the present invention are described in detail below with reference to the drawings and examples.
To address the problem that existing methods cannot stitch hyperspectral images, the applicant, inspired by the stitching of ordinary RGB (red, green and blue) images, provides a rapid stitching method based on aerial hyperspectral images. Aimed at the hyperspectral images acquired by an airborne spectrometer shown in fig. 1, the method realizes hyperspectral image stitching and can exploit the advantages of hyperspectral imagery across a variety of real scenes.
In order to solve the above technical problems, the present invention provides a fast stitching method based on aerial hyperspectral images, as shown in fig. 2, the method includes:
step 1: an image preprocessing step;
step 2: a band selection step;
step 3: a pseudo-color image stitching step;
step 4: and (3) a hyperspectral image stitching step.
In step 1, as shown in fig. 3, hyperspectral images acquired in the field by an airborne spectrometer are processed; because the collected hyperspectral images are disturbed by the atmosphere, light intensity and ground-object reflection, atmospheric correction and relative radiometric correction are applied to eliminate the influence of differences in the atmosphere and the environment.
In step 1, aerial hyperspectral images are disturbed by atmospheric molecules and ground-object reflection, which affects the subsequent stitching; fast atmospheric correction (FLAASH) is therefore adopted to handle this interference; the received total radiance L is obtained from the linear response of the airborne spectrometer sensor;
L=iDN+w (1)
where i is the gain coefficient, DN is the output brightness value, and w is the drift coefficient; the atmospheric radiance L_a can be obtained from the Lambertian model formula;
where A is the atmospheric transmittance, B is the simulated atmospheric coefficient, e is the equilibrium scattering coefficient, P is the ground-object reflectance, and S is the atmospheric reflectance; the atmospheric radiance L_a is corrected by the PSF function to obtain the atmospherically corrected radiance L_z;
X=Fy (5)
where X is the atmospherically corrected hyperspectral image and y is the original hyperspectral image; the scale factor F, i.e. the true reflectance, is obtained from the total radiance L and the corrected radiance L_z, and a linear transformation of the hyperspectral image with the true reflectance F completes the fast atmospheric correction (FLAASH) processing.
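A minimal sketch of how equations (1) and (5) might be applied per band is given below; it is an illustration rather than the patented implementation, and the band-wise estimation of the scale factor F from mean radiances is an assumption not specified by the patent:

```python
import numpy as np

def dn_to_radiance(dn: np.ndarray, gain: float, drift: float) -> np.ndarray:
    """Eq. (1): convert sensor digital numbers to total radiance L = i*DN + w."""
    return gain * dn + drift

def fast_atmospheric_correction(cube: np.ndarray, radiance_total: np.ndarray,
                                radiance_corrected: np.ndarray) -> np.ndarray:
    """Eq. (5)-style linear correction X = F*y, with F estimated band-wise as L_z / L."""
    eps = 1e-12
    # One scale factor per band (the cube is assumed to be H x W x bands).
    f = radiance_corrected.mean(axis=(0, 1)) / (radiance_total.mean(axis=(0, 1)) + eps)
    return cube * f[None, None, :]
```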
In step 1, to compensate for the influence of different shooting environments on the hyperspectral images, relative radiometric correction is performed using whiteboard data acquired in the field;
Y=kX (6)
where Y is the hyperspectral image after relative radiometric correction, k is the gain, and X is the atmospherically corrected hyperspectral image; b_1 is one set of whiteboard data and b_2 is another set of whiteboard data; the whiteboard is used to acquire purely reflective images, and the gain k is adjusted by comparing the whiteboard data of the images to be stitched, which corrects the differences between corresponding bands of the images to be stitched;
Interference and differences between sample images acquired at different times are eliminated by atmospheric correction and relative radiometric correction. As shown in fig. 4, this ensures that the hyperspectral images to be stitched correspond in the spectral dimension.
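The whiteboard gain adjustment of equation (6) could be implemented as in the following sketch; the band-wise ratio k = b_1 / b_2 is an assumed concrete form of the comparison described above:

```python
import numpy as np

def relative_radiometric_correction(cube: np.ndarray,
                                    whiteboard_ref: np.ndarray,
                                    whiteboard_other: np.ndarray) -> np.ndarray:
    """Eq. (6): Y = k*X, with the band-wise gain k = b_1 / b_2 taken from whiteboard data."""
    eps = 1e-12
    k = whiteboard_ref / (whiteboard_other + eps)      # one gain per band
    return cube * k[None, None, :]                     # cube assumed to be H x W x bands
```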
In step 2, band selection is performed;
to address the current inability to stitch hyperspectral images, step 2 proposes extracting a pseudo-color image formed from selected bands of the hyperspectral images to be stitched, stitching it first, and then stitching the remaining bands one by one following the same process;
the similarity between bands is determined by calculating the relative entropy (Kullback-Leibler divergence, KL divergence) of each pair of bands, and the 3 bands with the lowest similarity, i.e. the most representative ones, are selected;
where Y_i is the i-th band of the preprocessed hyperspectral image, Y_j is the j-th band of the preprocessed hyperspectral image, p(Y_i) and p(Y_j) denote the information content of the i-th and j-th bands respectively, and D_kl(Y_i) is the relative entropy of the i-th band of the preprocessed hyperspectral image;
as can be seen from formula (8), if the information of the two bands is identical, the relative entropy is 0; the greater the difference between the two bands, the larger the relative entropy; a larger relative entropy means a larger difference in the information contained and a more representative band; the pseudo-color image is therefore composed of the 3 bands whose sum of relative entropies is largest, which is the most representative and carries the richest image information;
by calculating the relative entropy of the preprocessed hyperspectral image, 3 bands are selected to form the pseudo-color image; the effect of composing pseudo-color images from different bands is shown in fig. 5: the left image of fig. 5 is the pseudo-color image formed when the relative entropy is smallest, the middle image is the pseudo-color image formed at the intermediate relative-entropy value, and the right image is the pseudo-color image formed when the relative entropy is largest. It can be seen that as the relative entropy increases, the image carries richer information and is more representative.
In step 3, pseudo-color image stitching is performed;
the selected bands form a pseudo-color image; the overlapping area is determined based on prior knowledge, pseudo-color image registration is performed on the overlapping area with the SIFT algorithm, and pseudo-color image fusion is then performed based on a Fourier-transform threshold method; the specific technical scheme is shown in fig. 6;
the known position information of the airborne camera at shooting time is used as prior knowledge, so the overlapping area can be obtained quickly; information about the imaged area can be obtained from the intrinsic parameters of the hyperspectral camera;
where θ is the diagonal field angle of the camera at shooting time, H is the height of the camera above the ground, f is the camera resolution, a is the length of the imaged area, and b is the width of the imaged area;
the position of the overlapping area C can be determined from the position of the airborne camera at shooting time and the obtained image information; as shown in fig. 7, the position of the overlapping area is obtained from the shooting center (x_0, y_0) of one image, the shooting center (x_1, y_1) of the adjacent image, and the length and width of the images;
where C_1, C_2, C_3 and C_4 are the vertex coordinates of the overlapping area C, as shown in fig. 7; the overlapping area can be determined from these vertex coordinates.
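The prior-knowledge overlap estimate can be sketched as below; the axis-aligned footprint model and the `Footprint` structure are simplifying assumptions of this illustration, with the footprint size a x b taken as already computed from the camera parameters:

```python
from dataclasses import dataclass

@dataclass
class Footprint:
    cx: float       # ground x-coordinate of the shooting center
    cy: float       # ground y-coordinate of the shooting center
    length: float   # a: ground length covered by one frame
    width: float    # b: ground width covered by one frame

def overlap_rectangle(f0: Footprint, f1: Footprint):
    """Axis-aligned overlap of two frame footprints; returns None if the frames do not overlap."""
    x_min = max(f0.cx - f0.length / 2, f1.cx - f1.length / 2)
    x_max = min(f0.cx + f0.length / 2, f1.cx + f1.length / 2)
    y_min = max(f0.cy - f0.width / 2, f1.cy - f1.width / 2)
    y_max = min(f0.cy + f0.width / 2, f1.cy + f1.width / 2)
    if x_min >= x_max or y_min >= y_max:
        return None
    # Vertices C1..C4 of the overlapping area C.
    return [(x_min, y_min), (x_max, y_min), (x_max, y_max), (x_min, y_max)]
```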
In step 3, image registration is performed on the determined overlapping area using the SIFT algorithm: Gaussian convolution is first applied to the determined area, and pixels with similar rates of change are selected as registration keypoints;
where x, y are the coordinates of a pixel; m and n are the coordinates of the pixel after the Gaussian transformation; b is the scale-change factor, with a default of 1.6 times the previous scale at each step; G is the Gaussian convolution;
according to the previously determined airborne direction, the selected keypoints are registered along the airborne angle, as shown in fig. 8.
In step 3, after registration, Fourier-transform thresholding is applied to the registered area to complete image fusion;
F(u,v) = ∫∫ f(x,y) e^(-iwxy) dx dy (13)
where f(x,y) is the original image, F(u,v) is the Fourier-transformed image, and iw is the frequency variable; the low-frequency and high-frequency components can be filtered out by the Fourier transform, keeping only the mid-frequency components, i.e. the pixels whose color differences are too large are filtered out; the processed Fourier image is then restored by the inverse transform, as shown in fig. 9, and the pseudo-color image stitching is completed.
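The Fourier band-pass fusion step can be sketched with NumPy as follows; the cut-off radii `r_low` and `r_high` are assumed to be chosen empirically:

```python
import numpy as np

def fourier_bandpass(region: np.ndarray, r_low: float = 5.0, r_high: float = 60.0) -> np.ndarray:
    """Keep only mid-frequency components of a grayscale region via FFT, then invert."""
    spectrum = np.fft.fftshift(np.fft.fft2(region.astype(np.float64)))
    h, w = region.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = (radius >= r_low) & (radius <= r_high)      # suppress lowest and highest frequencies
    filtered = spectrum * mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))
```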
In step 4, hyperspectral image stitching is performed: all remaining bands are stitched one by one following the pseudo-color image stitching process.
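Step 4 can be sketched as reusing the homography estimated on the pseudo-color image for every remaining band, so the expensive registration is run only once; the canvas layout and the simple overwrite in the overlap are assumptions of this illustration:

```python
import cv2
import numpy as np

def stitch_all_bands(cube_a: np.ndarray, cube_b: np.ndarray,
                     homography: np.ndarray, out_size: tuple[int, int]) -> np.ndarray:
    """Warp every band of cube_b with the homography found on the pseudo-color image
    and place it on a shared canvas with cube_a (simple overwrite in the overlap)."""
    out_w, out_h = out_size
    n_bands = cube_a.shape[2]
    mosaic = np.zeros((out_h, out_w, n_bands), dtype=np.float32)
    mosaic[:cube_a.shape[0], :cube_a.shape[1], :] = cube_a
    for b in range(n_bands):
        warped = cv2.warpPerspective(cube_b[:, :, b].astype(np.float32),
                                     homography, (out_w, out_h))
        mosaic[:, :, b] = np.where(warped > 0, warped, mosaic[:, :, b])
    return mosaic
```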
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and variations could be made by those skilled in the art without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as being within the scope of the invention.

Claims (10)

1. A quick splicing method based on aerial hyperspectral images, characterized by comprising the following steps:
step 1: an image preprocessing step;
step 2: a band selection step;
step 3: a pseudo-color image stitching step;
step 4: and (3) a hyperspectral image stitching step.
2. The rapid splicing method based on aerial hyperspectral images as claimed in claim 1, wherein in step 1, hyperspectral images acquired in the field by an airborne spectrometer are processed; because the collected hyperspectral images are disturbed by the atmosphere, light intensity and ground-object reflection, atmospheric correction and relative radiometric correction are applied to eliminate the influence of differences in the atmosphere and the environment.
3. The rapid splicing method based on aerial hyperspectral images as claimed in claim 2, wherein in step 1, aerial hyperspectral images are disturbed by atmospheric molecules and ground-object reflection, which affects the subsequent stitching; fast atmospheric correction is therefore adopted to handle this interference; the received total radiance L is obtained from the linear response of the airborne spectrometer sensor;
L=iDN+w (1)
where i is the gain coefficient, DN is the output brightness value, and w is the drift coefficient; the atmospheric radiance L_a can be obtained from the Lambertian model formula;
where A is the atmospheric transmittance, B is the simulated atmospheric coefficient, e is the equilibrium scattering coefficient, P is the ground-object reflectance, and S is the atmospheric reflectance; the atmospheric radiance L_a is corrected by the PSF function to obtain the atmospherically corrected radiance L_z;
X=Fy (5)
where X is the atmospherically corrected hyperspectral image and y is the original hyperspectral image; the scale factor F, i.e. the true reflectance, is obtained from the total radiance L and the corrected radiance L_z, and a linear transformation of the hyperspectral image with the true reflectance F completes the fast atmospheric correction.
4. The rapid splicing method based on aerial hyperspectral images as claimed in claim 3, wherein in step 1, to compensate for the influence of different shooting environments on the hyperspectral images, relative radiometric correction is performed using whiteboard data acquired in the field;
Y=kX (6)
where Y is the hyperspectral image after relative radiometric correction, k is the gain, and X is the atmospherically corrected hyperspectral image; b_1 is one set of whiteboard data and b_2 is another set of whiteboard data; the whiteboard is used to acquire purely reflective images, and the gain k is adjusted by comparing the whiteboard data of the images to be stitched, which corrects the differences between corresponding bands of the images to be stitched;
Interference and differences between sample images acquired at different times are eliminated by atmospheric correction and relative radiometric correction.
5. The rapid splicing method based on aerial hyperspectral images as claimed in claim 4, wherein in step 2, band selection is performed;
to address the current inability to stitch hyperspectral images, step 2 proposes extracting a pseudo-color image formed from selected bands of the hyperspectral images to be stitched, stitching it first, and then stitching the remaining bands one by one following the same process;
the similarity between bands is determined by calculating the relative entropy of each pair of bands, and the 3 bands with the lowest similarity, i.e. the most representative ones, are selected;
where Y_i is the i-th band of the preprocessed hyperspectral image, Y_j is the j-th band of the preprocessed hyperspectral image, p(Y_i) and p(Y_j) denote the information content of the i-th and j-th bands respectively, and D_kl(Y_i) is the relative entropy of the i-th band of the preprocessed hyperspectral image;
as can be seen from formula (8), if the information of the two bands is identical, the relative entropy is 0; the greater the difference between the two bands, the larger the relative entropy; a larger relative entropy means a larger difference in the information contained and a more representative band; the pseudo-color image is therefore composed of the 3 bands whose sum of relative entropies is largest, which is the most representative and carries the richest image information;
by calculating the relative entropy of the preprocessed hyperspectral image, 3 bands are selected to form the pseudo-color image; as the relative entropy increases, the image carries richer information and is more representative.
6. The rapid stitching method based on aerial hyperspectral images as claimed in claim 5, wherein in step 3, pseudo-color image stitching is performed;
the selected bands form a pseudo-color image; the overlapping area is determined based on prior knowledge, pseudo-color image registration is performed on the overlapping area with the SIFT algorithm, and pseudo-color image fusion is performed based on a Fourier-transform threshold method;
the known position information of the airborne camera at shooting time is used as prior knowledge, so the overlapping area can be obtained quickly; information about the imaged area can be obtained from the intrinsic parameters of the hyperspectral camera;
where θ is the diagonal field angle of the camera at shooting time, H is the height of the camera above the ground, f is the camera resolution, a is the length of the imaged area, and b is the width of the imaged area;
the position of the overlapping area C can be determined from the position of the airborne camera at shooting time and the obtained image information; the position of the overlapping area is obtained from the shooting center (x_0, y_0) of one image, the shooting center (x_1, y_1) of the adjacent image, and the length and width of the images;
where C_1, C_2, C_3 and C_4 are the vertex coordinates of the overlapping area C; the overlapping area can be determined from these vertex coordinates.
7. The rapid splicing method based on aerial hyperspectral images according to claim 6, wherein in step 3, image registration is performed on the determined overlapping area using the SIFT algorithm: Gaussian convolution is first applied to the determined area, and pixels with similar rates of change are selected as registration keypoints;
where x, y are the coordinates of a pixel; m and n are the coordinates of the pixel after the Gaussian transformation; b is the scale-change factor, with a default of 1.6 times the previous scale at each step; G is the Gaussian convolution;
the selected keypoints are registered along the airborne angle according to the previously determined airborne direction.
8. The method for rapid stitching based on aerial hyperspectral images as recited in claim 7, wherein in step 3, after registration, Fourier-transform thresholding is applied to the registered area to complete image fusion;
F(u,v) = ∫∫ f(x,y) e^(-iwxy) dx dy (13)
where f(x,y) is the original image, F(u,v) is the Fourier-transformed image, and iw is the frequency variable; the low-frequency and high-frequency components can be filtered out by the Fourier transform, keeping only the mid-frequency components, i.e. the pixels whose color differences are too large are filtered out; the processed Fourier image is then restored by the inverse transform, completing the pseudo-color image stitching.
9. The rapid stitching method based on aerial hyperspectral images as claimed in claim 8, wherein in step 4, hyperspectral image stitching is performed: all remaining bands are stitched one by one following the pseudo-color image stitching process.
10. The rapid splicing method based on aerial hyperspectral images as claimed in claim 9, wherein the method stitches hyperspectral images acquired by an airborne spectrometer and can exploit the advantages of hyperspectral imagery across a variety of real scenes.
CN202311238630.8A 2023-09-25 2023-09-25 Quick splicing method based on aerial hyperspectral image Pending CN117218003A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311238630.8A CN117218003A (en) 2023-09-25 2023-09-25 Quick splicing method based on aerial hyperspectral image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311238630.8A CN117218003A (en) 2023-09-25 2023-09-25 Quick splicing method based on aerial hyperspectral image

Publications (1)

Publication Number Publication Date
CN117218003A true CN117218003A (en) 2023-12-12

Family

ID=89036845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311238630.8A Pending CN117218003A (en) 2023-09-25 2023-09-25 Quick splicing method based on aerial hyperspectral image

Country Status (1)

Country Link
CN (1) CN117218003A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination