CN113658078A - SPOT-6 satellite image RFM orthorectification method based on FPGA hardware - Google Patents


Info

Publication number
CN113658078A
CN113658078A
Authority
CN
China
Prior art keywords
image
coordinates
pixel
rfm
ortho
Prior art date
Legal status
Pending
Application number
CN202110958456.9A
Other languages
Chinese (zh)
Inventor
张荣庭 (Zhang Rongting)
周国清 (Zhou Guoqing)
张广运 (Zhang Guangyun)
朱强 (Zhu Qiang)
Current Assignee
Nanjing Tech University
Original Assignee
Nanjing Tech University
Priority date
Filing date
Publication date
Application filed by Nanjing Tech University
Priority to CN202110958456.9A
Publication of CN113658078A

Classifications

    • G06T5/80
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing

Abstract

A SPOT-6 satellite image RFM orthorectification method based on FPGA hardware comprises the following steps: acquiring the geodetic coordinates of the pixels of the ortho image; performing coordinate conversion with the RFM (Rational Function Model), i.e. converting the geodetic coordinates of ground points into pixel row and column coordinates of the original image; performing gray-level interpolation of the ortho image; and repeating this process until the gray values of all pixels of the ortho image are obtained, yielding the orthorectified image. The disclosed method takes the FPGA as the hardware acceleration platform and the Verilog hardware description language as the design language, realizes RFM orthorectification of remote sensing images under limited hardware resources, and meets the timeliness, portability and miniaturization requirements of satellite image orthorectification in the satellite remote sensing field.

Description

SPOT-6 satellite image RFM orthorectification method based on FPGA hardware
Technical Field
The invention relates to a satellite remote sensing image processing method, and in particular to a SPOT-6 satellite image RFM orthorectification method based on FPGA hardware.
Background
With the continuous development of aerospace and sensing technologies, the resolution of satellite remote sensing images keeps improving, the number of spectral bands keeps increasing, and the revisit period keeps shortening; meanwhile, the volume of acquired image data grows at a remarkable speed. However, for an ordinary user to obtain a satellite image thematic product of interest, the following process is required: the satellite sensor acquires the data, the satellite downlinks the data to a data receiving center, ground professionals then process the data according to user requirements, and finally the processing results are distributed to users. This process is long: a user may wait at least a month for an image thematic product, so timeliness cannot be guaranteed and a large amount of image data cannot be used effectively. Research shows that on-orbit data processing is one of the key technologies for meeting users' real-time requirements on data processing and information extraction, and real-time on-board processing plays an important role in realizing an intelligent Earth-observation satellite system. Real-time on-board processing greatly reduces the data volume of the resulting thematic products, relieving the downlink pressure, and ordinary users can use the thematic products directly. Compared with central processing units (CPUs) and graphics processing units (GPUs), FPGAs have smaller size, lighter weight and lower power consumption. The FPGA not only remedies the inflexibility of customized circuits but also overcomes the limited gate count of earlier programmable devices. The FPGA is therefore one of the preferred platforms for real-time on-board data processing.
One prerequisite for an ordinary user to directly use an image thematic product processed on board in real time is that the satellite image is orthorectified before the thematic product is made. This is because terrain relief, camera tilt and Earth curvature introduce projection errors and geometric distortions into satellite images at acquisition time. An orthorectified satellite image not only carries the geometric properties of a map, but also expresses information more intuitively and is easier to update than an ordinary map. Owing to its rich, highly readable information, the ortho image has been widely applied in many fields, such as emergency disaster relief, military reconnaissance, national economic construction, "digital city" construction, and land verification.
Over the last several decades, researchers have proposed numerous orthorectification models. Because of its simple mathematical form, its independence from sensor parameters on the user side, and its ability to reach a correction accuracy consistent with that of a rigorous geometric model, the RFM (Rational Function Model) has been widely used in the orthorectification of satellite remote sensing images. However, most existing RFM-based orthorectification methods are implemented on ground platforms, which cannot satisfy application scenarios with high timeliness requirements (such as disaster prevention and mitigation, or dynamic target tracking), mainly because: (1) when RFM-based orthorectification is performed on a ground platform, the satellite image must first be downlinked to a ground receiving station before processing, which is time-consuming; (2) ground platforms usually process data serially, resulting in slow processing speeds.
Therefore, in order to orthorectify satellite remote sensing images in real time, it is necessary on the one hand to study an FPGA-based satellite image RFM orthorectification method, and on the other hand to solidify the specific data processing and application model onto FPGA hardware.
Disclosure of Invention
The invention aims to solve the technical problem of providing a SPOT-6 satellite image RFM orthorectification method based on FPGA hardware that can realize RFM orthorectification of remote sensing images under limited hardware resources.
The technical scheme adopted by the invention is as follows: a SPOT-6 satellite image RFM orthorectification method based on FPGA hardware comprises the following steps:
1) acquiring geodetic coordinates of pixels of the orthoimage;
2) performing coordinate conversion with the RFM (Rational Function Model), i.e. converting the geodetic coordinates of ground points into pixel row and column coordinates of the original image;
3) performing gray level interpolation of the orthoimage;
4) repeating steps 1) to 3) until the gray values of all pixels of the ortho image are obtained, yielding the orthorectified image.
The disclosed SPOT-6 satellite image RFM orthorectification method based on FPGA hardware takes the FPGA as the hardware acceleration platform and the Verilog hardware description language as the design language, realizes RFM orthorectification of remote sensing images under limited hardware resources, and meets the timeliness, portability and miniaturization requirements of satellite image orthorectification in the satellite remote sensing field.
Drawings
FIG. 1 is an FPGA hardware architecture of SPOT-6 satellite image RFM orthorectification method based on FPGA hardware;
FIG. 2 is an FPGA hardware architecture of the GETCORD module of FIG. 1;
FIG. 3 is an FPGA hardware architecture of the GetLonLat module of FIG. 2;
FIG. 4 is an FPGA hardware architecture of the InterpolateHei module of FIG. 2;
FIG. 5 is an FPGA hardware architecture of the ORTHOM module of FIG. 1;
FIG. 6 is an FPGA hardware architecture of the GetTerm module of FIG. 5;
FIG. 7 is an FPGA hardware architecture of the CoeffMTermPE module of FIG. 5;
FIG. 8 is an FPGA hardware architecture of the GetRowClm module of FIG. 5;
FIG. 9 is an FPGA hardware architecture of the InterpolateGrey module of FIG. 5;
FIG. 10 is a comparison graph of the RFM orthorectification results of SPOT-6 images obtained from PC and FPGA;
FIG. 11 is a difference image between the corrected image obtained by the PC and the corrected image obtained by the FPGA.
Detailed Description
The following provides a detailed description of the SPOT-6 satellite image RFM orthorectification method based on FPGA hardware according to the present invention with reference to the embodiments and the accompanying drawings.
The invention discloses an SPOT-6 satellite image RFM orthorectification method based on FPGA hardware, which comprises the following steps:
1) acquiring geodetic coordinates of pixels of the orthoimage; the method comprises the following steps:
Let the 4 vertices of the original image be p1, p2, p3 and p4, and let their corresponding ground points P1, P2, P3 and P4 have geodetic coordinates (Lon1, Lat1), (Lon2, Lat2), (Lon3, Lat3) and (Lon4, Lat4), respectively; the ground area covered by the ortho image is then determined according to the following formula:
LonMin = min(Lon1, Lon2, Lon3, Lon4), LonMax = max(Lon1, Lon2, Lon3, Lon4)
LatMin = min(Lat1, Lat2, Lat3, Lat4), LatMax = max(Lat1, Lat2, Lat3, Lat4)
after the range of the ortho image is determined, the number of rows and columns of the ortho image is determined according to the given longitude and latitude ground sampling intervals STEPLon and STEPLat, as shown in the following formula:
NiOrth = (LonMax - LonMin)/STEPLon + 1, NjOrth = (LatMax - LatMin)/STEPLat + 1
in the formula, NjOrth and NiOrth are the number of rows and columns of the ortho image, respectively;
setting an arbitrary pixel point p on an ortho-imageOrthHas a pixel coordinate of (i)Orth,jOrth) Then pixel point pOrthCorresponding ground point Pi,i=1,2,3,…,NjOrth×NiOrthGround point PiIs obtained by the following formula:
Lon = LonMin + (iOrth - iLB) × STEPLon
Lat = LatMin + (jLB - jOrth) × STEPLat
in the formula, (iLB, jLB) are the pixel coordinates of the lower-left pixel of the ortho image, iLB = 0, jLB = NjOrth; Lon is the longitude and Lat is the latitude;
After obtaining the geodetic coordinates (Lon, Lat) of the ground point Pi, the row and column coordinates (iD, jD) of Pi in the DEM are calculated according to the following formula:
iD = (Lon - LonDmin)/stLonD
jD = (Lat - LatDmin)/stLatD
where LonDmin and LatDmin are the minimum longitude and latitude in the DEM, respectively, and stLonD and stLatD are the sampling intervals of the DEM. After obtaining the row and column coordinates (iD, jD) of the ground point Pi in the DEM, the elevation value Hei corresponding to Pi is calculated with the following parallel bilinear interpolation:
h11 = h1 + (h2 - h1) × clmFDEM
h12 = h3 + (h4 - h3) × clmFDEM
Hei = h11 + (h12 - h11) × rowFDEM
where h11 and h12 are intermediate variables; clmFDEM and rowFDEM are the fractional parts of the row and column coordinates (iD, jD) of the ground point P in the DEM (Digital Elevation Model); h1, h2, h3 and h4 are the elevation values of the four neighbouring pixels; and Hei is the final interpolated elevation.
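As a software reference for step 1), the DEM indexing and the parallel bilinear interpolation above can be sketched as follows (a Python golden-model sketch, not the Verilog implementation; the orientation of the DEM array, row index following latitude and column index following longitude, is an assumption):

```python
def interpolate_height(lon, lat, dem, lon_dmin, lat_dmin, st_lon, st_lat):
    """Bilinear DEM interpolation following the two formulas above.
    dem[row][col]: elevation grid; lon_dmin/lat_dmin: grid origin;
    st_lon/st_lat: sampling intervals."""
    # continuous column/row coordinates of the ground point in the DEM
    col_f = (lon - lon_dmin) / st_lon
    row_f = (lat - lat_dmin) / st_lat
    col0, row0 = int(col_f), int(row_f)
    clm_fdem = col_f - col0          # fractional parts
    row_fdem = row_f - row0
    # four-neighbourhood elevations h1..h4
    h1 = dem[row0][col0]
    h2 = dem[row0][col0 + 1]
    h3 = dem[row0 + 1][col0]
    h4 = dem[row0 + 1][col0 + 1]
    # two interpolations along the column axis, one along the row axis
    h11 = h1 + (h2 - h1) * clm_fdem
    h12 = h3 + (h4 - h3) * clm_fdem
    return h11 + (h12 - h11) * row_fdem
```

In the FPGA design the two h11/h12 interpolations run in parallel, which is why the patent calls this a "parallel bilinear interpolation"; the sequential sketch only checks the arithmetic.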
2) Performing coordinate conversion with the RFM (Rational Function Model), i.e. converting the geodetic coordinates of ground points into pixel row and column coordinates of the original image.
the earth coordinate Lon, Lat, Hei of the ground point is converted into the pixel column coordinate and the pixel row coordinate i of the original image by using the following RFM modelOrig,jOrig
iOrig = iScale × (NumS/DenS) + ioff
jOrig = jScale × (NumL/DenL) + joff
where ioff, iScale, joff and jScale are pixel-coordinate regularization parameters, and the polynomials NumL, DenL, NumS and DenS are calculated by the following formulas:
NumL = a1 + a2L + a3P + a4H + a5LP + a6LH + a7PH + a8L² + a9P² + a10H² + a11PLH + a12L³ + a13LP² + a14LH² + a15L²P + a16P³ + a17PH² + a18L²H + a19P²H + a20H³
DenL = b1 + b2L + b3P + b4H + b5LP + b6LH + b7PH + b8L² + b9P² + b10H² + b11PLH + b12L³ + b13LP² + b14LH² + b15L²P + b16P³ + b17PH² + b18L²H + b19P²H + b20H³
NumS = c1 + c2L + c3P + c4H + c5LP + c6LH + c7PH + c8L² + c9P² + c10H² + c11PLH + c12L³ + c13LP² + c14LH² + c15L²P + c16P³ + c17PH² + c18L²H + c19P²H + c20H³
DenS = d1 + d2L + d3P + d4H + d5LP + d6LH + d7PH + d8L² + d9P² + d10H² + d11PLH + d12L³ + d13LP² + d14LH² + d15L²P + d16P³ + d17PH² + d18L²H + d19P²H + d20H³
where a1~a20, b1~b20, c1~c20 and d1~d20 are the RFM model parameters; L, P and H are the regularized values of the geodetic coordinates Lon, Lat and Hei, respectively; the regularized coordinates are calculated by the following formulas:
L = (Lon - Lonoff)/LonScale
P = (Lat - Latoff)/LatScale
H = (Hei - hoff)/hScale
wherein Lonoff, LonScale, Latoff, LatScale, hoff and hScale are geodetic regularization parameters.
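Step 2) can be checked against a plain software model of the RFM evaluation (a hedged sketch: the dictionary keys mirror the parameter names above, and pairing NumS/DenS with the column coordinate and NumL/DenL with the row coordinate is inferred from the surrounding formulas):

```python
def rfm_to_image(lon, lat, hei, a, b, c, d, reg):
    """Convert geodetic coordinates to original-image pixel coordinates.
    a, b, c, d: lists of the 20 RFM coefficients; reg: regularization params."""
    # regularized geodetic coordinates (L from Lon, P from Lat, H from Hei)
    L = (lon - reg["Lonoff"]) / reg["LonScale"]
    P = (lat - reg["Latoff"]) / reg["LatScale"]
    H = (hei - reg["hoff"]) / reg["hScale"]
    # the 20 polynomial terms shared by NumL, DenL, NumS and DenS
    t = [1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H, P*L*H, L**3, L*P*P,
         L*H*H, L*L*P, P**3, P*H*H, L*L*H, P*P*H, H**3]
    num_l = sum(k * v for k, v in zip(a, t))
    den_l = sum(k * v for k, v in zip(b, t))
    num_s = sum(k * v for k, v in zip(c, t))
    den_s = sum(k * v for k, v in zip(d, t))
    # de-regularize: column (sample) with iScale/ioff, row (line) with jScale/joff
    i_orig = reg["iScale"] * (num_s / den_s) + reg["ioff"]
    j_orig = reg["jScale"] * (num_l / den_l) + reg["joff"]
    return i_orig, j_orig
```

With identity regularization and coefficients picking out a single term, the mapping is easy to verify by hand, which makes this sketch useful as a golden model for the hardware pipeline.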
3) Performing gray level interpolation of the orthoimage;
the pixel coordinate (i) of the original image is calculated by adopting the following parallel bilinear interpolation methodOrig,jOrig) Gray value greyffpga of (a), namely:
gy11 = gy1 + (gy2 - gy1) × clmFGrey
gy12 = gy3 + (gy4 - gy3) × clmFGrey
GreyFPGA = gy11 + (gy12 - gy11) × rowFGrey
where clmFGrey and rowFGrey are the fractional parts of the original-image pixel coordinates (iOrig, jOrig); gy1, gy2, gy3 and gy4 are the gray values of the four pixels neighbouring (iOrig, jOrig); and gy11 and gy12 are intermediate variables. Finally, the gray value GreyFPGA is assigned to the ortho-image pixel with pixel coordinates (ClmFPGA, RowFPGA).
4) Repeating steps 1) to 3) until the gray values of all pixels of the ortho image are obtained, yielding the orthorectified image.
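Steps 1) to 4) together form the classic indirect rectification loop. A minimal sketch (the callback parameters `get_height`, `geo_to_image` and `sample_gray` are hypothetical stand-ins for steps 1)-3), and the row/latitude sign convention follows the lower-left-origin formula above, which is an assumption):

```python
def orthorectify(n_rows, n_cols, lon_min, lat_min, step_lon, step_lat,
                 get_height, geo_to_image, sample_gray):
    """Indirect-scheme loop of steps 1)-4): for every ortho pixel, go
    ortho pixel -> geodetic coordinates -> original-image coordinates -> gray."""
    ortho = [[0.0] * n_cols for _ in range(n_rows)]
    for j in range(n_rows):            # j: row index (latitude direction)
        for i in range(n_cols):        # i: column index (longitude direction)
            # step 1: geodetic coordinates of the ortho pixel
            lon = lon_min + i * step_lon
            lat = lat_min + (n_rows - j) * step_lat
            hei = get_height(lon, lat)
            # step 2: RFM coordinate conversion into the original image
            i_orig, j_orig = geo_to_image(lon, lat, hei)
            # step 3: gray interpolation at (i_orig, j_orig)
            ortho[j][i] = sample_gray(i_orig, j_orig)
    return ortho
```

On the FPGA the same loop is realized as a streaming pipeline (GETCORD feeding ORTHOM) rather than nested loops, but the per-pixel data flow is identical.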
Specific examples are given below.
The embodiment of the invention adopts a Xilinx FPGA chip; the algorithm is designed and the program written in the Verilog hardware description language on the Vivado 2016.4 software platform. The designed hardware architecture is shown in FIG. 1 and mainly comprises a GETCORD module and an ORTHOM module. When the signal isStart is 1'b1, the GETCORD module starts to compute and provide the ORTHOM module with the input data LonOrth[63:0], LatOrth[63:0] and HeiOrth[63:0], which are the longitude, latitude and elevation of the ortho-image pixels, respectively; LonOrth[63:0] and LatOrth[63:0] are computed from the sampling interval, and HeiOrth[63:0] is interpolated from the DEM using LonOrth[63:0] and LatOrth[63:0].
The ORTHOM module is mainly responsible for the orthorectification of the image, i.e. the coordinate conversion and the gray interpolation. When the GETCORD module starts to produce geodetic coordinates, i.e. its output-valid signal is 1'b1, the ORTHOM module starts to perform the coordinate conversion. After the coordinate conversion is completed, the ORTHOM module obtains the gray values gy1[63:0]~gy4[63:0] of the 4-neighbourhood pixels from ROM through the row coordinate rowGrey[63:0] and the column coordinate clmGrey[63:0], and performs bilinear interpolation.
The specific process of the embodiment of the invention is as follows:
step 1, acquiring the ground coordinates of the pixels of the orthoimage.
Let the 4 vertices of the original image be p1, p2, p3 and p4, and let their corresponding ground points P1, P2, P3 and P4 have geodetic coordinates (Lon1, Lat1), (Lon2, Lat2), (Lon3, Lat3) and (Lon4, Lat4), respectively; the ground range covered by the ortho image can then be determined according to equation (1), i.e.:
LonMin = min(Lon1, Lon2, Lon3, Lon4), LonMax = max(Lon1, Lon2, Lon3, Lon4)
LatMin = min(Lat1, Lat2, Lat3, Lat4), LatMax = max(Lat1, Lat2, Lat3, Lat4)    (1)
After the range of the ortho image is determined, the number of rows and columns of the ortho image can be determined according to the given ground sampling intervals STEPLon and STEPLat, as shown in equation (2):
NiOrth = (LonMax - LonMin)/STEPLon + 1, NjOrth = (LatMax - LatMin)/STEPLat + 1    (2)
in the formula, NjOrth and NiOrth are the number of rows and columns of the ortho image, respectively.
Let an arbitrary image point pOrth on the ortho image have pixel coordinates (iOrth, jOrth); the geodetic coordinates (Lon, Lat) of the corresponding ground point P can then be obtained by equation (3), i.e.:
Lon = LonMin + (iOrth - iLB) × STEPLon, Lat = LatMin + (jLB - jOrth) × STEPLat    (3)
in the formula, (iLB, jLB) are the pixel coordinates of the lower-left vertex of the ortho image, iLB = 0, jLB = NjOrth.
After the geodetic coordinates (Lon, Lat) of the ground point P are obtained, the row and column coordinates (iD, jD) of P in the DEM are calculated according to equation (4), i.e.:
iD = (Lon - LonDmin)/stLonD, jD = (Lat - LatDmin)/stLatD    (4)
where LonDmin and LatDmin are the minimum longitude and latitude in the DEM, respectively, and stLonD and stLatD are the sampling intervals of the DEM. After obtaining the row and column coordinates (iD, jD) of the ground point P in the DEM, the elevation value Hei corresponding to P is calculated with the parallel bilinear interpolation shown in equation (5):
h11 = h1 + (h2 - h1) × clmFDEM, h12 = h3 + (h4 - h3) × clmFDEM, Hei = h11 + (h12 - h11) × rowFDEM    (5)
where h11 and h12 are intermediate variables; clmFDEM and rowFDEM are the fractional parts of the row and column coordinates (iD, jD) of the ground point P in the DEM; h1, h2, h3 and h4 are the elevation values of the four neighbouring pixels; and Hei is the final interpolated elevation.
Based on the above analysis, the FPGA hardware architecture shown in FIG. 2 is designed for the GETCORD module. It mainly comprises a GetLonLat module, which acquires the geodetic coordinates LonOrth[63:0] and LatOrth[63:0], and an InterpolateHei module, which interpolates the elevation HeiOrth[63:0]. To ensure that LonOrth[63:0] and LatOrth[63:0] are input to the ORTHOM module in synchronization with HeiOrth[63:0], LonOrth[63:0] and LatOrth[63:0] are delayed by shift registers.
According to equation (3), the FPGA hardware architecture shown in FIG. 3 is designed for the GetLonLat module. When the start signal isStart is 1'b1, the GetLonLat module begins its operations. After a certain latency, the "Fixed2Float" core outputs the coordinate data LonOrth[63:0] and LatOrth[63:0] in succession, and the signal isGetLonLat is pulled high. When the counters countLat and countLon reach the maximum row and column numbers of the ortho image, i.e. countLat = OrthNumLat - 1 and countLon = OrthNumLon - 1, the geodetic coordinates of all pixels of the ortho image have been obtained, and the completion signal isDoneOrth is pulled high.
When the InterpolateHei module detects that the signal isGetLonLat sent by the GetLonLat module is high, it performs bilinear interpolation on the DEM according to equations (4) and (5). The FPGA hardware architecture of the InterpolateHei module is shown in FIG. 4. In this architecture, ROWDEM[63:0] and CLMDEM[63:0] are 64-bit floating-point numbers; to separate their integer and fractional parts, they are first converted into 64-bit fixed-point numbers RowFx[63:0] and ClmFx[63:0] in Q40 format. Bit splicing then yields the integer parts rowDEM = {RowFx[63:40], 40'd0} and clmDEM = {ClmFx[63:40], 40'd0} and the fractional parts rowFdem = {24'd0, RowFx[39:0]} and clmFdem = {24'd0, ClmFx[39:0]}. On the one hand, the elevations h1[63:0], h2[63:0], h3[63:0] and h4[63:0] of the 4 neighbouring ground points are read from memory using rowDEM[63:0], clmDEM[63:0] and the signal isGetNeighborH. On the other hand, the fixed-point numbers rowFdem[63:0] and clmFdem[63:0] are converted back into 64-bit floating-point numbers rowFDEM[63:0] and clmFDEM[63:0] for the subsequent calculations.
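The Q40 integer/fraction split can be modelled in software to check the bit-splicing (a sketch; it mirrors the {x[63:40], 40'd0} / {24'd0, x[39:0]} pattern for non-negative values only):

```python
Q = 40  # number of fractional bits in the Q40 fixed-point format

def q40_split(x):
    """Split a non-negative value into integer and fractional parts the way
    the hardware splits a 64-bit Q40 word by bit splicing."""
    fx = int(x * (1 << Q)) & ((1 << 64) - 1)  # 64-bit Q40 representation
    int_part = fx >> Q                        # bits [63:40]
    frac_q40 = fx & ((1 << Q) - 1)            # bits [39:0]
    return int_part, frac_q40 / (1 << Q)
```

The integer word addresses the DEM (or image) memory while the fractional word, converted back to floating point, feeds the bilinear interpolator, just as in FIG. 4.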
Step 2: perform the coordinate conversion using the RFM model.
The FPGA hardware architecture of the ORTHOM module is shown in FIG. 5. The ORTHOM module mainly uses the RFM model to convert the geodetic coordinates LonOrth[63:0], LatOrth[63:0] and HeiOrth[63:0] of the ortho-image pixels into the pixel coordinates RowFPGA[63:0] and ClmFPGA[63:0] of the original image, and then interpolates the gray value GreyFPGA[63:0] of the pixel according to RowFPGA[63:0] and ClmFPGA[63:0]. The architecture therefore mainly comprises a GetTerm module, the CoeffMTermPE1~CoeffMTermPE4 modules, a GetRowClm module and an InterpolateGrey module. The functions of the modules are as follows:
GetTerm Module
The GetTerm module obtains OrthTerm1[63:0]~OrthTerm20[63:0] through the 4-stage operations shown in equations (6) to (9), using a pipeline structure. According to equations (6) to (9), the FPGA hardware architecture shown in FIG. 6 is designed for the GetTerm module. Because the GetTerm module adopts a pipeline structure, the lower-order terms OrthTerm1[63:0]~OrthTerm10[63:0] need to be delayed by delay units to ensure that OrthTerm1[63:0]~OrthTerm20[63:0] can be output simultaneously. When the signal isDoneOrthNor is high, the GetTerm module has begun sending the OrthTerm1[63:0]~OrthTerm20[63:0] data streams to the CoeffMTermPE1~CoeffMTermPE4 modules.
Stage 1:
rL=LonOrth-Lonoff,rP=LatOrth-Latoff,rH=HeiOrth-hoff (6)
stage 2:
L=rL×InvLonScale,P=rP×InvLatScale,H=rH×InvhScale (7)
Stages 3 and 4:
[Equations (8) and (9), given as images in the original: stage 3 forms the second-order products of L, P and H, and stage 4 assembles the 20 polynomial terms OrthTerm1~OrthTerm20, i.e. 1, L, P, H, LP, LH, PH, L², P², H², PLH, L³, LP², LH², L²P, P³, PH², L²H, P²H and H³.]
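The four GetTerm stages reduce to the following sequential sketch (a software model; the exact grouping of products into stages 3 and 4, and the reuse of the squares for the cubic terms, are assumptions since equations (8)-(9) are only given as images):

```python
def get_terms(lon, lat, hei, lonoff, lon_scale, latoff, lat_scale, hoff, h_scale):
    """Compute OrthTerm1..OrthTerm20 following the staged GetTerm pipeline."""
    # stage 1 (eq. 6): remove the offsets
    rL, rP, rH = lon - lonoff, lat - latoff, hei - hoff
    # stage 2 (eq. 7): multiply by the (pre-inverted) scales
    L, P, H = rL / lon_scale, rP / lat_scale, rH / h_scale
    # stage 3: second-order products
    LL, PP, HH, LP, LH, PH = L * L, P * P, H * H, L * P, L * H, P * H
    # stage 4: assemble the 20 terms; third-order terms reuse stage-3 results
    return [1, L, P, H, LP, LH, PH, LL, PP, HH, LP * H, LL * L, L * PP,
            L * HH, LL * P, PP * P, P * HH, LL * H, PP * H, HH * H]
```

In hardware the division is avoided by multiplying with pre-computed inverse scales (InvLonScale etc., per equation (7)); the sketch divides for readability.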
CoeffMTermPE Module
The CoeffMTermPE 1-CoeffMTermPE 4 modules have the same hardware structure. They obtain the values of polynomials NumL, DenL, NumS, and DenS, i.e., OrthNumL [63:0], OrthDenL [63:0], OrthNumS [63:0], and OrthDenS [63:0], mainly according to equations (10) to (15). The CoeffMTermPE module adopts a pipeline structure.
Stages 1 to 4:
[Equations (10) to (13), given as images in the original: stage 1 forms the 20 products of the coefficients with OrthTerm1~OrthTerm20; stages 2 to 4 accumulate these products in a binary adder tree, yielding the intermediate variables cmtL11~cmtL42, with cmtL35 passed through a delay register as cmtL35_delay1.]
stage 5:
cmtL51 = cmtL41 + cmtL42, cmtL35_delay2 = cmtL35_delay1    (14)
stage 6:
ResultsCMT=cmtL51+cmtL35_delay2 (15)
where cmtL11~cmtL42 are intermediate variables and ResultsCMT is the value of the polynomial.
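The CoeffMTermPE computation is a 20-term dot product evaluated as a binary adder tree; a sketch (the 20 → 10 → 5 → 3 → 2 → 1 reduction, with the odd element passed through a delay, is inferred from the stage-5/6 formulas and the 20-multiplier/19-adder count given below):

```python
def coeff_mterm_pe(coeffs, terms):
    """Dot product of 20 coefficients with 20 terms via a binary adder tree,
    mirroring the pipelined CoeffMTermPE structure."""
    assert len(coeffs) == len(terms) == 20
    level = [c * t for c, t in zip(coeffs, terms)]      # stage 1: 20 multiplications
    while len(level) > 1:                               # stages 2-6: 19 additions total
        nxt = [level[k] + level[k + 1] for k in range(0, len(level) - 1, 2)]
        if len(level) % 2:                              # odd element is carried (delayed)
            nxt.append(level[-1])
        level = nxt
    return level[0]
```

The tree needs 10 + 5 + 2 + 1 + 1 = 19 additions, matching the resource count stated for the module.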
According to equations (10) to (15), a reusable CoeffMTermPE module is designed; its FPGA hardware architecture is shown in FIG. 7. The architecture is a pipeline using 20 floating-point multiplications ("Mul") and 19 floating-point additions ("Add") in total. When the signal isDoneOrthNor sent by the GetTerm module to the CoeffMTermPE1~CoeffMTermPE4 modules is high, these modules start their computations; after a certain latency, they simultaneously output the data streams OrthNumL[63:0], OrthDenL[63:0], OrthNumS[63:0] and OrthDenS[63:0], respectively, and the signal isDoneCMT (i.e. the signal isGetNumL in FIG. 5) is pulled high.
GetRowClm Module
After the values OrthNumL[63:0], OrthDenL[63:0], OrthNumS[63:0] and OrthDenS[63:0] of the RFM model polynomials are obtained, the pixel coordinates ClmFPGA[63:0] and RowFPGA[63:0] of the original-image pixel can be obtained according to equations (16) to (18).
Stage 1:
NorClm = OrthNumS/OrthDenS, NorRow = OrthNumL/OrthDenL    (16)
stage 2:
TempClm = NorClm × iScale, TempRow = NorRow × jScale    (17)
stage 3:
ClmFPGA = TempClm + ioff, RowFPGA = TempRow + joff    (18)
in the formula, iScale, jScale, ioff and joff are the regularization parameters of the pixel coordinates; NorClm and NorRow are the regularized pixel coordinates; TempClm and TempRow are intermediate variables.
According to equations (16) to (18), a pipeline structure is adopted and the FPGA hardware architecture shown in FIG. 8 is designed for the GetRowClm module. In this architecture, NorRow[63:0] and NorClm[63:0] are first calculated in parallel, and the regularization parameters iScale[63:0], jScale[63:0], ioff[63:0] and joff[63:0] are then used to recover the pixel coordinates RowFPGA[63:0] and ClmFPGA[63:0] of the original image. The RowFPGA[63:0] and ClmFPGA[63:0] obtained by the GetRowClm module are 64-bit floating-point numbers.
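The three GetRowClm stages reduce to a few lines of software (a sketch; the stage-1 assignment of the sample polynomials to the column coordinate and the line polynomials to the row coordinate follows equations (16)-(18)):

```python
def get_row_clm(num_l, den_l, num_s, den_s, i_scale, i_off, j_scale, j_off):
    """Three pipeline stages of GetRowClm as sequential software."""
    nor_clm, nor_row = num_s / den_s, num_l / den_l            # stage 1: ratios
    temp_clm, temp_row = nor_clm * i_scale, nor_row * j_scale  # stage 2: scale
    return temp_clm + i_off, temp_row + j_off                  # stage 3: offset
```

This is the exact inverse of the coordinate regularization applied before the polynomial evaluation, which is why the same off/scale parameter pairs reappear here.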
InterpolateGrey module
The InterpolateGrey module interpolates the gray value of the pixel using the parallel bilinear interpolation of equation (19), for which the integer and fractional parts of the pixel coordinates RowFPGA and ClmFPGA must be separated. From the integer parts rowGrey and clmGrey, the positions of the 4 neighbouring pixels on the image are determined as (clmGrey, rowGrey), (clmGrey+1, rowGrey), (clmGrey, rowGrey+1) and (clmGrey+1, rowGrey+1), from which the gray values gy1, gy2, gy3 and gy4 are obtained. The fractional parts of RowFPGA and ClmFPGA are rowFGrey and clmFGrey, respectively. The gray value GreyFPGA of the pixel (ClmFPGA, RowFPGA) is then calculated by equation (19), i.e.:
gy11 = gy1 + (gy2 - gy1) × clmFGrey, gy12 = gy3 + (gy4 - gy3) × clmFGrey, GreyFPGA = gy11 + (gy12 - gy11) × rowFGrey    (19)
where gy11 and gy12 are intermediate variables.
According to formula (19), the FPGA hardware architecture shown in fig. 9 is designed for the InterpolateGrey module. When the signal isDoneGetRowClm sent by the GetRowClm module to the InterpolateGrey module is high, it indicates that the GetRowClm module starts to output data RowFPGA [63:0] and ClmFPGA [63:0] in sequence, and the InterpolateGrey module starts to perform gray interpolation:
First, the pixel coordinates RowFPGA[63:0] and ClmFPGA[63:0] are converted into 64-bit fixed-point numbers RowFxG[63:0] and ClmFxG[63:0] in Q40 format, respectively, using a "Float2Fixed" IP core.
Then, the integer parts (equation (20)) and the fractional parts (equation (21)) of the fixed-point numbers RowFxG[63:0] and ClmFxG[63:0] are separated by bit splicing, i.e.:
rowGrey = {RowFxG[63:40], 40'd0}, clmGrey = {ClmFxG[63:40], 40'd0}    (20)
rowFGreyQ40 = {24'd0, RowFxG[39:0]}, clmFGreyQ40 = {24'd0, ClmFxG[39:0]}    (21)
in the formula, rowGrey, clmGrey, rowFGreyQ40 and clmFGreyQ40 are 64-bit fixed-point numbers in Q40 format.
When the signal isGetNeighborG is high, the gray values gy1[63:0], gy2[63:0], gy3[63:0] and gy4[63:0] of the 4 neighbouring pixels are obtained from memory according to the values of rowGrey[63:0] and clmGrey[63:0].
rowFGreyQ40[63:0] and clmFGreyQ40[63:0] are then converted back into 64-bit floating-point numbers rowFGrey[63:0] and clmFGrey[63:0]. After the fixed-point numbers have been converted to floating point by the "Fixed2Float" IP core, the corresponding data-valid signal is pulled high to drive the calculation of equation (19), i.e. to perform the gray interpolation. When the signal isGetGrey changes from low to high, gray values of pixels are being output in succession.
Experiment:
under a Windows10 system, algorithm design and program compiling are completed on a Vivado2016.4 software platform by using Verilog hardware language, and an orthorectification simulation experiment without a control point is performed by using SPOT-6 satellite image data. After SPOT-6 satellite image RFM orthorectification based on FPGA hardware is carried out, 10 Independent Check Points (ICPs) are randomly selected respectively to verify the rectification precision. As shown in Table 1, for SPOT-6 images, the RMSE at the plane of the ICPs was 5.2399 pixels, respectively. In addition, the corrected images of the FPGA and the PC were compared (as shown in fig. 10), and pixel coordinate deviation statistics were performed for each of the randomly selected 50 inspection points (as shown in table 2).
As can be seen from Tables 1 and 2 and FIGS. 10 and 11, the FPGA-based SPOT-6 satellite image RFM orthorectification method shows great potential in terms of correction accuracy.
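A planimetric RMSE over independent check points is typically computed as follows (a sketch of the standard definition; whether the patent uses exactly this formula is an assumption):

```python
import math

def plane_rmse(residuals):
    """Planimetric RMSE over independent check points; each residual is a
    (d_col, d_row) deviation in pixels between reference and corrected image."""
    sq = [dc * dc + dr * dr for dc, dr in residuals]
    return math.sqrt(sum(sq) / len(residuals))
```

A value such as the 5.2399 pixels reported in Table 1 would be this statistic evaluated over the 10 ICPs.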
TABLE 1 correction accuracy (unit: pixel) of RFM orthorectification algorithm for FPGA hardware
[Table 1 is provided as an image in the original publication.]
TABLE 2 statistics of pixel coordinate deviations (units: pixels) at checkpoints
[Table 2 is provided as an image in the original publication.]
While the present invention has been described with reference to the accompanying drawings, it is not limited to the above embodiments, which are illustrative rather than restrictive; those skilled in the art may make various modifications without departing from the spirit of the invention, and such modifications fall within the protection scope of the claims.

Claims (4)

1. A SPOT-6 satellite image RFM orthorectification method based on FPGA hardware, characterized by comprising the following steps:
1) acquiring geodetic coordinates of pixels of the orthoimage;
2) performing coordinate conversion with the RFM (Rational Function Model), i.e. converting the geodetic coordinates of ground points into pixel row and column coordinates of the original image;
3) performing gray level interpolation of the orthoimage;
4) repeating steps 1) to 3) until the gray values of all pixels of the ortho image are obtained, yielding the orthorectified image.
2. The SPOT-6 satellite imagery RFM orthorectification method based on FPGA hardware of claim 1, wherein step 1) comprises:
let the 4 vertices of the original image be p1, p2, p3 and p4, and let their corresponding ground points P1, P2, P3 and P4 have geodetic coordinates (Lon1, Lat1), (Lon2, Lat2), (Lon3, Lat3) and (Lon4, Lat4), respectively; the ground range covered by the ortho image is determined according to the following formula:
LonMin = min(Lon1, Lon2, Lon3, Lon4), LonMax = max(Lon1, Lon2, Lon3, Lon4)
LatMin = min(Lat1, Lat2, Lat3, Lat4), LatMax = max(Lat1, Lat2, Lat3, Lat4)
after the range of the ortho image is determined, the number of rows and columns of the ortho image is determined according to the given longitude and latitude ground sampling intervals STEPLon and STEPLat, as shown in the following formula:
NiOrth = (LonMax - LonMin)/STEPLon + 1, NjOrth = (LatMax - LatMin)/STEPLat + 1
in the formula, NjOrth and NiOrth are the number of rows and columns of the ortho image, respectively;
setting an arbitrary pixel point p on an ortho-imageOrthHas a pixel coordinate of (i)Orth,jOrth) Then pixel point pOrthCorresponding ground point Pi,i=1,2,3,…,NjOrth×NiOrthGround point PiIs obtained by the following formula:
Lon = LonMin + (iOrth - iLB) × STEPLon, Lat = LatMin + (jLB - jOrth) × STEPLat
in the formula, (iLB, jLB) are the pixel coordinates of the lower-left pixel of the ortho image, iLB = 0, jLB = NjOrth; Lon is the longitude and Lat is the latitude;
after obtaining the geodetic coordinates (Lon, Lat) of the ground point Pi, the row and column coordinates (iD, jD) of Pi in the DEM are calculated according to the following formula:
iD = (Lon - LonDmin)/stLonD, jD = (Lat - LatDmin)/stLatD
where LonDmin and LatDmin are the minimum longitude and latitude in the DEM, respectively, and stLonD and stLatD are the sampling intervals of the DEM; after obtaining the row and column coordinates (iD, jD) of the ground point Pi in the DEM, the elevation value Hei corresponding to Pi is calculated with the following parallel bilinear interpolation:
h11 = h1 + (h2 - h1) × clmFDEM, h12 = h3 + (h4 - h3) × clmFDEM, Hei = h11 + (h12 - h11) × rowFDEM
where h11 and h12 are intermediate variables; clmFDEM and rowFDEM are the fractional parts of the row and column coordinates (iD, jD) of the ground point P in the DEM; h1, h2, h3 and h4 are the elevation values of the four neighbouring pixels; and Hei is the final interpolated elevation.
3. The FPGA hardware-based SPOT-6 satellite image RFM ortho-rectification method as claimed in claim 1, wherein in step 2) the geodetic coordinates (Lon, Lat, Hei) of the ground point are converted into the pixel column and row coordinates (iOrig, jOrig) of the original image by using the following RFM model:
iOrig = (NumL / DenL) × iScale + ioff
jOrig = (NumS / DenS) × jScale + joff
wherein ioff, iScale, joff and jScale are pixel-coordinate regularization parameters; the polynomials NumL, DenL, NumS and DenS are calculated by the following formulas:
NumL = a1 + a2·L + a3·P + a4·H + a5·L·P + a6·L·H + a7·P·H + a8·L² + a9·P² + a10·H² + a11·P·L·H + a12·L³ + a13·L·P² + a14·L·H² + a15·L²·P + a16·P³ + a17·P·H² + a18·L²·H + a19·P²·H + a20·H³
DenL = b1 + b2·L + b3·P + b4·H + b5·L·P + b6·L·H + b7·P·H + b8·L² + b9·P² + b10·H² + b11·P·L·H + b12·L³ + b13·L·P² + b14·L·H² + b15·L²·P + b16·P³ + b17·P·H² + b18·L²·H + b19·P²·H + b20·H³
NumS = c1 + c2·L + c3·P + c4·H + c5·L·P + c6·L·H + c7·P·H + c8·L² + c9·P² + c10·H² + c11·P·L·H + c12·L³ + c13·L·P² + c14·L·H² + c15·L²·P + c16·P³ + c17·P·H² + c18·L²·H + c19·P²·H + c20·H³
DenS = d1 + d2·L + d3·P + d4·H + d5·L·P + d6·L·H + d7·P·H + d8·L² + d9·P² + d10·H² + d11·P·L·H + d12·L³ + d13·L·P² + d14·L·H² + d15·L²·P + d16·P³ + d17·P·H² + d18·L²·H + d19·P²·H + d20·H³
wherein a1–a20, b1–b20, c1–c20 and d1–d20 are the RFM model parameters; L, P and H are the regularized coordinates of the geodetic coordinates Lon, Lat and Hei, respectively; the regularized coordinates are calculated by the following formulas:
L = (Lon − Lonoff) / LonScale
P = (Lat − Latoff) / LatScale
H = (Hei − hoff) / hScale
wherein Lonoff, LonScale, Latoff, LatScale, hoff and hScale are geodetic-coordinate regularization parameters.
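The full ground-to-image RFM evaluation of claim 3 can be sketched as below. This is an illustrative software model, not the Verilog datapath: the function names, the dictionary field names, and the assignment of NumL/DenL to the first output coordinate are assumptions; only the 20-term ordering follows the formulas in the claim.

```python
def rpc_poly(c, L, P, H):
    """Third-order RFM polynomial with the 20-term ordering of the claim:
    c1 + c2*L + c3*P + c4*H + ... + c20*H^3 (c is a 20-element list)."""
    return (c[0] + c[1]*L + c[2]*P + c[3]*H + c[4]*L*P + c[5]*L*H
            + c[6]*P*H + c[7]*L*L + c[8]*P*P + c[9]*H*H + c[10]*P*L*H
            + c[11]*L**3 + c[12]*L*P*P + c[13]*L*H*H + c[14]*L*L*P
            + c[15]*P**3 + c[16]*P*H*H + c[17]*L*L*H + c[18]*P*P*H
            + c[19]*H**3)

def rfm_ground_to_image(lon, lat, hei, rpc):
    """Project geodetic (lon, lat, hei) to original-image pixel coordinates.

    rpc: dict holding the regularization parameters and the four 20-term
    coefficient lists (field names are illustrative).
    """
    # Regularize the geodetic coordinates
    L = (lon - rpc["lon_off"]) / rpc["lon_scale"]
    P = (lat - rpc["lat_off"]) / rpc["lat_scale"]
    H = (hei - rpc["h_off"]) / rpc["h_scale"]
    # Evaluate the two rational polynomials
    i_reg = rpc_poly(rpc["num_l"], L, P, H) / rpc_poly(rpc["den_l"], L, P, H)
    j_reg = rpc_poly(rpc["num_s"], L, P, H) / rpc_poly(rpc["den_s"], L, P, H)
    # De-regularize back to pixel coordinates
    return (i_reg * rpc["i_scale"] + rpc["i_off"],
            j_reg * rpc["j_scale"] + rpc["j_off"])
```

The four 20-term polynomials are independent of one another, so a hardware implementation can evaluate them concurrently and pay only for the two final divisions.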
4. The FPGA hardware-based SPOT-6 satellite image RFM ortho-rectification method as claimed in claim 1, wherein in step 3) the gray value GreyFPGA at the original-image pixel coordinates (iOrig, jOrig) is calculated by the following parallel bilinear interpolation method, namely:
gy11 = gy1 + (gy2 − gy1) × clmFGrey
gy12 = gy3 + (gy4 − gy3) × clmFGrey
GreyFPGA = gy11 + (gy12 − gy11) × rowFGrey
wherein clmFGrey and rowFGrey are the fractional parts of the original-image pixel coordinates (iOrig, jOrig), respectively; gy1, gy2, gy3 and gy4 are the gray values of the four pixels neighboring (iOrig, jOrig) in the original image; gy11 and gy12 are intermediate variables; finally, the gray value GreyFPGA is assigned to the ortho-image pixel with pixel coordinates (ClmFPGA, RowFPGA).
CN202110958456.9A 2021-08-20 2021-08-20 SPOT-6 satellite image RFM orthorectification method based on FPGA hardware Pending CN113658078A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110958456.9A CN113658078A (en) 2021-08-20 2021-08-20 SPOT-6 satellite image RFM orthorectification method based on FPGA hardware


Publications (1)

Publication Number Publication Date
CN113658078A true CN113658078A (en) 2021-11-16

Family

ID=78491720



Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114817443A (en) * 2022-06-30 2022-07-29 广东省科学院广州地理研究所 Tile-based satellite remote sensing image data processing method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG RONGTING: "Research on GA-RLS-RFM Ortho-rectification Optimization Algorithm for Satellite Images Oriented to FPGA Hardware", Wanfang Data Knowledge Service Platform, pages 3-174 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20211116