CN110335220B - Image fusion method based on parallel computing algorithm - Google Patents


Info

Publication number
CN110335220B
CN110335220B (application CN201910405981.0A)
Authority
CN
China
Prior art keywords
data
image
pixel
calculation
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910405981.0A
Other languages
Chinese (zh)
Other versions
CN110335220A (en)
Inventor
王晓慧
孟献策
谭炳香
冯林艳
肖鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongtian Gongchuang Technology Beijing Co ltd
Research Institute Of Forest Resource Information Techniques Chinese Academy Of Forestry
Original Assignee
Zhongtian Gongchuang Technology Beijing Co ltd
Research Institute Of Forest Resource Information Techniques Chinese Academy Of Forestry
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongtian Gongchuang Technology Beijing Co ltd and Research Institute Of Forest Resource Information Techniques Chinese Academy Of Forestry
Priority to CN201910405981.0A
Publication of CN110335220A
Application granted
Publication of CN110335220B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

An image fusion method based on a parallel computing algorithm, belonging to the technical field of image processing. By resampling the data of the two source images, two-dimensional data tables describing each image's spatial position and pixel information are constructed, and through these tables a pixel-level topological correspondence between the two images is established. On the basis of this one-to-one pixel-level topological relation, the image data are sliced into groups by row (or by column, or by data block), and the corresponding processing flows are started synchronously as multiple threads and tasks for parallel computation according to the available computing resources, thereby achieving fast fusion of the two images.

Description

Image fusion method based on parallel computing algorithm
Technical Field
The invention relates to an image fusion method based on a parallel computing algorithm, and belongs to the technical field of image processing.
Background
Image fusion preserves both the spatial detail of a high-resolution panchromatic image and the characteristic ground-feature spectral information of a lower-resolution multispectral image. It greatly improves the utilization of image information, increases the reliability of image interpretation, and improves interpretation results; it is widely used, to great effect, in forest resource monitoring, wetland resource monitoring, desertification monitoring, change analysis, and other applications. Image fusion has become a basic, ubiquitous technique in practical remote sensing work.
However, conventional image fusion algorithms are application programs built on serial computing for a traditional CPU architecture. For remote-sensing imagery, the data volume of each scene grows ever larger as spatial resolution improves, and during fusion the limitations of computer resources and serial algorithms make it increasingly difficult to meet expectations for image data processing speed.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an image fusion method based on a parallel computing algorithm.
To address these problems, the invention provides an image fusion method based on a parallel computing algorithm, built on distributed computing environments that support multithreaded parallel hardware and abundant computing resources, and on computing technologies and hardware with parallel characteristics such as GPUs and FPGA accelerator cards. The algorithm performs pixel-level image fusion, that is, information fusion at the basic data level, which preserves the detail of the original images to the greatest possible extent, an advantage that other classes of fusion techniques currently lack.
Pixel-level fusion, however, has a non-negligible limitation: because every pixel participates in the computation, the data volume is huge, processing times are long, and demanding timeliness requirements are hard to meet. In particular, when large volumes of imagery must be fused, this incurs great expense and waste of manpower and material resources, and poses serious risks to processing efficiency and data reliability.
An image fusion method based on a parallel computing algorithm comprises the following steps: by resampling the data of the two source images, construct two-dimensional data tables describing each image's spatial position and pixel information, and through these tables establish a pixel-level topological correspondence between the two images; on the basis of this one-to-one pixel-level topological relation, slice the image data into groups by row (or by column, or by data block), and start the corresponding processing flows synchronously as multiple threads and tasks for parallel computation according to the available computing resources, thereby achieving fast fusion of the two images.
The image fusion method based on a parallel computing algorithm further comprises the following steps:
Step 1: acquire the basic information of the two images to be fused;
Step 2: create data buffers for the two images;
Step 3: perform the fusion calculation on the two images;
Step 4: save the fused image file;
Step 5: finish the fusion.
Step 1, acquiring the basic information of the two images to be fused, comprises the following steps:
Step 1.1: acquire the row and column counts of the two images;
Step 1.2: acquire the spatial resolution of the two images;
Step 1.3: acquire the geographic coordinates of each image's spatial position;
Step 1.4: determine the row and column counts, the band count, and the geographic coordinate range of the fused image;
Step 1.5: obtain the pixel-size ratio of the two images to be fused over the shared spatial coverage.
Step 2, creating the data buffers for the two images, comprises the following steps:
Step 2.1: create the panchromatic image TX11_HC data buffer;
Step 2.2: create the multispectral image data buffer;
Step 2.3: create the fused image data buffer;
Step 2.4: determine the correspondence of the pixel data in the data buffers;
Step 2.5: obtain the pixel-size ratio of the two images to be fused over the shared spatial coverage.
Step 3, the fusion calculation of the two images, comprises the following steps: group and slice the images to be fused so that N rows are fused per pass, and start several computation processes at the same time for synchronous parallel calculation; on each pass, read one row of data from the multispectral data buffer and four rows of data from the panchromatic data buffer, perform the fusion calculation as required, and store the result in the fused-image data buffer.
Step 4, saving the fused image file, comprises the following steps: construct the fused image data file from the spatial geographic coordinate information, the maximum row and column counts, and all pixel information of image TX14.
The invention has the advantage that, by resampling the data of the two images, two-dimensional data tables describing each image's spatial position and pixel information are constructed, and through these tables a pixel-level topological correspondence between the two images is established.
The technical process of the invention is simple and clear, requires no manual operation or manual judgment at all, and greatly improves working efficiency; its core characteristics are therefore efficiency, speed, and automation.
Because the invention establishes these correspondences and adopts a data-parallel computing method, the proposed image fusion algorithm gains three very obvious characteristics: first, the logical complexity of the fusion algorithm is greatly simplified; second, the program structure is simpler and clearer; third, the robustness of the system is greatly enhanced.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and form a part of this specification; the illustrated embodiments and their description are intended to illustrate the invention, not to limit it. In the drawings:
FIG. 1 is a schematic diagram of the spatial correspondence between panchromatic and multispectral scene images according to the present invention.
FIG. 2 is an overall flow chart of image fusion according to the present invention.
FIG. 3 is a flowchart of a two-scene image fusion calculation according to the present invention.
FIG. 4 is a flow chart of a fusion calculation process of the present invention.
FIG. 5 is a flow chart of the multitask nested parallel computing of the present invention.
The invention is further illustrated with reference to the following figures and examples.
Detailed Description
It will be apparent that those skilled in the art can make many modifications and variations based on the spirit of the present invention.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element, component, or section is referred to as being "connected" to another element, component, or section, it can be directly connected to the other, or intervening elements or sections may also be present. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art.
The following examples are further illustrative in order to facilitate the understanding of the embodiments, and the present invention is not limited to the examples.
Example 1: as shown in figs. 1 to 5, an image fusion method based on a parallel computing algorithm performs a fusion operation on panchromatic and multispectral images of the same spatial extent, obtained by correcting, registering, and cropping imagery of the same region, to form a new image that carries both panchromatic and multispectral information.
The invention adopts the idea of trading space for time: by building two-dimensional data tables, the spatial coordinate relationship between the two images is linked to the pixel correspondence, forming a very intuitive one-to-one spatial correspondence. Fig. 1 shows the spatial correspondence between the panchromatic and multispectral images: image TX44 is assumed to have 4-metre spatial resolution and image TX11 1-metre spatial resolution, so the spatial extent of one TX44 pixel exactly covers the extent of 16 TX11 pixels.
To achieve multi-task parallel computation, the invention also optimizes and simplifies the logical correspondences of the fusion algorithm, strengthens the program structure, raises the degree of automation, and simplifies the operating procedure; this is the core technique by which the fusion algorithm achieves efficient, high-speed processing of image data.
An image fusion method based on a parallel computing algorithm comprises the following steps:
Using the images' row and column counts as data templates, three image data buffers are created: panchromatic TX11_HC, multispectral TX44_HC, and fused TX14_HC. By resampling the spatial coordinates of the two images, the pixels of the panchromatic image TX11 and the multispectral image TX44 are resampled into the buffer templates TX11_HC and TX44_HC, forming two-dimensional tables built from spatial coordinates and pixels; that is, a complete one-to-one correspondence in space and pixels is established between 1 row of data in the TX44_HC buffer and 4 rows of data in the TX11_HC buffer, and the fusion of the two images can be completed simply by applying a fusion calculation operation to these data according to a given algorithm.
The TX14_HC data buffer is a two-dimensional data table with the same structure as the TX11_HC buffer; it stores the fused image data produced by the fusion calculation of the two images.
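As a concrete illustration of this buffer layout, the following Python/NumPy sketch (the patent specifies no implementation language, and the helper `tx11_rows_for` is an illustrative name, not from the text) creates the three buffers with the worked example's dimensions and expresses the 1-row-to-4-rows correspondence:

```python
import numpy as np

# Dimensions from the worked example in the description (illustrative sketch).
TX11_HS, TX11_LS = 2000, 2000   # panchromatic image TX11, 1 m resolution
TX44_HS, TX44_LS = 500, 500     # one band of multispectral image TX44, 4 m resolution
FGBL = 4                        # coverage ratio: one 4 m pixel spans 4 x 4 = 16 pixels at 1 m

# The three data buffers, sized by row and column counts as the text describes.
TX11_HC = np.zeros((TX11_HS, TX11_LS), dtype=np.float32)  # panchromatic buffer
TX44_HC = np.zeros((TX44_HS, TX44_LS), dtype=np.float32)  # one multispectral band
TX14_HC = np.zeros_like(TX11_HC)                          # fused result, same shape as TX11_HC

def tx11_rows_for(r44):
    """Rows of TX11_HC covered by row r44 of TX44_HC (the 1-to-4 row mapping)."""
    return list(range(r44 * FGBL, (r44 + 1) * FGBL))
```

Row 0 of TX44_HC thus maps to rows 0 to 3 of TX11_HC, matching the correspondence of fig. 1.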
The image fusion method based on a parallel computing algorithm comprises the overall flow of the image fusion calculation, as shown in fig. 2:
the method comprises the steps of firstly, acquiring basic information of two scenes of images needing to be fused;
secondly, creating a data cache region of the two images;
thirdly, fusing and calculating the two scene images;
step four, saving the fused image file;
and step five, finishing the fusion.
The image fusion algorithm is described in detail below, following the overall flow chart of the image fusion calculation.
Step 1: acquire the basic information of the two images to be fused.
The original images for the experiment are two images covering the same spatial extent after registration, correction, and cropping: one is a single-band panchromatic image TX11 with 1-metre spatial resolution, the other a 4-band multispectral image TX44 with 4-metre spatial resolution.
Step 1.1: acquire the row and column counts of the two images. Image TX11 is 2000 × 2000, i.e. its row count (X direction) TX11_HS = 2000 and its column count (Y direction) TX11_LS = 2000. Image TX44 is 500 × 500, i.e. TX44_HS = 500 and TX44_LS = 500.
Step 1.2: acquire the spatial resolution of the two images. A TX11 pixel is 1 m × 1 m; a TX44 pixel is 4 m × 4 m. The ground coverage of image TX11 is therefore 2000 × 1 m = 2000 m in the X direction and 2000 × 1 m = 2000 m in the Y direction; the ground coverage of image TX44 is 500 × 4 m = 2000 m in the X direction and 500 × 4 m = 2000 m in the Y direction.
Step 1.3: acquire the geographic coordinates of each image's spatial position. For image TX11 (coordinate unit: metre) the lower-left corner is (200, 200) and the upper-right corner (2200, 2200), denoted (TX11_X1, TX11_Y1) and (TX11_X2, TX11_Y2). For image TX44 the lower-left corner is likewise (200, 200) and the upper-right corner (2200, 2200), denoted (TX44_X1, TX44_Y1) and (TX44_X2, TX44_Y2).
Step 1.4: the fused image TX14 is 2000 × 2000 with 4 bands, i.e. its row count (X direction) TX14_HS = 2000 and its column count (Y direction) TX14_LS = 2000; its geographic coordinate range has lower-left corner (200, 200) and upper-right corner (2200, 2200), denoted (TX14_X1, TX14_Y1) and (TX14_X2, TX14_Y2).
Step 1.5: the pixel-size ratio of the two images to be fused over the spatial coverage is 4, i.e. the coverage ratio FGBL = 4 m / 1 m = 4, so one pixel at 4-metre resolution covers 16 pixels at 1-metre resolution, as shown in fig. 1.
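The arithmetic of steps 1.1 to 1.5 can be checked directly; the sketch below uses illustrative variable names that are not part of the patent:

```python
# Resolutions and row/column counts from steps 1.1 and 1.2.
tx11_res, tx44_res = 1.0, 4.0       # metres per pixel for TX11 and TX44
FGBL = tx44_res / tx11_res          # coverage ratio of step 1.5

extent_x_tx11 = 2000 * tx11_res     # ground extent of TX11 in X: 2000 m
extent_x_tx44 = 500 * tx44_res      # ground extent of TX44 in X: 2000 m (same footprint)
pixels_covered = int(FGBL) ** 2     # one 4 m pixel covers 16 pixels at 1 m
```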
Step 2: create data buffers for the two images.
In the invention, the row and column counts of the two images to be fused serve as data templates: by creating data buffers, a one-to-one matching of the two images in spatial position and pixels is constructed, and on this basis the fusion calculation of the two images is carried out.
Step 2.1: create the panchromatic image TX11_HC data buffer, of size 2000 × 2000.
Step 2.2: create the multispectral image TX44_HC data buffer, of size 500 × 500. Since the image has 4 bands, 4 identical data buffers are needed; for brevity, the description of the fusion algorithm uses a single band as the example.
Step 2.3: create the fused image TX14_HC data buffer, identical in structure to the panchromatic TX11_HC buffer and likewise of size 2000 × 2000. Since the fused image has 4 bands, 4 identical data buffers are also needed; again, a single band is used as the example.
Step 2.4: the correspondence of the pixel data in the buffers TX11_HC and TX44_HC. As the schematic of fig. 1 shows, each row of data in the TX44_HC buffer corresponds to four rows of data in the TX11_HC buffer; that is, there is a one-to-one spatial topological adjacency between spatial position and each pixel.
Step 2.5: the pixel-size ratio of the two images to be fused over the spatial coverage is 4, i.e. the coverage ratio FGBL = 4 m / 1 m = 4, so one pixel at 4-metre resolution covers 16 pixels at 1-metre resolution.
Step 3: perform the fusion calculation on the two images.
As the buffer-creation procedure above shows, each row of data in the TX44_HC buffer corresponds to four rows of data in the TX11_HC buffer; that is, there is a one-to-one spatial topological adjacency between spatial position and each pixel. This spatial topological relation between the TX11_HC and TX44_HC buffers provides a reliable foundation for the multi-task parallel fusion algorithm of this application: during the two-scene fusion calculation, the images to be fused are grouped and sliced so that N rows are fused per pass, while several computation processes are started at the same time for synchronous parallel calculation, achieving fast and efficient computation.
The calculation flow of this application takes one band of image TX44 as the example; that is, the data in the TX44_HC buffer are the pixel data of a certain band of image TX44.
On each pass, one row of data is read from the TX44_HC buffer and four rows of data from the TX11_HC buffer, the fusion calculation is performed as required, and the result is stored in the TX14_HC buffer. The flow of the two-scene fusion calculation is shown in fig. 3.
The two-scene image fusion calculation flow comprises the following steps:
Step 1: set the number of rows fused per pass, RH_HS, a constant N; the fused-row counter YRH_HS, initial value 0; the pixel count per row of TX11_HC, H11_XYS, a constant; the pixel count per row of TX44_HC, H44_XYS, a constant; the pixel count per row of the fused image TX14_HC, H14_XYS, a constant; the TX11_HC data storage address TX11_DZ; the TX44_HC data storage address TX44_DZ; the fused TX14_HC data storage address TX14_DZ; a one-row pixel array TX44_SZ for TX44_HC; a one-row pixel array TX11_SZ for TX11_HC; a one-row pixel array TX14_SZ for TX14_HC; the pixel coverage ratio FGBL of the two images, a constant 4; and the high-resolution row counter GFBL_HS, initial value 0.
Step 2: read one row of TX44_HC data from the address TX44_DZ and store it in TX44_SZ.
Step 3: read one row of TX11_HC data from the address TX11_DZ and store it in TX11_SZ; GFBL_HS = GFBL_HS + 1.
Step 4: judge whether GFBL_HS = FGBL. If yes, go to step 5; if not, go to step 8.
Step 5: set GFBL_HS = 0 and YRH_HS = YRH_HS + 1; judge whether YRH_HS = RH_HS. If yes, go to step 6; if not, go to step 7.
Step 6: the two-scene fusion calculation flow ends.
Step 7: compute the addresses TX44_DZ = TX44_DZ + YRH_HS × H44_XYS, TX11_DZ = TX11_DZ + YRH_HS × FGBL × H11_XYS, and TX14_DZ = TX14_DZ + YRH_HS × FGBL × H14_XYS, then go to step 2.
Step 8: enter the fusion calculation processing flow shown in fig. 4, and compute the address TX11_DZ = TX11_DZ + GFBL_HS × H11_XYS.
Step 9: go to step 3.
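The address-stepping loop above amounts to the following Python/NumPy sketch. This is an assumption-laden paraphrase: NumPy row indexing stands in for the explicit address arithmetic, and the weighted-average rule described in this application serves as the fusion operation.

```python
import numpy as np

FGBL = 4  # pixel coverage ratio of the two images

def fuse_two_scenes(TX44_HC, TX11_HC):
    """Sketch of the fig. 3 loop: for each row of the multispectral buffer,
    read the FGBL corresponding panchromatic rows, fuse them, and store the
    result in the fused buffer TX14_HC (row indexing replaces address maths)."""
    TX14_HC = np.empty_like(TX11_HC, dtype=np.float64)
    for r44 in range(TX44_HC.shape[0]):            # step 2: one TX44 row
        TX44_SZ = TX44_HC[r44]
        for k in range(FGBL):                      # steps 3-4: FGBL TX11 rows
            TX11_SZ = TX11_HC[r44 * FGBL + k]
            # each TX44 pixel spans FGBL columns of the TX11 row (fig. 4),
            # fused here by the weighted average (TX11 + TX44) / 2
            TX14_HC[r44 * FGBL + k] = (TX11_SZ + np.repeat(TX44_SZ, FGBL)) / 2
    return TX14_HC
```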
Which fusion algorithm is applied to the images during the two-scene fusion calculation is only one concrete expression of this application; the algorithm is illustrated here with weighted-average fusion. The principle of the weighted-average image fusion algorithm is to give the pixel values of the original images the same weight and take their weighted average as the pixel value of the fused image. The formula is R_i = (B_i + Q) / 2, where B_i is the pixel value of the i-th band of the multispectral image, i is the band number, i = 1, 2, …, n, Q is the pixel value of the high-resolution image, and R_i is the fused image pixel value.
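In code, the rule applies independently to each band. The sketch below uses made-up pixel values and n = 4 bands, as in the worked example:

```python
import numpy as np

Q = np.array([100.0, 120.0])                       # high-resolution pixel values
# B_1 .. B_4: one pair of pixel values per multispectral band (made-up numbers)
bands = [np.array([80.0, 90.0]) + 10.0 * i for i in range(4)]
fused = [(B + Q) / 2 for B in bands]               # R_i = (B_i + Q) / 2 per band
```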
The fusion calculation process flow is shown in fig. 4.
The fusion calculation processing flow comprises the following steps:
Step 1: receive the TX11_HC one-row pixel array TX11_SZ; the TX44_HC one-row pixel array TX44_SZ; the TX14_HC one-row pixel array TX14_SZ; the pixel count per row of TX11_HC, H11_XYS, a constant; the pixel count per row of TX44_HC, H44_XYS, a constant; the pixel count per row of the fused image TX14_HC, H14_XYS, a constant; the pixel coverage ratio FGBL of the two images, a constant 4; the fused TX14_HC data storage address TX14_DZ; one TX44 pixel-value parameter TX44_XYZ; four TX11 pixel-value parameters TX11_XYZ[FGBL]; and the array start position YJSSY, initial value 0.
Step 2: compute the array start index YJSXY = YJSSY × FGBL; read one pixel value from TX44_SZ into the parameter TX44_XYZ; read FGBL pixel values in sequence from TX11_SZ into the parameters TX11_XYZ[].
Step 3: weighted-average fusion calculation, for i = 0 to 3:
TX14_HC[YJSXY + i] = (TX11_XYZ[i] + TX44_XYZ) / 2.
Step 4: YJSSY = YJSSY + 1.
Step 5: judge whether YJSSY = H44_XYS. If yes, go to step 6; if not, go to step 2.
Step 6: the fusion calculation processing flow ends.
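The steps above can be sketched in plain Python; the function name is illustrative, and the indices follow the flow's YJSSY/YJSXY bookkeeping:

```python
def fuse_row_pair(TX44_SZ, TX11_SZ, FGBL=4):
    """Fig. 4 per-row flow: each multispectral pixel TX44_XYZ is fused with
    the FGBL panchromatic pixels it covers, by weighted average."""
    TX14_SZ = [0.0] * len(TX11_SZ)
    YJSSY = 0                                   # position in the TX44 row
    while YJSSY < len(TX44_SZ):                 # step 5 loop condition
        YJSXY = YJSSY * FGBL                    # step 2: start index in TX11 row
        TX44_XYZ = TX44_SZ[YJSSY]
        for i in range(FGBL):                   # step 3: weighted average
            TX14_SZ[YJSXY + i] = (TX11_SZ[YJSXY + i] + TX44_XYZ) / 2
        YJSSY += 1                              # step 4
    return TX14_SZ
```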
As the flow chart of the two-scene fusion calculation in fig. 3 shows, the algorithm description fuses only N rows of data at a time, which is in essence a grouped slicing of the data; on the basis of this grouping, several two-scene fusion calculation processes can be started simultaneously, each computing only its selected group of N rows, which achieves multi-task parallel calculation.
The flow chart of the fusion calculation processing in fig. 4 likewise shows that this process only applies the given fusion rule, one multispectral pixel to four panchromatic pixels, to the two extracted rows of data. Since this processing also lends itself to grouped slicing of data, the method of the fig. 3 flow can equally be used to slice the two extracted rows of pixel data into groups, so the fig. 4 processing flow can itself be computed as multiple parallel tasks. In this way several multi-task parallel computations are nested inside each task of the outer multi-task parallel computation, achieving high-performance parallel calculation. The flow chart of the multi-task nested parallel computation is shown in fig. 5.
The multi-task nested parallel computation flow comprises the following steps:
Step 1: data grouping and slicing: divide the image data into n groups by row and start n tasks at the same time for parallel calculation.
Step 2: for each of the n tasks, carry out synchronously the two-scene image fusion calculation flow shown in fig. 3.
Step 3: slice the two extracted rows of pixel data into m groups and start m tasks at the same time for parallel calculation.
Step 4: for each of the m tasks, carry out synchronously the fusion calculation processing flow shown in fig. 4.
Step 5: when tasks 1 through n have all completed: end.
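A minimal sketch of the outer grouping (steps 1 and 2) using Python's thread pool, assuming the TX44 row count divides evenly into n tasks; `ThreadPoolExecutor` and the function names stand in for whatever process or accelerator tasks an implementation would actually use:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

FGBL = 4  # pixel coverage ratio of the two images

def fuse_group(pair):
    """Task body: fuse one group of rows (the fig. 3 / fig. 4 steps)."""
    tx44_rows, tx11_rows = pair
    out = np.empty_like(tx11_rows, dtype=np.float64)
    for r in range(tx44_rows.shape[0]):
        expanded = np.repeat(tx44_rows[r], FGBL)      # 1 pixel -> FGBL columns
        for k in range(FGBL):
            out[r * FGBL + k] = (tx11_rows[r * FGBL + k] + expanded) / 2
    return out

def parallel_fuse(TX44_HC, TX11_HC, n_tasks=2):
    """Step 1: slice the rows into n groups; start n tasks in parallel."""
    n_rows = TX44_HC.shape[0] // n_tasks
    groups = [(TX44_HC[i * n_rows:(i + 1) * n_rows],
               TX11_HC[i * n_rows * FGBL:(i + 1) * n_rows * FGBL])
              for i in range(n_tasks)]
    with ThreadPoolExecutor(max_workers=n_tasks) as pool:
        return np.vstack(list(pool.map(fuse_group, groups)))
```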
From the above description it can be seen that the core of this application is to construct two-dimensional data tables based on the one-to-one spatial topological relation between spatial position and pixels of the two images, and to realize multi-task nested parallel computation by grouped slicing of the data, thereby achieving high-speed, efficient, high-performance parallel calculation.
Step 4: save the fused image file.
Construct the fused image data file from the spatial geographic coordinate information, the maximum row and column counts, and all pixel information of image TX14.
Step 5: the fusion flow finishes and exits.
As described above, although the embodiments of the present invention have been described in detail, it will be apparent to those skilled in the art that many modifications are possible without substantially departing from the spirit and scope of the present invention. Therefore, such modifications are also all included in the scope of protection of the present invention.

Claims (1)

1. An image fusion method based on a parallel computing algorithm, characterized by comprising the following steps: by resampling the data of the two source images, constructing two-dimensional data tables describing each image's spatial position and pixel information, and through these tables establishing a pixel-level topological correspondence between the two images; on the basis of this one-to-one pixel-level topological relation, slicing the image data into groups by row, by column, or by data block, and starting the corresponding processing flows synchronously as multiple threads and tasks for parallel computation according to the available computing resources, thereby achieving fast fusion of the two images;
using the images' row and column counts as data templates, creating three image data buffers: panchromatic TX11_HC, multispectral TX44_HC, and fused TX14_HC; by resampling the spatial coordinates of the two images, resampling the pixels of the panchromatic image TX11 and the multispectral image TX44 into the buffer templates TX11_HC and TX44_HC, forming two-dimensional tables built from spatial coordinates and pixels, i.e. establishing a complete one-to-one correspondence in space and pixels between 1 row of data in the TX44_HC buffer and 4 rows of data in the TX11_HC buffer, whereby the fusion of the two images can be completed simply by applying a fusion calculation operation to these data according to a given algorithm;
the method further comprises the following steps:
The first step, acquiring basic information of the two scene images to be fused;
The second step, creating data buffers for the two images;
The third step, performing fusion calculation on the two scene images;
The fourth step, saving the fused image file;
The fifth step, finishing the fusion;
The first step, acquiring basic information of the two scene images to be fused, comprises the following steps:
step 1.1, acquiring the row and column number information of the two scene images;
step 1.2, acquiring the spatial resolution of the two scene images;
step 1.3, acquiring the geographic coordinate information of the spatial position of each image;
step 1.4, determining the number of rows and columns, the number of bands, and the geographic coordinate range of the fused image;
step 1.5, obtaining the size proportion relation, within the spatial coverage, of each pixel of the two images to be fused;
The second step, creating the data buffers for the two scene images, comprises the following steps:
step 2.1, creating a panchromatic image TX11_HC data buffer;
step 2.2, creating a multispectral image TX44_HC data buffer;
step 2.3, creating a fused image TX14_HC data buffer;
step 2.4, determining the correspondence of the pixel data in the data buffers;
step 2.5, obtaining the size proportion relation, within the spatial coverage, of each pixel of the two images to be fused;
The third step, performing the fusion calculation of the two scene images, comprises the following steps: grouping and slicing the images to be fused so that N rows are fused per calculation, and starting several calculation processes simultaneously for synchronous parallel computation; each time, one row of data is read from the multispectral data cache and four rows of data are read from the panchromatic data cache, fusion calculation is performed according to the requirements of the fusion calculation, and the calculation result is stored in the fused-image data cache;
grouping and slicing the images to be fused so that N rows are fused per calculation, and starting several calculation processes simultaneously for synchronous parallel computation, with the TX44_HC data buffer holding the pixel data of one band of the image TX44;
each time, one row of data is read from the TX44_HC data buffer and four rows of data are read from the TX11_HC data buffer, fusion calculation is performed according to the requirements of the fusion calculation, and the calculation result is saved in the TX14_HC data buffer;
the two-scene image fusion calculation process comprises the following steps:
step 1, setting the number of rows fused per group, RH_HS, a constant N; the number of rows already fused, YRH_HS, initial value 0; the number of pixels per row of TX11_HC, H11_XYS, a constant; the number of pixels per row of the image TX44_HC, H44_XYS, a constant; the number of pixels per row of the fused image TX14_HC, H14_XYS, a constant; setting the TX11_HC data storage address TX11_DZ; the TX44_HC data storage address TX44_DZ; the fused TX14_HC data storage address TX14_DZ; setting a temporary array TX44_SZ for one row of TX44_HC pixels; setting an array TX11_SZ for one row of TX11_HC pixels; setting an array TX14_SZ for one row of TX14_HC pixels; the pixel coverage proportion parameter FGBL of the two scene images, a constant 4; and setting the number of high-resolution image rows read, GFBL_HS, initial value 0;
step 2, reading one row of TX44_HC data from the address TX44_DZ and storing it into TX44_SZ;
step 3, reading one row of TX11_HC data from the address TX11_DZ and storing it into TX11_SZ; GFBL_HS = GFBL_HS + 1;
step 4, judging whether GFBL_HS == FGBL; if yes, going to step 5; if not, going to step 8;
step 5, setting GFBL_HS = 0 and YRH_HS = YRH_HS + 1; judging whether YRH_HS == RH_HS; if yes, going to step 6; if not, going to step 7;
step 6, ending the two-scene fusion calculation process;
step 7, calculating the address TX44_DZ = TX44_DZ + YRH_HS × H44_XYS;
calculating the address TX11_DZ = TX11_DZ + YRH_HS × FGBL × H11_XYS;
calculating the address TX14_DZ = TX14_DZ + YRH_HS × FGBL × H14_XYS;
step 8, entering the fusion calculation processing flow;
calculating the address TX11_DZ = TX11_DZ + GFBL_HS × H11_XYS;
step 9, going to step 3;
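The row-driving loop above (steps 1–9) can be sketched in Python as follows. This is an illustrative sketch, not the patent's implementation: NumPy row indexing replaces the explicit address arithmetic on TX11_DZ/TX44_DZ/TX14_DZ, and the inner fusion is the claim's weighted average written in vectorized form:

```python
import numpy as np

FGBL = 4  # pixel coverage proportion: one TX44 pixel covers FGBL x FGBL TX11 pixels

def fusion_driver(TX11_HC, TX44_HC, TX14_HC):
    """Two-scene fusion process: for every multispectral row (TX44_SZ),
    read the FGBL panchromatic rows it covers (TX11_SZ), fuse each pair,
    and store the fused rows into TX14_HC."""
    RH_HS = TX44_HC.shape[0]              # rows to fuse in this group
    for YRH_HS in range(RH_HS):
        TX44_SZ = TX44_HC[YRH_HS]         # step 2: one TX44_HC row
        for GFBL_HS in range(FGBL):       # steps 3-4: FGBL TX11_HC rows
            TX11_SZ = TX11_HC[YRH_HS * FGBL + GFBL_HS]
            # weighted-average fusion of this panchromatic row with the
            # multispectral row, each TX44 pixel repeated over FGBL columns
            TX14_HC[YRH_HS * FGBL + GFBL_HS] = (TX11_SZ + np.repeat(TX44_SZ, FGBL)) / 2
```

Here array indexing plays the role of the step-7/step-8 address calculations: advancing one multispectral row moves FGBL rows forward in the panchromatic and fused buffers.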
the fusion calculation processing flow comprises the following steps:
step 1, receiving the TX11_HC row pixel array TX11_SZ; the TX44_HC row pixel array TX44_SZ; the TX14_HC row pixel array TX14_SZ; the number of pixels per row of TX11_HC, H11_XYS, a constant; the number of pixels per row of the image TX44_HC, H44_XYS, a constant; the number of pixels per row of the fused image TX14_HC, H14_XYS, a constant; the pixel coverage proportion parameter FGBL of the two scene images, a constant 4; the fused TX14_HC data storage address TX14_DZ; a TX44 pixel value parameter TX44_XYZ; four TX11 pixel value parameters TX11_XYZ[FGBL]; and the array start position YJSSY, initial value 0;
step 2, calculating the array start position YJSXY = YJSSY × FGBL;
reading one pixel value from TX44_SZ into the parameter TX44_XYZ;
sequentially reading FGBL pixel values from TX11_SZ into the parameter array TX11_XYZ[];
step 3, weighted average fusion calculation, for i = 0 to 3:
TX14_HC[YJSXY + i] = (TX11_XYZ[i] + TX44_XYZ)/2;
step 4, YJSSY = YJSSY + 1;
step 5, judging whether YJSSY == H44_XYS; if yes, going to step 6; if not, going to step 2;
step 6, ending the fusion calculation processing flow;
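The per-pixel processing flow above can be sketched as a small Python function (illustrative, not the patent's code); lowercase names mirror the claim's identifiers, and the output row is returned rather than written through a stored address:

```python
import numpy as np

def fuse_row(tx11_sz, tx44_sz, fgbl=4):
    """Fusion calculation processing flow: walk the multispectral row pixel
    by pixel; for each pixel value TX44_XYZ, average it with each of the
    fgbl panchromatic pixels TX11_XYZ[0..fgbl-1] that it covers."""
    h44_xys = len(tx44_sz)                       # pixels per multispectral row
    tx14_sz = np.empty(h44_xys * fgbl, dtype=np.float64)
    for yjssy in range(h44_xys):                 # steps 4-5: advance YJSSY
        yjsxy = yjssy * fgbl                     # step 2: YJSXY = YJSSY * FGBL
        tx44_xyz = tx44_sz[yjssy]                # one TX44 pixel value
        for i in range(fgbl):                    # step 3: weighted-average fusion
            tx14_sz[yjsxy + i] = (tx11_sz[yjsxy + i] + tx44_xyz) / 2
    return tx14_sz
```

For example, fusing the panchromatic row [10, 20, 30, 40] with the single multispectral pixel [100] yields [55, 60, 65, 70].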
grouping and slicing the data, and starting several two-scene image fusion calculation processes on the basis of the data grouping, each process computing only over its selected group of N rows of data and completing, according to the given fusion algorithm rule, the fusion calculation processing in which one datum is fused with four data; since the fusion calculation processing flow likewise has the character of data grouping and slicing, the two extracted rows of pixel data are themselves grouped and sliced, the fusion calculation processing flow is executed as a multi-task parallel computation, and further multi-task parallel computations are nested within each task of the multi-task parallel computation;
the multi-task nested parallel computing flow comprises the following steps:
step 1, data grouping and slicing: dividing the image data into n groups by rows, and starting n tasks simultaneously for parallel computation;
step 2, each of the n tasks synchronously executes the two-scene image fusion calculation flow;
step 3, grouping and slicing the two extracted rows of pixel data into m groups, and starting m tasks simultaneously for parallel computation;
step 4, each of the m tasks synchronously executes the fusion calculation processing flow;
step 5, tasks 1 through n of the n tasks: finish;
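The outer level of the nested parallel scheme can be sketched with Python's thread pool (an illustrative sketch only; the patent does not prescribe an API, and the inner m-task level is shown here as a plain loop inside each outer task for brevity). Function and variable names are assumptions:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

FGBL = 4

def fuse_rows(pan_row, ms_row):
    # inner fusion: average each multispectral pixel with the FGBL
    # panchromatic pixels it covers
    return (pan_row + np.repeat(ms_row, FGBL)) / 2

def fuse_group(pan, ms, fused, group):
    # one of the n outer tasks: runs the two-scene fusion flow over its
    # slice of multispectral rows
    for r in group:
        for i in range(FGBL):   # the FGBL panchromatic rows covered by ms row r
            fused[r * FGBL + i] = fuse_rows(pan[r * FGBL + i], ms[r])

def parallel_fuse(pan, ms, n_tasks=4):
    """Step 1: slice the multispectral rows into n groups; steps 2-5:
    start n tasks that fuse their groups in parallel."""
    fused = np.empty_like(pan, dtype=np.float64)
    groups = np.array_split(np.arange(ms.shape[0]), n_tasks)
    with ThreadPoolExecutor(max_workers=n_tasks) as pool:
        for g in groups:
            pool.submit(fuse_group, pan, ms, fused, g)
    # leaving the context manager waits for all n tasks to finish
    return fused
```

Each task writes only its own disjoint rows of the shared output buffer, so the n tasks need no locking, which is what makes the row-group slicing attractive for this fusion.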
The fourth step, saving the fused image file, comprises the following step: constructing the fused image data file according to the spatial geographic coordinate information, the maximum row and column number information, and all the pixel information of the image TX14.
CN201910405981.0A 2019-05-16 2019-05-16 Image fusion method based on parallel computing algorithm Active CN110335220B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910405981.0A CN110335220B (en) 2019-05-16 2019-05-16 Image fusion method based on parallel computing algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910405981.0A CN110335220B (en) 2019-05-16 2019-05-16 Image fusion method based on parallel computing algorithm

Publications (2)

Publication Number Publication Date
CN110335220A CN110335220A (en) 2019-10-15
CN110335220B true CN110335220B (en) 2021-08-24

Family

ID=68139603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910405981.0A Active CN110335220B (en) 2019-05-16 2019-05-16 Image fusion method based on parallel computing algorithm

Country Status (1)

Country Link
CN (1) CN110335220B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114690391A (en) * 2020-12-29 2022-07-01 光原科技(深圳)有限公司 Light sheet fluorescence microscope, image processing system and image processing method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521815A (en) * 2011-11-02 2012-06-27 薛笑荣 Fast fusion system and fast fusion method for images
CN106991665A (en) * 2017-03-24 2017-07-28 中国人民解放军国防科学技术大学 Method based on CUDA image co-registration parallel computations
CN109461121A (en) * 2018-11-06 2019-03-12 中国林业科学研究院资源信息研究所 A kind of image co-registration joining method based on parallel algorithms
CN109493331A (en) * 2018-11-06 2019-03-19 中国林业科学研究院资源信息研究所 A kind of two scape image overlapping region fast acquiring methods based on parallel algorithms


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Cluster-based parallel computing strategy for ocean remote sensing image fusion; Li Xiantao et al.; Computer Applications and Software; 2012-01-31; Vol. 29, No. 1; pp. 84-87 *

Also Published As

Publication number Publication date
CN110335220A (en) 2019-10-15

Similar Documents

Publication Publication Date Title
CN107610131A (en) A kind of image cropping method and image cropping device
CN105678683B (en) A kind of two-dimensional storage method of threedimensional model
CN111402374B (en) Multi-path video and three-dimensional model fusion method, device, equipment and storage medium thereof
CN103632359B (en) A kind of video super-resolution disposal route
CN102184522A (en) Vertex data storage method, graphic processing unit and refinement device
CN102270236A (en) Rasterized geographic information system (GIS)-based spatial relationship judging method and system
KR101591427B1 (en) Method for Adaptive LOD Rendering in 3-D Terrain Visualization System
CN105894551B (en) Image drawing method and device
CN101714261A (en) Graphics processing systems
CN112288665A (en) Image fusion method and device, storage medium and electronic equipment
CN115375868B (en) Map display method, remote sensing map display method, computing device and storage medium
CN114219819A (en) Oblique photography model unitization method based on orthoscopic image boundary detection
CN101714258A (en) Graphics processing systems
CN108182212A (en) A kind of photomap dispatching method and display system based on aeroplane photography
CN113223159B (en) Single remote sensing image three-dimensional modeling method based on target texture virtualization processing
CN108629742B (en) True ortho image shadow detection and compensation method, device and storage medium
CN110335220B (en) Image fusion method based on parallel computing algorithm
CN114820945A (en) Sparse sampling-based method and system for generating image from ring shot image to any viewpoint image
CN102682424B (en) Image amplification processing method based on edge direction difference
CN109493331B (en) Method for rapidly acquiring overlapping area of two images based on parallel computing algorithm
CN112883907B (en) Landslide detection method and device for small-volume model
CN110110028A (en) A kind of method and system showing map by self defined area towards OGC standard
CN106210696A (en) A kind of method and device of real-time virtual View Synthesis
CN111047569B (en) Image processing method and device
CN117612153A (en) Three-dimensional target identification and positioning method based on image and point cloud information completion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant