CN103279923A - Partial image fusion processing method based on overlapped region - Google Patents

Partial image fusion processing method based on overlapped region

Info

Publication number
CN103279923A
CN103279923A CN2013102343355A CN201310234335A
Authority
CN
China
Prior art keywords
image
fusion
carry out
size
img1
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013102343355A
Other languages
Chinese (zh)
Other versions
CN103279923B (en)
Inventor
刘贵喜
卢海鹏
聂婷
刘荣荣
董亮
张菁超
常露
王明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201310234335.5A priority Critical patent/CN103279923B/en
Publication of CN103279923A publication Critical patent/CN103279923A/en
Application granted granted Critical
Publication of CN103279923B publication Critical patent/CN103279923B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a partial image fusion processing method based on an overlapped region. The overlapped region is registered, located and extracted, and is then processed according to the selected fusion algorithm. The method achieves fusion of two images that share only a partially overlapped region, overcomes the limitation of traditional fusion algorithms, which can only fuse whole images of identical size that are already fully registered, and improves the adaptability and practicality of fusion algorithms and related software.

Description

Partial Image Fusion Processing Method Based on Overlapping Region

Technical Field

The present invention relates to image fusion and its applications, and in particular to a partial image fusion processing method based on overlapping regions.

Background Art

Image fusion is an important branch of multi-sensor data fusion. It combines multiple images obtained from different sensors according to some algorithm to produce a new image that meets a given requirement.

At present, the widely used fusion algorithms mainly include simple fusion algorithms, component substitution algorithms, the Brovey algorithm, high-pass filtering (HPF) fusion algorithms, and multi-scale, multi-resolution analysis fusion algorithms. However, most existing research and algorithm implementations rest on certain preconditions: 1) the original images taking part in the fusion must be of the same size and already fully registered; 2) the fusion is global, that is, the two images must contain exactly the same content and the fused area is the whole image; 3) for the wavelet and Contourlet fusion algorithms, in addition to the above, the image size must be N*N with N an integer power of 2. These preconditions are often inconsistent with real applications and therefore limit the applicability of image fusion algorithms and related software.

In practical applications, the images taking part in fusion are often of different sizes, unregistered (with translation, rotation and scale changes between them), and share only a partially overlapping region. Traditional fusion methods cannot handle this case at all, and the existing literature and related materials offer no direct solution.

Summary of the Invention

The purpose of the present invention is to provide a partial image fusion processing method based on overlapping regions. By registering the read-in images, locating and extracting the overlapping region, processing and fusing the overlapping region, and seamlessly stitching the non-overlapping image regions, the invention realizes partial image fusion of the overlapping region. The invention achieves a good image processing result and improves the adaptability and practicality of fusion algorithms and related software. Its key steps are registering, locating and extracting the overlapping region, and processing the overlapping region according to the chosen fusion algorithm.

The technical solution of the present invention is a partial image fusion processing method based on overlapping regions, characterized by comprising the following steps:

Step 101: start the partial image fusion processing method based on overlapping regions;

Step 102: import two images with a partially overlapping region, labeled img1 and img2;

Step 103: select a registration algorithm, obtain the relevant parameters and decision images, and perform the corresponding processing;

Step 104: select a fusion algorithm, and locate, extract and process the overlapping region according to the result of step 103 and the selected fusion algorithm, to achieve fusion;

Step 105: seamlessly stitch the non-overlapping image regions;

Step 106: end the partial image fusion processing method based on overlapping regions.

Said step 103 comprises the following steps:

Step 201: start selecting the registration algorithm;

Step 202: extract feature points from the two read-in images, the point sets being labeled P1 and P2 respectively;

Step 203: use feature descriptors to form coarsely matched feature-point pairs, label the result Q1, and obtain a diagram of the coarse matching result, labeled PZ_CD;

Step 204: apply RANSAC to Q1 to obtain finely matched point pairs, label the result Q2, and obtain a diagram of the fine matching result, labeled PZ_JD;

Step 205: apply the least-squares method to Q2 to obtain the transformation matrix H that registers the two images, yielding the translation, scaling and rotation parameters;

Step 206: from the sizes of the two read-in images and the transformation matrix H obtained in step 205, compute the size of the smallest canvas that holds the two images after fusion and stitching, create six blackboard images of that size, PZ1, PZ1_PD, PZ1_CH and PZ2, PZ2_PD, PZ2_CH, and initialize them to all zeros; at the same time obtain the offset at which img1 is placed in that canvas, labeled DX, DY;

Step 207: according to DX, DY obtained in step 206 and the transformation matrix H, place img1 in images PZ1 and PZ1_PD and img2 in images PZ2 and PZ2_PD, and process PZ1_PD and PZ2_PD;

Step 208: end selecting the registration algorithm.

Said step 207 comprises the following steps:

Step 301: start placing img1 and img2 and processing PZ1_PD and PZ2_PD;

Step 302: place img1 in images PZ1 and PZ1_PD according to DX, DY obtained in step 206;

Step 303: scan PZ1_PD row by row and column by column, set all values of the region of PZ1_PD that holds img1 to 255 and all other values to 0;

Step 304: replace the horizontal offset and vertical offset in the transformation matrix H obtained in step 205 with DX and DY respectively, obtaining a new transformation matrix H_XIN;

Step 305: process img2 with the new matrix H_XIN, and place the transformed img2 in PZ2 and PZ2_PD;

Step 306: scan PZ2_PD row by row and column by column, set all values of the region of PZ2_PD that holds img2 to 255 and all other values to 0;

Step 307: finish placing img1 and img2 and processing PZ1_PD and PZ2_PD.

Said step 104 comprises the following steps:

Step 401: start selecting the fusion algorithm;

Step 402: scan the image PZ1 row by row and column by column; if PZ1_PD and PZ2_PD are both 255 at a position, that position belongs to the overlapping region of images PZ1 and PZ2, and the pixel value at that position is stored at the corresponding position of image PZ1_CH. The image PZ2 is processed in the same way, with the result stored in PZ2_CH;

Step 403: preprocess the images PZ1_CH and PZ2_CH according to the selected fusion algorithm and complete the fusion process, storing the result in Fusion;

Step 404: end selecting the fusion algorithm.

Said step 403 comprises the following steps:

Step 501: start preprocessing and fusing images PZ1_CH and PZ2_CH according to the different fusion algorithms;

Step 502: select a fusion method; if the selected fusion method is neither wavelet fusion nor Contourlet fusion, go directly to step 505, otherwise go to step 503;

Step 503: judge whether the size of images PZ1_CH and PZ2_CH is N*N with N an integer power of 2; if not, go to step 504, otherwise go to step 505;

Step 504: trim the size of images PZ1_CH and PZ2_CH to the smallest integer power of 2 greater than the maximum of the length and width of PZ1_CH and PZ2_CH. With this new size, generate blackboard images PZ1_CH_XIN and PZ2_CH_XIN of that size initialized to all zeros, and place PZ1_CH and PZ2_CH in PZ1_CH_XIN and PZ2_CH_XIN respectively, starting from position (0,0);

Step 505: perform fusion on the images PZ1_CH and PZ2_CH, or on PZ1_CH_XIN and PZ2_CH_XIN obtained from step 504, labeling the fusion result Fusion;

Step 506: judge whether size trimming was performed; if step 504 was executed, step 507 is also required, otherwise go directly to step 508;

Step 507: process the fused image Fusion by extracting, starting from position (0,0), only the part whose size equals that of PZ1_CH, and assign it to the corrected Fusion, whose size equals that of PZ1_CH;

Step 508: end preprocessing and fusing images PZ1_CH and PZ2_CH according to the different fusion algorithms.

Said step 105 comprises the following steps:

Step 601: start seamlessly stitching the non-overlapping image regions;

Step 602: scan the PZ1_PD image row by row and column by column; when a pixel value in PZ1_PD equals 0, replace the pixel value at the same position in Fusion with the pixel value of img2 at that position;

Step 603: scan the PZ2_PD image row by row and column by column; when a pixel value in PZ2_PD equals 0, replace the pixel value at the same position in Fusion with the pixel value of img1 at that position;

Step 604: end seamlessly stitching the non-overlapping image regions.

The advantages of the present invention are that it overcomes the limitation of traditional fusion algorithms, which only handle global fusion of whole images of identical size that are fully registered, and realizes: 1) fusion of images of different sizes that are unregistered (often with translation, rotation and scale changes between them) and share only a partially overlapping region; 2) wavelet and Contourlet fusion when the overlapping-region image is irregular (its size is not N*N with N an integer power of 2), with a good fusion result; 3) locating, extracting and fusing the overlapping region, and seamlessly stitching the non-overlapping image regions.

The invention breaks through the harsh preconditions of conventional fusion processing methods, lowers the requirements on the images taking part in fusion, and has good adaptability and practicality.

Brief Description of the Drawings

Fig. 1 is the main flowchart of the partial image fusion processing method based on overlapping regions;

Fig. 2 is the flowchart of selecting a registration algorithm, obtaining the relevant parameters and decision images, and performing the corresponding processing;

Fig. 3 is the flowchart of placing img1 and img2 and processing PZ1_PD and PZ2_PD;

Fig. 4 is the flowchart of selecting a fusion algorithm and, according to the result of step 103 and the selected fusion algorithm, locating, extracting and processing the overlapping region to achieve fusion;

Fig. 5 is the flowchart of preprocessing and fusing images PZ1_CH and PZ2_CH according to the different fusion algorithms;

Fig. 6 is the flowchart of seamlessly stitching the non-overlapping image regions.

Detailed Description of the Embodiments

The key steps of the partial image fusion processing method based on overlapping regions are registering, locating and extracting the overlapping region, and processing the overlapping region according to the chosen fusion algorithm. Because the two images share only a partial overlapping region, the images must first be registered to obtain the relevant parameters (translation, scaling parameters, rotation parameters, and so on), and the overlapping region is then located and extracted. The extracted overlapping region may be a regular image, or an image of arbitrary size and shape. In addition, some fusion algorithms place special requirements on the image size, so the overlapping-region images must be processed appropriately before the local fusion can be performed.

The method is characterized as follows: first, the two read-in images with a partially overlapping region are registered and the relevant parameters and decision images are obtained; then the overlapping region of the registered images is located, extracted and fused (note that, for some fusion algorithms, the overlapping-region images need additional processing before fusion); finally, the non-overlapping image regions are seamlessly stitched by means of the decision images. This completes the partial image fusion of the two read-in images with a partially overlapping region.

As shown in Fig. 1, the steps of the main flowchart are:

Step 101: start the partial image fusion processing method based on overlapping regions;

Step 102: import two images with a partially overlapping region, labeled img1 and img2;

Step 103: select a registration algorithm, obtain the relevant parameters and decision images, and perform the corresponding processing;

Step 104: select a fusion algorithm, and locate, extract and process the overlapping region according to the result of step 103 and the selected fusion algorithm, to achieve fusion;

Step 105: seamlessly stitch the non-overlapping image regions;

Step 106: end the partial image fusion processing method based on overlapping regions.

As shown in Fig. 2,

said step 103 comprises the following steps:

Step 201: start selecting the registration algorithm;

Step 202: extract feature points from the two read-in images, the point sets being labeled P1 and P2 respectively;

Step 203: use feature descriptors to form coarsely matched feature-point pairs, label the result Q1, and obtain a diagram of the coarse matching result, labeled PZ_CD;

Step 204: apply RANSAC to Q1 to obtain finely matched point pairs, label the result Q2, and obtain a diagram of the fine matching result, labeled PZ_JD;

Step 205: apply the least-squares method to Q2 to obtain the transformation matrix H that registers the two images, yielding the translation, scaling and rotation parameters;
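
A minimal Python sketch of steps 202-205 follows, using OpenCV and NumPy. ORB features, brute-force Hamming matching, and cv2.findHomography with RANSAC (which screens Q1 and refits H on the surviving pairs) are assumed stand-ins chosen only for illustration; the patent does not prescribe a particular detector, descriptor or estimator, and the synthetic image pair merely imitates img1 and img2.

```python
import numpy as np
import cv2

# Synthetic stand-ins for img1 and img2 of step 102: two overlapping crops
# of one textured scene (placeholder data, not from the patent).
rng = np.random.default_rng(0)
scene = (rng.random((400, 600)) * 255).astype(np.uint8)
scene = cv2.GaussianBlur(scene, (5, 5), 0)
img1 = scene[:, :400]      # left 400 columns
img2 = scene[:, 200:]      # right 400 columns; 200 columns overlap img1

# Steps 202-203: feature point sets P1, P2 and coarse matches Q1.
orb = cv2.ORB_create(1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
Q1 = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

# Steps 204-205: RANSAC screening of Q1, then a fit on the surviving pairs
# gives the transformation matrix H (img2 -> img1 coordinates), from which
# translation, scale and rotation parameters can be read off.
pts1 = np.float32([kp1[m.queryIdx].pt for m in Q1]).reshape(-1, 1, 2)
pts2 = np.float32([kp2[m.trainIdx].pt for m in Q1]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(pts2, pts1, cv2.RANSAC, 5.0)
print("estimated H:\n", H)   # ideally close to a pure shift of +200 px in x
```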

Step 206: from the sizes of the two read-in images and the transformation matrix H obtained in step 205, compute the size of the smallest canvas that holds the two images after fusion and stitching, create six blackboard images of that size, PZ1, PZ1_PD, PZ1_CH and PZ2, PZ2_PD, PZ2_CH, and initialize them to all zeros; at the same time obtain the offset at which img1 is placed in that canvas, labeled DX, DY;
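
The canvas-size computation of step 206 is not spelled out in the patent; the sketch below shows one plausible realization under the assumption that H maps img2 coordinates into img1 coordinates: warp the corners of img2 by H, take the bounding box of those corners together with the corners of img1, and derive the canvas size and the offset DX, DY of img1 inside it.

```python
import numpy as np
import cv2

def canvas_size_and_offset(h1, w1, h2, w2, H):
    """One reading of step 206: bounding box of img1's corners and of img2's
    corners warped by H gives the smallest canvas; (DX, DY) is where img1's
    origin lands on that canvas."""
    c1 = np.float32([[0, 0], [w1, 0], [w1, h1], [0, h1]]).reshape(-1, 1, 2)
    c2 = np.float32([[0, 0], [w2, 0], [w2, h2], [0, h2]]).reshape(-1, 1, 2)
    c2_in_1 = cv2.perspectiveTransform(c2, H)          # img2 corners in img1 coords
    allc = np.vstack([c1, c2_in_1]).reshape(-1, 2)
    mn = np.floor(allc.min(axis=0)).astype(int)
    mx = np.ceil(allc.max(axis=0)).astype(int)
    width, height = (mx - mn).tolist()
    DX, DY = (-mn).tolist()                            # shift of img1 on the canvas
    return (height, width), (DX, DY)

# Example with an assumed pure-translation H (img2 lies 200 px to the right):
H = np.array([[1, 0, 200], [0, 1, 0], [0, 0, 1]], dtype=np.float64)
(canvas_h, canvas_w), (DX, DY) = canvas_size_and_offset(400, 400, 400, 400, H)
PZ1, PZ1_PD, PZ1_CH, PZ2, PZ2_PD, PZ2_CH = (
    np.zeros((canvas_h, canvas_w), np.uint8) for _ in range(6))  # six blackboards
print(canvas_h, canvas_w, DX, DY)    # 400 600 0 0
```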

Step 207: according to DX, DY obtained in step 206 and the transformation matrix H, place img1 in images PZ1 and PZ1_PD and img2 in images PZ2 and PZ2_PD, and process PZ1_PD and PZ2_PD;

Step 208: end selecting the registration algorithm.

As shown in Fig. 3,

said step 207 comprises the following steps:

Step 301: start placing img1 and img2 and processing PZ1_PD and PZ2_PD;

Step 302: place img1 in images PZ1 and PZ1_PD according to DX, DY obtained in step 206;

Step 303: scan PZ1_PD row by row and column by column, set all values of the region of PZ1_PD that holds img1 to 255 and all other values to 0;

Step 304: replace the horizontal offset and vertical offset in the transformation matrix H obtained in step 205 with DX and DY respectively, obtaining a new transformation matrix H_XIN;

Step 305: process img2 with the new transformation matrix H_XIN, and place the transformed img2 in PZ2 and PZ2_PD;

Step 306: scan PZ2_PD row by row and column by column, set all values of the region of PZ2_PD that holds img2 to 255 and all other values to 0;

Step 307: finish placing img1 and img2 and processing PZ1_PD and PZ2_PD.
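
The following sketch illustrates steps 302-306 with NumPy and OpenCV on flat placeholder images. Folding the canvas offset DX, DY into the translation column of H is my reading of how H_XIN places the warped img2 on the canvas, and warping an all-255 mask with the same H_XIN is an assumed shortcut for marking the img2 footprint in PZ2_PD.

```python
import numpy as np
import cv2

canvas_h, canvas_w, DX, DY = 400, 600, 0, 0          # from the step 206 sketch
img1 = np.full((400, 400), 100, np.uint8)            # flat placeholder inputs
img2 = np.full((400, 400), 200, np.uint8)
H = np.array([[1, 0, 200], [0, 1, 0], [0, 0, 1]], np.float64)   # img2 -> img1

# Steps 302-303: place img1 at (DX, DY) on PZ1 and mark its footprint in PZ1_PD.
PZ1 = np.zeros((canvas_h, canvas_w), np.uint8)
PZ1_PD = np.zeros_like(PZ1)
PZ1[DY:DY + img1.shape[0], DX:DX + img1.shape[1]] = img1
PZ1_PD[DY:DY + img1.shape[0], DX:DX + img1.shape[1]] = 255

# Step 304: fold the canvas offset (DX, DY) into H to obtain H_XIN.  The patent
# phrases this as substituting DX, DY for the offsets of H; adding them to the
# translation column is my interpretation of how warped img2 lands on the canvas.
H_XIN = H.copy()
H_XIN[0, 2] += DX
H_XIN[1, 2] += DY

# Steps 305-306: warp img2 with H_XIN onto PZ2, and warp an all-255 mask the
# same way so PZ2_PD is 255 exactly where the transformed img2 lands.
PZ2 = cv2.warpPerspective(img2, H_XIN, (canvas_w, canvas_h))
PZ2_PD = cv2.warpPerspective(np.full(img2.shape, 255, np.uint8), H_XIN,
                             (canvas_w, canvas_h), flags=cv2.INTER_NEAREST)
```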

As shown in Fig. 4,

said step 104 comprises the following steps:

Step 401: start selecting the fusion algorithm;

Step 402: scan the image PZ1 row by row and column by column; if PZ1_PD and PZ2_PD are both 255 at a position, that position belongs to the overlapping region of images PZ1 and PZ2, and the pixel value at that position is stored at the corresponding position of image PZ1_CH. The image PZ2 is processed in the same way, with the result stored in PZ2_CH;
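
A sketch of the overlap test of step 402, with placeholder canvases and masks standing in for the results of steps 206-207; the vectorized NumPy comparison replaces the row-by-row, column-by-column scan but visits the same positions.

```python
import numpy as np

canvas_h, canvas_w = 400, 600
PZ1 = np.zeros((canvas_h, canvas_w), np.uint8); PZ1[:, :400] = 100     # placed img1
PZ2 = np.zeros((canvas_h, canvas_w), np.uint8); PZ2[:, 200:] = 200     # warped img2
PZ1_PD = np.zeros((canvas_h, canvas_w), np.uint8); PZ1_PD[:, :400] = 255
PZ2_PD = np.zeros((canvas_h, canvas_w), np.uint8); PZ2_PD[:, 200:] = 255

# Step 402: a canvas position belongs to the overlapping region exactly where
# both footprint masks are 255; copy those pixel values into PZ1_CH and PZ2_CH
# at the corresponding positions.
overlap = (PZ1_PD == 255) & (PZ2_PD == 255)
PZ1_CH = np.zeros_like(PZ1)
PZ2_CH = np.zeros_like(PZ2)
PZ1_CH[overlap] = PZ1[overlap]
PZ2_CH[overlap] = PZ2[overlap]
print(int(overlap.sum()))   # 400 * 200 = 80000 overlapping pixels in this example
```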

Step 403: preprocess the images PZ1_CH and PZ2_CH according to the selected fusion algorithm and complete the fusion process, storing the result in Fusion;

Step 404: end selecting the fusion algorithm.

As shown in Fig. 5,

said step 403 comprises the following steps:

Step 501: start preprocessing and fusing images PZ1_CH and PZ2_CH according to the different fusion algorithms;

Step 502: select a fusion method; if the selected fusion method is neither wavelet fusion nor Contourlet fusion, go directly to step 505, otherwise go to step 503;

Step 503: judge whether the size of images PZ1_CH and PZ2_CH is N*N with N an integer power of 2; if not, go to step 504, otherwise go to step 505;

Step 504: trim the size of images PZ1_CH and PZ2_CH to the smallest integer power of 2 greater than the maximum of the length and width of PZ1_CH and PZ2_CH. With this new size, generate blackboard images PZ1_CH_XIN and PZ2_CH_XIN of that size initialized to all zeros, and place PZ1_CH and PZ2_CH in PZ1_CH_XIN and PZ2_CH_XIN respectively, starting from position (0,0);
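
Steps 503-504 can be sketched as follows; the padded size, the zero-filled blackboards and the placement from position (0,0) follow the text, while the concrete overlap shapes are only placeholders.

```python
import numpy as np

def pad_to_power_of_two(a, b):
    """Steps 503-504 (a sketch): embed both overlap images at (0, 0) in
    zero-filled N*N blackboards, N being the smallest power of two strictly
    greater than the largest side of either image."""
    n = 1
    while n <= max(a.shape[0], a.shape[1], b.shape[0], b.shape[1]):
        n *= 2
    a_xin = np.zeros((n, n), a.dtype)
    b_xin = np.zeros((n, n), b.dtype)
    a_xin[:a.shape[0], :a.shape[1]] = a
    b_xin[:b.shape[0], :b.shape[1]] = b
    return a_xin, b_xin

# Irregular overlap stand-ins of arbitrary size (placeholder shapes):
PZ1_CH = np.full((400, 200), 100, np.uint8)
PZ2_CH = np.full((400, 200), 200, np.uint8)
PZ1_CH_XIN, PZ2_CH_XIN = pad_to_power_of_two(PZ1_CH, PZ2_CH)
print(PZ1_CH_XIN.shape)   # (512, 512) -- acceptable to wavelet/Contourlet fusion
```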

Step 505: perform fusion on the images PZ1_CH and PZ2_CH, or on PZ1_CH_XIN and PZ2_CH_XIN obtained from step 504, labeling the fusion result Fusion;

Step 506: judge whether size trimming was performed; if step 504 was executed, step 507 is also required, otherwise go directly to step 508;

Step 507: process the fused image Fusion by extracting, starting from position (0,0), only the part whose size equals that of PZ1_CH, and assign it to the corrected Fusion, whose size equals that of PZ1_CH;
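
The sketch below walks through steps 505-507 on padded placeholders. Plain pixel averaging is a deliberately simple stand-in for whichever wavelet, Contourlet or other fusion rule is actually selected; the crop back to the size of PZ1_CH mirrors step 507.

```python
import numpy as np

h, w = 400, 200                                   # size of the original overlap
PZ1_CH = np.full((h, w), 100, np.uint8)
PZ2_CH = np.full((h, w), 200, np.uint8)
N = 512                                           # padded size from steps 503-504
PZ1_CH_XIN = np.zeros((N, N), np.uint8); PZ1_CH_XIN[:h, :w] = PZ1_CH
PZ2_CH_XIN = np.zeros((N, N), np.uint8); PZ2_CH_XIN[:h, :w] = PZ2_CH

# Step 505 with a simple stand-in rule (pixel averaging); the patent leaves the
# actual rule to the selected wavelet, Contourlet or other fusion algorithm.
Fusion = ((PZ1_CH_XIN.astype(np.uint16) + PZ2_CH_XIN) // 2).astype(np.uint8)

# Steps 506-507: size trimming was used, so keep only the part starting at
# (0, 0) whose size equals PZ1_CH; that cropped array is the corrected Fusion.
Fusion = Fusion[:h, :w]
print(Fusion.shape, int(Fusion[0, 0]))   # (400, 200) 150
```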

Step 508: end preprocessing and fusing images PZ1_CH and PZ2_CH according to the different fusion algorithms.

As shown in Fig. 6,

said step 105 comprises the following steps:

Step 601: start seamlessly stitching the non-overlapping image regions;

Step 602: scan the PZ1_PD image row by row and column by column; when a pixel value in PZ1_PD equals 0, replace the pixel value at the same position in Fusion with the pixel value of img2 at that position;

Step 603: scan the PZ2_PD image row by row and column by column; when a pixel value in PZ2_PD equals 0, replace the pixel value at the same position in Fusion with the pixel value of img1 at that position;
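
A sketch of steps 602-603 on placeholder canvases. The patent indexes img1 and img2 directly; here the canvas-aligned copies PZ1 and PZ2 are used, on the assumption that they hold the same pixel values at those canvas positions.

```python
import numpy as np

canvas_h, canvas_w = 400, 600
PZ1 = np.zeros((canvas_h, canvas_w), np.uint8); PZ1[:, :400] = 100    # placed img1
PZ2 = np.zeros((canvas_h, canvas_w), np.uint8); PZ2[:, 200:] = 200    # warped img2
PZ1_PD = np.zeros((canvas_h, canvas_w), np.uint8); PZ1_PD[:, :400] = 255
PZ2_PD = np.zeros((canvas_h, canvas_w), np.uint8); PZ2_PD[:, 200:] = 255
Fusion = np.zeros((canvas_h, canvas_w), np.uint8); Fusion[:, 200:400] = 150

# Steps 602-603: where PZ1_PD is 0 only img2 covers the canvas, so copy its
# pixels there (taken from the canvas-aligned copy PZ2); where PZ2_PD is 0,
# copy the pixels of the placed img1.  The overlap keeps the fused values.
Fusion[PZ1_PD == 0] = PZ2[PZ1_PD == 0]
Fusion[PZ2_PD == 0] = PZ1[PZ2_PD == 0]
print(np.unique(Fusion))   # [100 150 200]: img1 area, fused overlap, img2 area
```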

Step 604: end seamlessly stitching the non-overlapping image regions.

Parts of this embodiment that are not described in detail are well-known common means in the art and are not described here one by one.

Claims (6)

1. A partial image fusion processing method based on an overlapping region, characterized by comprising the following steps:
Step 101: start the partial image fusion processing method based on the overlapping region;
Step 102: import two images with a partially overlapping region, labeled img1 and img2;
Step 103: select a registration algorithm, obtain the relevant parameters and decision images, and perform the corresponding processing;
Step 104: select a fusion algorithm, and locate, extract and process the overlapping region according to the result of step 103 and the selected fusion algorithm, to achieve fusion;
Step 105: seamlessly stitch the non-overlapping image regions;
Step 106: end the partial image fusion processing method based on the overlapping region.
2. The partial image fusion processing method based on an overlapping region according to claim 1, characterized in that said step 103 comprises the following steps:
Step 201: start selecting the registration algorithm;
Step 202: extract feature points from the two read-in images, the point sets being labeled P1 and P2 respectively;
Step 203: use feature descriptors to form coarsely matched feature-point pairs, label the result Q1, and obtain a diagram of the coarse matching result of the images, labeled PZ_CD;
Step 204: apply RANSAC to Q1 to obtain finely matched point pairs, label the result Q2, and obtain a diagram of the fine matching result of the images, labeled PZ_JD;
Step 205: apply the least-squares method to Q2 to obtain the transformation matrix H that registers the two images, yielding the translation, scaling and rotation parameters;
Step 206: from the sizes of the two read-in images and the transformation matrix H obtained in step 205, compute the size of the smallest canvas that holds the two images after fusion and stitching, create six blackboard images of that size, PZ1, PZ1_PD, PZ1_CH and PZ2, PZ2_PD, PZ2_CH, and initialize them to all zeros; at the same time obtain the offset at which img1 is placed in that canvas, labeled DX, DY;
Step 207: according to DX, DY obtained in step 206 and the transformation matrix H, place img1 in images PZ1 and PZ1_PD and img2 in images PZ2 and PZ2_PD, and process PZ1_PD and PZ2_PD;
Step 208: end selecting the registration algorithm.
3. The partial image fusion processing method based on an overlapping region according to claim 2, characterized in that said step 207 comprises the following steps:
Step 301: start placing img1 and img2 and processing PZ1_PD and PZ2_PD;
Step 302: place img1 in images PZ1 and PZ1_PD according to DX, DY obtained in step 206;
Step 303: scan PZ1_PD row by row and column by column, set all values of the region of PZ1_PD that holds img1 to 255 and all other values to 0;
Step 304: replace the horizontal offset and vertical offset in the transformation matrix H obtained in step 205 with DX and DY respectively, obtaining a new transformation matrix H_XIN;
Step 305: process img2 with the new matrix H_XIN, and place the transformed img2 in PZ2 and PZ2_PD;
Step 306: scan PZ2_PD row by row and column by column, set all values of the region of PZ2_PD that holds img2 to 255 and all other values to 0;
Step 307: finish placing img1 and img2 and processing PZ1_PD and PZ2_PD.
4. The partial image fusion processing method based on an overlapping region according to claim 1, characterized in that said step 104 comprises the following steps:
Step 401: start selecting the fusion algorithm;
Step 402: scan the image PZ1 row by row and column by column; if PZ1_PD and PZ2_PD are both 255 at a position, that position belongs to the overlapping region of images PZ1 and PZ2, and the pixel value at that position is stored at the corresponding position of image PZ1_CH;
the image PZ2 is processed in the same way, with the result stored in PZ2_CH;
Step 403: preprocess the images PZ1_CH and PZ2_CH according to the selected fusion algorithm and complete the fusion process, storing the result in Fusion;
Step 404: end selecting the fusion algorithm.
5. The partial image fusion processing method based on an overlapping region according to claim 4, characterized in that said step 403 comprises the following steps:
Step 501: start preprocessing and fusing images PZ1_CH and PZ2_CH according to the different fusion algorithms;
Step 502: select a fusion method; if the selected fusion method is neither wavelet fusion nor Contourlet fusion, go directly to step 505, otherwise go to step 503;
Step 503: judge whether the size of images PZ1_CH and PZ2_CH is N*N with N an integer power of 2;
if not, go to step 504, otherwise go to step 505;
Step 504: trim the size of images PZ1_CH and PZ2_CH to the smallest integer power of 2 greater than the maximum of the length and width of PZ1_CH and PZ2_CH;
with this new size, generate blackboard images PZ1_CH_XIN and PZ2_CH_XIN of that size initialized to all zeros, and place PZ1_CH and PZ2_CH in PZ1_CH_XIN and PZ2_CH_XIN respectively, starting from position (0,0);
Step 505: perform fusion on the images PZ1_CH and PZ2_CH, or on PZ1_CH_XIN and PZ2_CH_XIN obtained from step 504, labeling the fusion result Fusion;
Step 506: judge whether size trimming was performed; if step 504 was executed, step 507 is also required, otherwise go directly to step 508;
Step 507: process the fused image Fusion by extracting, starting from position (0,0), only the part whose size equals that of PZ1_CH, and assign it to the corrected Fusion whose size equals that of PZ1_CH;
Step 508: end preprocessing and fusing images PZ1_CH and PZ2_CH according to the different fusion algorithms.
6. The partial image fusion processing method based on an overlapping region according to claim 1, characterized in that said step 105 comprises the following steps:
Step 601: start seamlessly stitching the non-overlapping image regions;
Step 602: scan the PZ1_PD image row by row and column by column; when a pixel value in PZ1_PD equals 0, replace the pixel value at the same position in Fusion with the pixel value of img2 at that position;
Step 603: scan the PZ2_PD image row by row and column by column; when a pixel value in PZ2_PD equals 0, replace the pixel value at the same position in Fusion with the pixel value of img1 at that position;
Step 604: end seamlessly stitching the non-overlapping image regions.
CN201310234335.5A 2013-06-14 2013-06-14 Partial image fusion processing method based on overlapping region Expired - Fee Related CN103279923B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310234335.5A CN103279923B (en) 2013-06-14 2013-06-14 Partial image fusion processing method based on overlapping region

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310234335.5A CN103279923B (en) 2013-06-14 2013-06-14 Partial image fusion processing method based on overlapping region

Publications (2)

Publication Number Publication Date
CN103279923A true CN103279923A (en) 2013-09-04
CN103279923B CN103279923B (en) 2015-12-23

Family

ID=49062430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310234335.5A Expired - Fee Related CN103279923B (en) 2013-06-14 2013-06-14 Partial image fusion processing method based on overlapping region

Country Status (1)

Country Link
CN (1) CN103279923B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104252705A (en) * 2014-09-30 2014-12-31 中安消技术有限公司 Method and device for splicing images
CN106991645A (en) * 2017-03-22 2017-07-28 腾讯科技(深圳)有限公司 Image split-joint method and device
CN107085842A (en) * 2017-04-01 2017-08-22 上海讯陌通讯技术有限公司 The real-time antidote and system of self study multiway images fusion
CN107710276A (en) * 2015-09-30 2018-02-16 高途乐公司 The unified image processing of the combination image in the region based on spatially co-located
CN108230281A (en) * 2016-12-30 2018-06-29 北京市商汤科技开发有限公司 Remote sensing image processing method, device and electronic equipment
CN108347540A (en) * 2017-01-23 2018-07-31 精工爱普生株式会社 The production method of scanner, scanner program and scan data
CN108460724A (en) * 2018-02-05 2018-08-28 湖北工业大学 The Adaptive image fusion method and system differentiated based on mahalanobis distance
CN113810665A (en) * 2021-09-17 2021-12-17 北京百度网讯科技有限公司 Video processing method, device, equipment, storage medium and product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101221661A (en) * 2008-01-29 2008-07-16 深圳市迅雷网络技术有限公司 A method and device for image registration
CN101556692A (en) * 2008-04-09 2009-10-14 西安盛泽电子有限公司 Image mosaic method based on neighborhood Zernike pseudo-matrix of characteristic points
CN101840570A (en) * 2010-04-16 2010-09-22 广东工业大学 Fast image splicing method
US20110002544A1 (en) * 2009-07-01 2011-01-06 Fujifilm Corporation Image synthesizer and image synthesizing method
CN102968777A (en) * 2012-11-20 2013-03-13 河海大学 Image stitching method based on overlapping region scale-invariant feather transform (SIFT) feature points

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101221661A (en) * 2008-01-29 2008-07-16 深圳市迅雷网络技术有限公司 A method and device for image registration
CN101556692A (en) * 2008-04-09 2009-10-14 西安盛泽电子有限公司 Image mosaic method based on neighborhood Zernike pseudo-matrix of characteristic points
US20110002544A1 (en) * 2009-07-01 2011-01-06 Fujifilm Corporation Image synthesizer and image synthesizing method
CN101840570A (en) * 2010-04-16 2010-09-22 广东工业大学 Fast image splicing method
CN102968777A (en) * 2012-11-20 2013-03-13 河海大学 Image stitching method based on overlapping region scale-invariant feather transform (SIFT) feature points

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李琳娜: "Research on Image Stitching Technology Based on Feature Matching", China Master's Theses Full-text Database, Information Science and Technology Series *
赵万金: "Research and Application of Automatic Image Stitching Technology", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104252705B (en) * 2014-09-30 2017-05-17 中安消技术有限公司 Method and device for splicing images
CN104252705A (en) * 2014-09-30 2014-12-31 中安消技术有限公司 Method and device for splicing images
CN107710276A (en) * 2015-09-30 2018-02-16 高途乐公司 The unified image processing of the combination image in the region based on spatially co-located
CN107710276B (en) * 2015-09-30 2022-05-13 高途乐公司 Method, system, and non-transitory computer readable medium for image processing
CN108230281B (en) * 2016-12-30 2020-11-20 北京市商汤科技开发有限公司 Remote sensing image processing method and device and electronic equipment
CN108230281A (en) * 2016-12-30 2018-06-29 北京市商汤科技开发有限公司 Remote sensing image processing method, device and electronic equipment
CN108347540A (en) * 2017-01-23 2018-07-31 精工爱普生株式会社 The production method of scanner, scanner program and scan data
CN106991645A (en) * 2017-03-22 2017-07-28 腾讯科技(深圳)有限公司 Image split-joint method and device
CN106991645B (en) * 2017-03-22 2018-09-28 腾讯科技(深圳)有限公司 Image split-joint method and device
US10878537B2 (en) 2017-03-22 2020-12-29 Tencent Technology (Shenzhen) Company Limited Image splicing method, apparatus, terminal, and storage medium
CN107085842A (en) * 2017-04-01 2017-08-22 上海讯陌通讯技术有限公司 The real-time antidote and system of self study multiway images fusion
CN107085842B (en) * 2017-04-01 2020-04-10 上海讯陌通讯技术有限公司 Self-learning multipath image fusion real-time correction method and system
CN108460724A (en) * 2018-02-05 2018-08-28 湖北工业大学 The Adaptive image fusion method and system differentiated based on mahalanobis distance
CN113810665A (en) * 2021-09-17 2021-12-17 北京百度网讯科技有限公司 Video processing method, device, equipment, storage medium and product

Also Published As

Publication number Publication date
CN103279923B (en) 2015-12-23

Similar Documents

Publication Publication Date Title
CN103279923B (en) Partial image fusion processing method based on overlapping region
Ren et al. CorrI2P: Deep image-to-point cloud registration via dense correspondence
CN105100640B (en) A kind of local registration parallel video joining method and system
CN112862685A (en) Image stitching processing method and device and electronic system
US9349152B2 (en) Image identifiers and methods and systems of presenting image identifiers
CN104036244B (en) Checkerboard pattern corner point detecting method and device applicable to low-quality images
CN104680516A (en) Acquisition method for high-quality feature matching set of images
CN104794701A (en) Image splicing device, method and image processing equipment
CN112085708B (en) Method and equipment for detecting defects of straight line edges in outer contour of product
CN109360145A (en) A method for stitching infrared thermal images based on eddy current pulses
CN110276279A (en) A Text Detection Method of Arbitrarily Shaped Scenes Based on Image Segmentation
CN107451985A (en) A kind of joining method of the micro- sequence image of mouse tongue section
TWI664421B (en) Method,apparatus ,and computer software product for image processing
TW405059B (en) Method for combining the computer models of two surfaces in 3-D space
CN111861866A (en) A panorama reconstruction method of substation equipment inspection image
Zhang et al. Automatic crack inspection for concrete bridge bottom surfaces based on machine vision
JP2020067308A (en) Image processing method and image processing device
Li et al. Automated rust-defect detection of a steel bridge using aerial multispectral imagery
CN104966283A (en) Imaging layered registering method
US9305235B1 (en) System and method for identifying and locating instances of a shape under large variations in linear degrees of freedom and/or stroke widths
CN117853848B (en) A method and processor for constructing a binocular vision RGB-IR image pair data set
CN100574360C (en) A kind of preprocess method that obtains difference image
JP5142836B2 (en) Scanning image element alignment method
Shahsavarani et al. Multi-modal image processing pipeline for NDE of structures and industrial assets
CN104992432B (en) Multimode image registering method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20151223

Termination date: 20160614