CN103279923A - Partial image fusion processing method based on overlapped region - Google Patents

Partial image fusion processing method based on overlapped region

Info

Publication number
CN103279923A
Application number
CN201310234335.5A
Authority
CN (China)
Legal status
Granted
Other languages
Chinese (zh)
Other versions
CN103279923B
Inventor
刘贵喜
卢海鹏
聂婷
刘荣荣
董亮
张菁超
常露
王明
Current Assignee
Xidian University
Original Assignee
Xidian University
Application filed by Xidian University
Priority to CN201310234335.5A
Publication of CN103279923A
Application granted
Publication of CN103279923B
Current legal status
Expired - Fee Related

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a partial image fusion processing method based on an overlapped region. The overlapped region of the input images is registered, located and extracted, and is then processed according to the selected fusion algorithm. The method achieves fusion of two images that share only an overlapped part, removes the common restriction that traditional fusion algorithms can only be applied to whole images of identical size that are fully registered, and improves the adaptability and practicality of fusion algorithms and related software.

Description

Partial image fusion processing method based on overlapped region
Technical field
The present invention relates to image fusion and its applications, and in particular to a partial image fusion processing method based on an overlapped region.
Background art
Image fusion is an important branch of multi-sensor data fusion. Image fusion combines multiple images acquired by different sensors according to a chosen algorithm in order to obtain a new image that satisfies a given requirement.
At present, the widely used fusion algorithms mainly include simple fusion algorithms, component substitution algorithms, the Brovey algorithm, high-pass filtering (HPF) fusion algorithms, and multi-scale, multi-resolution analysis fusion algorithms. However, the vast majority of existing research and algorithm implementations operate under certain preconditions: 1) the source images participating in the fusion must be of identical size and fully registered; 2) the fusion is global, that is, the content of the two images must be completely consistent and the fused region is the entire image; 3) for the wavelet and Contourlet fusion algorithms, in addition to the above conditions, the size of the images participating in the fusion must be N*N with N an integer power of 2. Real applications often do not satisfy these preconditions, which limits the applicability of image fusion algorithms and related software. A minimal sketch of a conventional global fusion under these preconditions is given below.
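To make the restriction concrete, the following is a minimal sketch (in Python with NumPy, an illustrative choice not taken from the patent) of such a conventional global fusion: a plain pixel-wise average that only works when the two inputs are already the same size and fully registered.

```python
# Minimal sketch of a conventional "global" fusion rule: a pixel-wise average.
# It assumes exactly the preconditions discussed above: identical size and full
# registration. The averaging rule itself is illustrative; HPF, wavelet or other
# rules could be substituted.
import numpy as np

def simple_average_fusion(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    assert a.shape == b.shape, "traditional fusion assumes identical size and full registration"
    return ((a.astype(np.float32) + b.astype(np.float32)) / 2.0).astype(a.dtype)
```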
In practical applications, the images to be fused often have different sizes, are not registered (there is usually translation, rotation and scale change between them), and overlap only in a partial region. For fusion problems of this kind, traditional fusion processing methods cannot perform the corresponding processing at all, and the existing literature and related materials do not directly provide a corresponding solution either.
Summary of the invention
The purpose of the invention is to provide a partial image fusion processing method based on the overlapped region. The invention registers the input images, locates and extracts the overlapped region, processes and fuses the overlapped region accordingly, and seamlessly stitches the non-overlapped image regions, thereby realizing partial image fusion based on the overlapped region. The invention achieves a better image processing effect and improves the adaptability and practicality of fusion algorithms. Its key steps are registering, locating and extracting the overlapped region, and processing the overlapped region according to the selected fusion algorithm.
The technical scheme of the invention is a partial image fusion processing method based on the overlapped region, characterized by comprising the following steps:
Step 101: start the partial image fusion processing method based on the overlapped region;
Step 102: read in two images with an overlapped region, labeled img1 and img2;
Step 103: select a registration algorithm, obtain the relevant parameters and the decision images, and carry out the corresponding processing;
Step 104: select a fusion algorithm; according to the result of step 103 and the selected fusion algorithm, locate, extract and process the overlapped region, and carry out the fusion;
Step 105: seamlessly stitch the non-overlapped image regions;
Step 106: end the partial image fusion processing method based on the overlapped region.
Said step 103 comprises the following steps (a code sketch of these registration steps follows the list):
Step 201: start selecting the registration algorithm;
Step 202: extract feature points from each of the two input images; the point sets are labeled P1 and P2 respectively;
Step 203: use feature descriptors to form coarse matching point pairs between the feature points; the resulting queue is Q1, and a schematic image of the coarse matching result is obtained, labeled PZ_CD;
Step 204: apply RANSAC to Q1 to obtain fine matching point pairs; the resulting queue is Q2, and a schematic image of the fine matching result is obtained, labeled PZ_JD;
Step 205: apply least-squares processing to Q2 to obtain the transformation matrix H that registers the two images, and obtain the translation, scaling and rotation parameters;
Step 206: according to the sizes of the two input images and the transformation matrix H obtained in step 205, compute the size of the smallest canvas that can hold the two fused images after stitching, create six blank (all-zero) images of this size, PZ1, PZ1_PD, PZ1_CH and PZ2, PZ2_PD, PZ2_CH, and initialize them to all zeros; at the same time obtain the offsets for placing img1 in this canvas, labeled DX and DY;
Step 207: according to DX, DY obtained in step 206 and the transformation matrix H, place img1 and img2 into the images PZ1, PZ1_PD and PZ2, PZ2_PD respectively, and process PZ1_PD and PZ2_PD;
Step 208: end selecting the registration algorithm.
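A minimal sketch of steps 202 through 206 is given below. It assumes OpenCV's ORB detector and a brute-force Hamming matcher in place of the unspecified feature descriptor, and it delegates both the RANSAC step and the least-squares refinement to cv2.findHomography; the function name and return layout are illustrative, not prescribed by the patent. H maps img2 coordinates into img1 coordinates, the RANSAC inlier mask plays the role of the fine pairs Q2, and the returned canvas size and offsets play the roles of the minimum canvas and the placement offsets DX, DY of step 206.

```python
# Sketch of steps 202-206: feature points, coarse matches Q1, RANSAC inliers Q2,
# transformation matrix H, minimum canvas size and img1 placement offsets (DX, DY).
import cv2
import numpy as np

def register(img1, img2):
    orb = cv2.ORB_create()                                      # step 202: point sets P1, P2
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    q1 = bf.match(des1, des2)                                   # step 203: coarse pairs Q1
    src = np.float32([kp2[m.trainIdx].pt for m in q1]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in q1]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # steps 204-205: Q2 (inliers), H

    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]
    corners2 = np.float32([[0, 0], [w2, 0], [w2, h2], [0, h2]]).reshape(-1, 1, 2)
    warped2 = cv2.perspectiveTransform(corners2, H).reshape(-1, 2)
    xs = np.concatenate([warped2[:, 0], [0.0, w1]])
    ys = np.concatenate([warped2[:, 1], [0.0, h1]])
    dx = int(np.ceil(max(0.0, -xs.min())))                      # step 206: offsets DX, DY
    dy = int(np.ceil(max(0.0, -ys.min())))
    width = int(np.ceil(xs.max())) + dx                         # minimum canvas holding both images
    height = int(np.ceil(ys.max())) + dy
    return H, (height, width), (dx, dy)
```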
Said step 207 comprises the following steps (a code sketch of the placement follows the list):
Step 301: start placing img1 and img2 and processing PZ1_PD and PZ2_PD;
Step 302: according to DX, DY obtained in step 206, place img1 into the images PZ1 and PZ1_PD respectively;
Step 303: scan PZ1_PD row by row; set all values in the region of PZ1_PD occupied by img1 to 255, and set all other values to 0;
Step 304: use DX and DY respectively to replace the horizontal and vertical offsets in the transformation matrix H obtained in step 205, obtaining the new transformation matrix H_XIN;
Step 305: transform img2 with the new transformation matrix H_XIN, and place the transformed img2 into PZ2 and PZ2_PD respectively;
Step 306: scan PZ2_PD row by row; set all values in the region of PZ2_PD occupied by img2 to 255, and set all other values to 0;
Step 307: end placing img1 and img2 and processing PZ1_PD and PZ2_PD.
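A minimal sketch of steps 302 through 306 follows (grayscale inputs are assumed for brevity): img1 is copied into the canvas at offset (DX, DY), img2 is warped into the same canvas, and the 0/255 decision masks PZ1_PD and PZ2_PD are built. One point is hedged: the patent describes obtaining H_XIN by replacing the translation entries of H with DX and DY, whereas this sketch composes a translation with H, which is one reasonable reading of the same operation.

```python
# Sketch of steps 302-306: place img1 with offsets (DX, DY), warp img2 with a
# canvas-frame transform (the role of H_XIN), and build the decision masks.
import cv2
import numpy as np

def place_images(img1, img2, H, canvas_shape, offsets):
    height, width = canvas_shape
    dx, dy = offsets
    pz1 = np.zeros((height, width), img1.dtype)
    pz1_pd = np.zeros((height, width), np.uint8)

    h1, w1 = img1.shape[:2]
    pz1[dy:dy + h1, dx:dx + w1] = img1                           # step 302: place img1
    pz1_pd[dy:dy + h1, dx:dx + w1] = 255                         # step 303: img1 footprint mask

    T = np.array([[1, 0, dx], [0, 1, dy], [0, 0, 1]], np.float64)
    H_xin = T @ H                                                # step 304 (see note above)
    pz2 = cv2.warpPerspective(img2, H_xin, (width, height))      # step 305: warp img2 into canvas
    pz2_pd = cv2.warpPerspective(np.full(img2.shape[:2], 255, np.uint8), H_xin,
                                 (width, height), flags=cv2.INTER_NEAREST)  # step 306: img2 mask
    return pz1, pz1_pd, pz2, pz2_pd
```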
Said step 104 comprises the following steps (a code sketch of the overlap extraction follows the list):
Step 401: start selecting the fusion algorithm;
Step 402: scan the PZ1 image row by row; if PZ1_PD and PZ2_PD are both 255 at a position, that position belongs to the overlapped region of PZ1 and PZ2, and the pixel value at that position is stored at the corresponding position of the image PZ1_CH; similar processing is performed for the image PZ2, and the result is stored in PZ2_CH;
Step 403: according to the selected fusion algorithm, carry out the corresponding preprocessing of the images PZ1_CH and PZ2_CH and complete the fusion process; the result is stored in Fusion;
Step 404: end selecting the fusion algorithm.
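A minimal sketch of the overlap extraction in step 402 is shown below; the pixel-wise scan is expressed as a vectorized mask operation, which has the same effect.

```python
# Sketch of step 402: keep only the pixels where both decision masks are 255,
# i.e. the overlapped region of the two placed images.
import numpy as np

def extract_overlap(pz1, pz2, pz1_pd, pz2_pd):
    overlap = (pz1_pd == 255) & (pz2_pd == 255)
    pz1_ch = np.zeros_like(pz1)
    pz2_ch = np.zeros_like(pz2)
    pz1_ch[overlap] = pz1[overlap]
    pz2_ch[overlap] = pz2[overlap]
    return pz1_ch, pz2_ch
```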
Said step 403 comprises the following steps (a code sketch of the size correction follows the list):
Step 501: start preprocessing and fusing the images PZ1_CH and PZ2_CH according to the selected fusion algorithm;
Step 502: select the fusion method; if the selected fusion method is neither the wavelet fusion method nor the Contourlet fusion method, go directly to step 505, otherwise go to step 503;
Step 503: judge whether the sizes of the images PZ1_CH and PZ2_CH are N*N with N an integer power of 2; if not, go to step 504, otherwise go to step 505;
Step 504: correct the sizes of the images PZ1_CH and PZ2_CH; the new size is the smallest integer power of 2 that is greater than the maximum of the width and height of PZ1_CH and PZ2_CH. Create blank (all-zero) images PZ1_CH_XIN and PZ2_CH_XIN of this new size, and place PZ1_CH and PZ2_CH into PZ1_CH_XIN and PZ2_CH_XIN respectively, starting from the (0, 0) position;
Step 505: carry out fusion processing on the images PZ1_CH and PZ2_CH, or on PZ1_CH_XIN and PZ2_CH_XIN obtained in step 504; the fusion result is labeled Fusion;
Step 506: judge whether size correction was performed; if step 504 was executed, go to step 507, otherwise go directly to step 508;
Step 507: process the fused image Fusion: extract only the part starting from the (0, 0) position whose size equals that of PZ1_CH, and assign it to Fusion after correcting it back to the size of PZ1_CH;
Step 508: end preprocessing and fusing the images PZ1_CH and PZ2_CH according to the selected fusion algorithm.
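A minimal sketch of the size correction in steps 502 through 507: for wavelet or Contourlet fusion the inputs are padded from the (0, 0) corner onto a square power-of-two canvas, fused, and the result is cropped back. The `fuse` argument is a stand-in for whichever fusion rule was selected in step 502; it is not defined by the patent.

```python
# Sketch of steps 502-507: pad PZ1_CH / PZ2_CH to an N*N canvas with N a power of
# two (only when the selected method needs it), fuse, then crop Fusion back.
import numpy as np

def fuse_with_size_correction(pz1_ch, pz2_ch, fuse, needs_power_of_two=True):
    h, w = pz1_ch.shape[:2]
    if not needs_power_of_two:                                 # step 502: non-wavelet methods
        return fuse(pz1_ch, pz2_ch)                            # step 505
    if h == w and h & (h - 1) == 0:                            # step 503: already N*N, N = 2^k
        return fuse(pz1_ch, pz2_ch)                            # step 505
    n = 1
    while n < max(h, w):                                       # step 504: smallest power of two
        n *= 2                                                 # covering the larger side
    a = np.zeros((n, n), pz1_ch.dtype)                         # PZ1_CH_XIN, PZ2_CH_XIN
    b = np.zeros((n, n), pz2_ch.dtype)
    a[:h, :w] = pz1_ch                                         # placed from the (0, 0) position
    b[:h, :w] = pz2_ch
    fused = fuse(a, b)                                         # step 505
    return fused[:h, :w]                                       # step 507: crop back to PZ1_CH size
```

For example, `fuse` could be the simple average sketched in the background section or the wavelet rule sketched later in the embodiment.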
Said step 105 comprises the following steps (a code sketch of the stitching follows the list):
Step 601: start seamlessly stitching the non-overlapped image regions;
Step 602: scan the PZ1_PD image row by row; wherever a pixel value in PZ1_PD equals 0, replace the pixel value at the same position in Fusion with the pixel value of img2 at that position;
Step 603: scan the PZ2_PD image row by row; wherever a pixel value in PZ2_PD equals 0, replace the pixel value at the same position in Fusion with the pixel value of img1 at that position;
Step 604: end seamlessly stitching the non-overlapped image regions.
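A minimal sketch of steps 602 and 603: wherever one decision mask is 0, the fused canvas takes its pixel from the other source. The patent text refers to taking the values from img1 and img2; since the replacement happens at canvas positions, the sketch uses the placed canvas-frame copies PZ1 and PZ2, which hold the same pixel values at those positions.

```python
# Sketch of steps 602-603: fill the non-overlapped regions of the fused canvas
# from whichever source image actually covers them.
import numpy as np

def stitch_non_overlap(fusion, pz1, pz2, pz1_pd, pz2_pd):
    out = fusion.copy()
    out[pz1_pd == 0] = pz2[pz1_pd == 0]   # step 602: img1 absent here, take the other image
    out[pz2_pd == 0] = pz1[pz2_pd == 0]   # step 603: img2 absent here, take the other image
    return out
```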
The advantages of the invention are that it removes the restriction that traditional fusion algorithms can only perform global fusion of whole images of identical size that are fully registered, and that it realizes: 1) fusion processing of images of different sizes that are not registered (there is usually translation, rotation and scale change between the images) and that share only an overlapped region; 2) wavelet and Contourlet fusion processing when the overlapped-region image is irregular (its size is not N*N with N an integer power of 2), with a good fusion effect; 3) location, extraction and fusion of the overlapped region, and seamless stitching of the non-overlapped image regions.
The invention breaks through the harsh preconditions of conventional fusion processing methods, lowers the requirements on the images participating in the fusion, and has good adaptability and practicality.
Description of drawings
Fig. 1 is the main flow chart of the partial image fusion processing method based on the overlapped region;
Fig. 2 is the flow chart of selecting the registration algorithm, obtaining the relevant parameters and the decision images, and carrying out the corresponding processing;
Fig. 3 is the flow chart of placing img1 and img2 and processing PZ1_PD and PZ2_PD;
Fig. 4 is the flow chart of selecting the fusion algorithm, locating, extracting and processing the overlapped region according to the result of step 103 and the selected fusion algorithm, and carrying out the fusion;
Fig. 5 is the flow chart of preprocessing and fusing the images PZ1_CH and PZ2_CH according to the selected fusion algorithm;
Fig. 6 is the flow chart of seamlessly stitching the non-overlapped image regions.
Embodiment
The key steps of the partial image fusion processing method based on the overlapped region are registering, locating and extracting the overlapped region, and processing the overlapped region according to the selected fusion algorithm. Because the two images share only an overlapped region, the images must first be registered to obtain the relevant parameters (translation, scaling parameters, rotation parameters and so on), and the overlapped region is then located and extracted. The extracted overlapped region may be a regular image, but it may also be an image of arbitrary size and arbitrary shape. In addition, some fusion algorithms place special requirements on the image size, so the overlapped-region image must be processed appropriately before the local-region fusion can be carried out.
The partial image fusion processing method based on the overlapped region is characterized as follows: first, a registration operation is performed on the two input images with an overlapped region to obtain the relevant parameters and the decision images; the registered images then undergo location and extraction of the overlapped region and fusion (note that for some fusion algorithms the overlapped-region image needs to be processed once more before the fusion); finally, the non-overlapped image regions are seamlessly stitched by means of the decision images. This completes the partial image fusion processing of the two input images with an overlapped region.
As shown in Figure 1, the main flow comprises the following steps:
Step 101: start the partial image fusion processing method based on the overlapped region;
Step 102: read in two images with an overlapped region, labeled img1 and img2;
Step 103: select a registration algorithm, obtain the relevant parameters and the decision images, and carry out the corresponding processing;
Step 104: select a fusion algorithm; according to the result of step 103 and the selected fusion algorithm, locate, extract and process the overlapped region, and carry out the fusion;
Step 105: seamlessly stitch the non-overlapped image regions;
Step 106: end the partial image fusion processing method based on the overlapped region.
As shown in Figure 2, said step 103 comprises the following steps:
Step 201: start selecting the registration algorithm;
Step 202: extract feature points from each of the two input images; the point sets are labeled P1 and P2 respectively;
Step 203: use feature descriptors to form coarse matching point pairs between the feature points; the resulting queue is Q1, and a schematic image of the coarse matching result is obtained, labeled PZ_CD;
Step 204: apply RANSAC to Q1 to obtain fine matching point pairs; the resulting queue is Q2, and a schematic image of the fine matching result is obtained, labeled PZ_JD;
Step 205: apply least-squares processing to Q2 to obtain the transformation matrix H that registers the two images, and obtain the translation, scaling and rotation parameters;
Step 206: according to the sizes of the two input images and the transformation matrix H obtained in step 205, compute the size of the smallest canvas that can hold the two fused images after stitching, create six blank (all-zero) images of this size, PZ1, PZ1_PD, PZ1_CH and PZ2, PZ2_PD, PZ2_CH, and initialize them to all zeros; at the same time obtain the offsets for placing img1 in this canvas, labeled DX and DY;
Step 207: according to DX, DY obtained in step 206 and the transformation matrix H, place img1 and img2 into the images PZ1, PZ1_PD and PZ2, PZ2_PD respectively, and process PZ1_PD and PZ2_PD;
Step 208: end selecting the registration algorithm.
As shown in Figure 3, said step 207 comprises the following steps:
Step 301: start placing img1 and img2 and processing PZ1_PD and PZ2_PD;
Step 302: according to DX, DY obtained in step 206, place img1 into the images PZ1 and PZ1_PD respectively;
Step 303: scan PZ1_PD row by row; set all values in the region of PZ1_PD occupied by img1 to 255, and set all other values to 0;
Step 304: use DX and DY respectively to replace the horizontal and vertical offsets in the transformation matrix H obtained in step 205, obtaining the new transformation matrix H_XIN;
Step 305: transform img2 with the new transformation matrix H_XIN, and place the transformed img2 into PZ2 and PZ2_PD respectively;
Step 306: scan PZ2_PD row by row; set all values in the region of PZ2_PD occupied by img2 to 255, and set all other values to 0;
Step 307: end placing img1 and img2 and processing PZ1_PD and PZ2_PD.
As shown in Figure 4, said step 104 comprises the following steps:
Step 401: start selecting the fusion algorithm;
Step 402: scan the PZ1 image row by row; if PZ1_PD and PZ2_PD are both 255 at a position, that position belongs to the overlapped region of PZ1 and PZ2, and the pixel value at that position is stored at the corresponding position of the image PZ1_CH; similar processing is performed for the image PZ2, and the result is stored in PZ2_CH;
Step 403: according to the selected fusion algorithm, carry out the corresponding preprocessing of the images PZ1_CH and PZ2_CH and complete the fusion process; the result is stored in Fusion;
Step 404: end selecting the fusion algorithm.
As shown in Figure 5, said step 403 comprises the following steps (a concrete example of the fusion rule in step 505 is sketched after this list):
Step 501: start preprocessing and fusing the images PZ1_CH and PZ2_CH according to the selected fusion algorithm;
Step 502: select the fusion method; if the selected fusion method is neither the wavelet fusion method nor the Contourlet fusion method, go directly to step 505, otherwise go to step 503;
Step 503: judge whether the sizes of the images PZ1_CH and PZ2_CH are N*N with N an integer power of 2; if not, go to step 504, otherwise go to step 505;
Step 504: correct the sizes of the images PZ1_CH and PZ2_CH; the new size is the smallest integer power of 2 that is greater than the maximum of the width and height of PZ1_CH and PZ2_CH. Create blank (all-zero) images PZ1_CH_XIN and PZ2_CH_XIN of this new size, and place PZ1_CH and PZ2_CH into PZ1_CH_XIN and PZ2_CH_XIN respectively, starting from the (0, 0) position;
Step 505: carry out fusion processing on the images PZ1_CH and PZ2_CH, or on PZ1_CH_XIN and PZ2_CH_XIN obtained in step 504; the fusion result is labeled Fusion;
Step 506: judge whether size correction was performed; if step 504 was executed, go to step 507, otherwise go directly to step 508;
Step 507: process the fused image Fusion: extract only the part starting from the (0, 0) position whose size equals that of PZ1_CH, and assign it to Fusion after correcting it back to the size of PZ1_CH;
Step 508: end preprocessing and fusing the images PZ1_CH and PZ2_CH according to the selected fusion algorithm.
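As one concrete example of the fusion rule applied in step 505, the following sketch shows a simple wavelet-domain rule (average the approximation band, keep the larger-magnitude detail coefficients). It assumes the PyWavelets package and equal-sized 8-bit inputs; the specific wavelet, decomposition level and selection rule are illustrative choices, not ones prescribed by the patent.

```python
# Sketch of a simple wavelet fusion rule for step 505: decompose both inputs,
# average the approximation band, take the larger-magnitude detail coefficients,
# and reconstruct.
import numpy as np
import pywt

def wavelet_fusion(a, b, wavelet="db2", level=2):
    ca = pywt.wavedec2(a.astype(np.float32), wavelet, level=level)
    cb = pywt.wavedec2(b.astype(np.float32), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                             # average the approximation band
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):      # per-level detail bands
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip((ha, va, da), (hb, vb, db))))
    out = pywt.waverec2(fused, wavelet)
    # crop any reconstruction padding and clip back to the 8-bit range (inputs assumed uint8)
    return np.clip(out[:a.shape[0], :a.shape[1]], 0, 255).astype(a.dtype)
```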
As shown in Figure 6, said step 105 comprises the following steps:
Step 601: start seamlessly stitching the non-overlapped image regions;
Step 602: scan the PZ1_PD image row by row; wherever a pixel value in PZ1_PD equals 0, replace the pixel value at the same position in Fusion with the pixel value of img2 at that position;
Step 603: scan the PZ2_PD image row by row; wherever a pixel value in PZ2_PD equals 0, replace the pixel value at the same position in Fusion with the pixel value of img1 at that position;
Step 604: end seamlessly stitching the non-overlapped image regions.
The parts of this embodiment that are not described in detail belong to conventional means well known in the art and are not described one by one here.

Claims (6)

1. A partial image fusion processing method based on an overlapped region, characterized by comprising the following steps:
Step 101: start the partial image fusion processing method based on the overlapped region;
Step 102: read in two images with an overlapped region, labeled img1 and img2;
Step 103: select a registration algorithm, obtain the relevant parameters and the decision images, and carry out the corresponding processing;
Step 104: select a fusion algorithm; according to the result of step 103 and the selected fusion algorithm, locate, extract and process the overlapped region, and carry out the fusion;
Step 105: seamlessly stitch the non-overlapped image regions;
Step 106: end the partial image fusion processing method based on the overlapped region.
2. The partial image fusion processing method based on the overlapped region according to claim 1, characterized in that said step 103 comprises the following steps:
Step 201: start selecting the registration algorithm;
Step 202: extract feature points from each of the two input images; the point sets are labeled P1 and P2 respectively;
Step 203: use feature descriptors to form coarse matching point pairs between the feature points; the resulting queue is Q1, and a schematic image of the coarse matching result is obtained, labeled PZ_CD;
Step 204: apply RANSAC to Q1 to obtain fine matching point pairs; the resulting queue is Q2, and a schematic image of the fine matching result is obtained, labeled PZ_JD;
Step 205: apply least-squares processing to Q2 to obtain the transformation matrix H that registers the two images, and obtain the translation, scaling and rotation parameters;
Step 206: according to the sizes of the two input images and the transformation matrix H obtained in step 205, compute the size of the smallest canvas that can hold the two fused images after stitching, create six blank (all-zero) images of this size, PZ1, PZ1_PD, PZ1_CH and PZ2, PZ2_PD, PZ2_CH, and initialize them to all zeros; at the same time obtain the offsets for placing img1 in this canvas, labeled DX and DY;
Step 207: according to DX, DY obtained in step 206 and the transformation matrix H, place img1 and img2 into the images PZ1, PZ1_PD and PZ2, PZ2_PD respectively, and process PZ1_PD and PZ2_PD;
Step 208: end selecting the registration algorithm.
3. The partial image fusion processing method based on the overlapped region according to claim 2, characterized in that said step 207 comprises the following steps:
Step 301: start placing img1 and img2 and processing PZ1_PD and PZ2_PD;
Step 302: according to DX, DY obtained in step 206, place img1 into the images PZ1 and PZ1_PD respectively;
Step 303: scan PZ1_PD row by row; set all values in the region of PZ1_PD occupied by img1 to 255, and set all other values to 0;
Step 304: use DX and DY respectively to replace the horizontal and vertical offsets in the transformation matrix H obtained in step 205, obtaining the new transformation matrix H_XIN;
Step 305: transform img2 with the new transformation matrix H_XIN, and place the transformed img2 into PZ2 and PZ2_PD respectively;
Step 306: scan PZ2_PD row by row; set all values in the region of PZ2_PD occupied by img2 to 255, and set all other values to 0;
Step 307: end placing img1 and img2 and processing PZ1_PD and PZ2_PD.
4. The partial image fusion processing method based on the overlapped region according to claim 1, characterized in that said step 104 comprises the following steps:
Step 401: start selecting the fusion algorithm;
Step 402: scan the PZ1 image row by row; if PZ1_PD and PZ2_PD are both 255 at a position, that position belongs to the overlapped region of PZ1 and PZ2, and the pixel value at that position is stored at the corresponding position of the image PZ1_CH; similar processing is performed for the image PZ2, and the result is stored in PZ2_CH;
Step 403: according to the selected fusion algorithm, carry out the corresponding preprocessing of the images PZ1_CH and PZ2_CH and complete the fusion process; the result is stored in Fusion;
Step 404: end selecting the fusion algorithm.
5. The partial image fusion processing method based on the overlapped region according to claim 4, characterized in that said step 403 comprises the following steps:
Step 501: start preprocessing and fusing the images PZ1_CH and PZ2_CH according to the selected fusion algorithm;
Step 502: select the fusion method; if the selected fusion method is neither the wavelet fusion method nor the Contourlet fusion method, go directly to step 505, otherwise go to step 503;
Step 503: judge whether the sizes of the images PZ1_CH and PZ2_CH are N*N with N an integer power of 2; if not, go to step 504, otherwise go to step 505;
Step 504: correct the sizes of the images PZ1_CH and PZ2_CH; the new size is the smallest integer power of 2 that is greater than the maximum of the width and height of PZ1_CH and PZ2_CH; create blank (all-zero) images PZ1_CH_XIN and PZ2_CH_XIN of this new size, and place PZ1_CH and PZ2_CH into PZ1_CH_XIN and PZ2_CH_XIN respectively, starting from the (0, 0) position;
Step 505: carry out fusion processing on the images PZ1_CH and PZ2_CH, or on PZ1_CH_XIN and PZ2_CH_XIN obtained in step 504; the fusion result is labeled Fusion;
Step 506: judge whether size correction was performed; if step 504 was executed, go to step 507, otherwise go directly to step 508;
Step 507: process the fused image Fusion: extract only the part starting from the (0, 0) position whose size equals that of PZ1_CH, and assign it to Fusion after correcting it back to the size of PZ1_CH;
Step 508: end preprocessing and fusing the images PZ1_CH and PZ2_CH according to the selected fusion algorithm.
6. The partial image fusion processing method based on the overlapped region according to claim 1, characterized in that said step 105 comprises the following steps:
Step 601: start seamlessly stitching the non-overlapped image regions;
Step 602: scan the PZ1_PD image row by row; wherever a pixel value in PZ1_PD equals 0, replace the pixel value at the same position in Fusion with the pixel value of img2 at that position;
Step 603: scan the PZ2_PD image row by row; wherever a pixel value in PZ2_PD equals 0, replace the pixel value at the same position in Fusion with the pixel value of img1 at that position;
Step 604: end seamlessly stitching the non-overlapped image regions.
CN201310234335.5A 2013-06-14 2013-06-14 Partial image fusion processing method based on overlapped region Expired - Fee Related CN103279923B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310234335.5A CN103279923B (en) 2013-06-14 2013-06-14 Partial image fusion processing method based on overlapped region


Publications (2)

Publication Number Publication Date
CN103279923A 2013-09-04
CN103279923B CN103279923B (en) 2015-12-23

Family

ID=49062430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310234335.5A Expired - Fee Related CN103279923B (en) 2013-06-14 2013-06-14 Partial image fusion processing method based on overlapped region

Country Status (1)

Country Link
CN (1) CN103279923B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101221661A (en) * 2008-01-29 2008-07-16 深圳市迅雷网络技术有限公司 Image registration method and device
CN101556692A (en) * 2008-04-09 2009-10-14 西安盛泽电子有限公司 Image mosaic method based on neighborhood Zernike pseudo-matrix of characteristic points
US20110002544A1 (en) * 2009-07-01 2011-01-06 Fujifilm Corporation Image synthesizer and image synthesizing method
CN101840570A (en) * 2010-04-16 2010-09-22 广东工业大学 Fast image splicing method
CN102968777A (en) * 2012-11-20 2013-03-13 河海大学 Image stitching method based on overlapping region scale-invariant feather transform (SIFT) feature points

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李琳娜: "Research on image stitching technology based on feature matching" (基于特征匹配的图像拼接技术研究), China Master's Theses Full-text Database, Information Science and Technology *
赵万金: "Research and application of automatic image stitching technology" (图像自动拼接技术研究与应用), China Master's Theses Full-text Database, Information Science and Technology *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104252705B (en) * 2014-09-30 2017-05-17 中安消技术有限公司 Method and device for splicing images
CN104252705A (en) * 2014-09-30 2014-12-31 中安消技术有限公司 Method and device for splicing images
CN107710276A (en) * 2015-09-30 2018-02-16 高途乐公司 The unified image processing of the combination image in the region based on spatially co-located
CN107710276B (en) * 2015-09-30 2022-05-13 高途乐公司 Method, system, and non-transitory computer readable medium for image processing
CN108230281B (en) * 2016-12-30 2020-11-20 北京市商汤科技开发有限公司 Remote sensing image processing method and device and electronic equipment
CN108230281A (en) * 2016-12-30 2018-06-29 北京市商汤科技开发有限公司 Remote sensing image processing method, device and electronic equipment
CN108347540A (en) * 2017-01-23 2018-07-31 精工爱普生株式会社 The production method of scanner, scanner program and scan data
CN106991645A (en) * 2017-03-22 2017-07-28 腾讯科技(深圳)有限公司 Image split-joint method and device
CN106991645B (en) * 2017-03-22 2018-09-28 腾讯科技(深圳)有限公司 Image split-joint method and device
US10878537B2 (en) 2017-03-22 2020-12-29 Tencent Technology (Shenzhen) Company Limited Image splicing method, apparatus, terminal, and storage medium
CN107085842A (en) * 2017-04-01 2017-08-22 上海讯陌通讯技术有限公司 The real-time antidote and system of self study multiway images fusion
CN107085842B (en) * 2017-04-01 2020-04-10 上海讯陌通讯技术有限公司 Self-learning multipath image fusion real-time correction method and system
CN108460724A (en) * 2018-02-05 2018-08-28 湖北工业大学 The Adaptive image fusion method and system differentiated based on mahalanobis distance
CN113810665A (en) * 2021-09-17 2021-12-17 北京百度网讯科技有限公司 Video processing method, device, equipment, storage medium and product

Also Published As

Publication number Publication date
CN103279923B (en) 2015-12-23


Legal Events

Code Description
C06 / PB01: Publication
C10 / SE01: Entry into force of request for substantive examination
C14 / GR01: Grant of patent or utility model
CF01: Termination of patent right due to non-payment of annual fee (granted publication date: 20151223; termination date: 20160614)