CN106504194A - An image stitching method based on an optimal stitching plane and local features - Google Patents

An image stitching method based on an optimal stitching plane and local features

Info

Publication number
CN106504194A
CN106504194A
Authority
CN
China
Prior art keywords
image
region
plane
depth information
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610956645.1A
Other languages
Chinese (zh)
Other versions
CN106504194B (en)
Inventor
陈勇
詹帝
刘焕淋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN201610956645.1A priority Critical patent/CN106504194B/en
Publication of CN106504194A publication Critical patent/CN106504194A/en
Application granted granted Critical
Publication of CN106504194B publication Critical patent/CN106504194B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T3/14
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The present invention relates to an image stitching method based on an optimal stitching plane and local features. The method comprises the following steps. S1: estimate the image depth information. S2: segment the images based on the depth information, dividing each original image into a region to be registered and a non-registered region, and determine the optimal stitching plane with the two dividing lines as its edges. S3: with the optimal stitching plane as reference, compute the rotation angle about each image's dividing line as the axis, and transform the images into the reference plane. S4: extract FAST feature points and their feature descriptors in the region to be stitched. S5: compute Hamming distances for feature point matching, then perform fine matching with the depth information and the RANSAC algorithm to obtain matched point pairs. S6: compute the transformation matrix from the registered feature points to obtain the transformed registration-region image, and obtain the stitched image with a weighted fusion method based on focal-point distance. The method overcomes the stretching distortion introduced by global correction under large view-angle differences and yields stitched images better suited to human vision.

Description

An image stitching method based on an optimal stitching plane and local features
Technical field
The invention belongs to the field of image stitching technology and relates to an image stitching method based on an optimal stitching plane and local features.
Background technology
Image stitching combines several small-view-angle images with overlapping regions into one complete large-view-angle image, which has better image quality than a wide-angle image obtained purely by hardware. Image stitching mainly comprises three steps: preprocessing, image registration, and image fusion. Image registration is the core of image stitching, and image fusion is the final step in generating a wide-view image. Current image registration methods fall into three classes: transform-domain methods, gray-level-correlation methods, and feature-based methods. Feature-based registration performs best, but current algorithms typically take one image to be stitched as the reference and apply a global registration and transformation to the others. When the images to be stitched have a large view-angle difference, this tends to produce noticeable stretching distortion and degrades the stitching result. Meanwhile, to keep the algorithm real-time, most stitching algorithms adopt simple weighted fusion, which tends to produce stitching seams and ghosting. The stretching distortion of large-view-angle-difference stitching, and a fusion method with better stitching quality, are therefore the problems this invention addresses.
Summary of the invention
In view of this, an object of the invention is to provide an image stitching method based on an optimal stitching plane and local features that resolves the stretching distortion produced when multiple images are stitched under large parallax.
To achieve the above object, the present invention provides the following technical scheme:
An image stitching method based on an optimal stitching plane and local features, comprising the following steps:
S1: estimate the image depth information;
S2: segment the images based on the depth information, dividing each original image into a region to be registered and a non-registered region, and determine the optimal stitching plane with the two dividing lines as its edges;
S3: with the optimal stitching plane as reference, compute the rotation angle about each image's dividing line as the axis, and transform the images into the reference plane;
S4: extract FAST feature points and their feature descriptors in the region to be stitched;
S5: compute Hamming distances for feature point matching, then perform fine matching with the depth information and the RANSAC (Random Sample Consensus) algorithm to obtain matched point pairs;
S6: compute the transformation matrix from the registered feature points to obtain the transformed registration-region image, and obtain the stitched image with the weighted fusion method based on focal-point distance.
Further, in step S2, the registration scope is no longer global: registration is carried out in local regions lying at the same depth of field.
Further, in step S6, the weighted fusion method based on focal-point distance computes the weights from the distance of each pixel in the overlapping region to the focal points of the different images. Let O be any point in the overlapping region and let O1 and O2 be the focal points of the two images. If the distance OO1 is dw1 and the distance OO2 is dw2, then w1 and w2 are computed as
w1 = dw2^r / (dw1^r + dw2^r)
w2 = 1 − w1
where r is the fusion adjustment factor. Introducing the focal-point distance ties the weight distribution to pixel correlation and improves the fusion result.
The beneficial effects of the present invention are:
A feature matching method based on an optimal stitching plane is provided. The stitched image is divided into regions using depth information, the optimal stitching plane is obtained, and the images to be stitched are transformed into the reference plane. This overcomes the unreasonable handling of large-parallax images by world-coordinate transformation and avoids excessive stretching distortion in image boundary regions.
Pixel depth information improves the accuracy of feature matching and the precision of the transformation matrix. Because the same scene point observed under the same focal setting has the same depth, depth information can serve as the criterion for fine matching and raise matching precision.
A weighted fusion method based on focal-point distance is proposed, which effectively improves stitching quality. By the correlation principle, closer points are more strongly correlated and image quality is best at the focal point, so using the pixel-to-focal-point distance as the basis for the weights improves the fusion result.
Description of the drawings
To make the object, technical scheme, and beneficial effects of the present invention clearer, the following drawings are provided for explanation:
Fig. 1 is a flow diagram of the method of the invention;
Fig. 2 is a schematic diagram of the image-plane dividing lines;
Fig. 3 is a schematic diagram of the depth-based image-region division and the optimal stitching plane;
Fig. 4 is a spatial schematic diagram;
Fig. 5 is a schematic diagram of the stitching.
Specific embodiments
The preferred embodiments of the present invention are described in detail below with reference to the drawings.
The invention is broadly divided into three parts. The first part is the search for the optimal stitching plane and region segmentation: the region to be stitched is delimited according to the image depth information and the reference plane for stitching is obtained. The second part is the plane transformation based on the reference plane and local feature extraction: with the optimal stitching plane as reference, the images to be stitched are transformed into the reference plane, and ORB features are extracted from the region to be stitched. The third part is feature-point registration and local-region fusion: feature points are matched by Hamming distance, fine matching is performed by combining the depth information with the RANSAC algorithm, the correction matrix is computed, and the overlapping region is fused with the distance-weighted method based on the focal points to obtain the stitched image.
Fig. 1 is the flow diagram of the method of the invention; the specific steps are as follows:
1. First, the image depth information is estimated.
2. Region division based on depth information. When the plane changes within the image content, the depth values of the image pixels change accordingly. From the depth-information map estimated in step 1, the intersection of the two planes is found; as shown in Fig. 2, the red dotted line is the dividing line of the image, which splits it into the region to be stitched and the non-stitched region. The reference plane bounded by the dividing lines of the two images is the optimal stitching plane, as shown in Fig. 3.
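As an illustrative sketch (not part of the patent), the dividing line between two depth planes can be located by scanning for the largest jump in column-wise mean depth; the function name and the `jump` threshold are hypothetical:

```python
import numpy as np

def find_split_column(depth, jump=0.5):
    """Return the column index where the column-mean depth jumps most,
    i.e. a candidate dividing line between two scene planes, or None
    if no jump exceeds the (hypothetical) `jump` threshold."""
    col_means = depth.mean(axis=0)          # average depth per column
    diffs = np.abs(np.diff(col_means))      # change between neighbouring columns
    idx = int(np.argmax(diffs))
    return idx if diffs[idx] > jump else None
```

On a depth map whose left half lies on one plane and right half on another, the returned index marks the boundary column used to separate the region to be stitched from the non-stitched region.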
3. With the optimal stitching plane as reference and each image's dividing line as the axis, the rotation angle α is computed and the image is transformed into the reference plane by the perspective-transform principle, as shown in Fig. 4. From the geometric relationship,
β = θ (1)
The angle α can then be calculated from the view-angle difference θ of the two cameras; the computing formula is as follows:
4. ORB feature extraction in the region to be stitched is divided into FAST corner detection and feature-descriptor computation. Fig. 5 is a schematic diagram of the stitching.
1) FAST corner detection
An image pyramid is built first, and FAST keypoints are detected at each pyramid level with the standard FAST-9 detector. The detector examines the gray values of the 16 pixels on a circle around each candidate pixel; if there are 12 or more contiguous pixels whose gray values all exceed the center pixel's gray value plus a detection threshold T (or all fall below it minus T), the center pixel is judged to be a FAST corner. The Harris response of each keypoint is then computed, and the N keypoints with the largest responses are retained as the final detections.
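The segment-test criterion above can be sketched as a simplified single-scale check (assuming the 12-contiguous-pixel rule stated in the text; note the standard FAST-9 detector uses 9):

```python
import numpy as np

# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, y, x, t=10, n=12):
    """Segment test: True if >= n contiguous circle pixels are all brighter
    than center + t, or all darker than center - t."""
    c = int(img[y, x])
    vals = np.array([int(img[y + dy, x + dx]) for dx, dy in CIRCLE])
    for state in (vals > c + t, vals < c - t):
        doubled = np.concatenate([state, state])  # handle wrap-around runs
        best = cur = 0
        for s in doubled:
            cur = cur + 1 if s else 0
            best = max(best, cur)
        if best >= n:
            return True
    return False
```

A real detector would add the pyramid, the Harris-response ranking, and non-maximum suppression described in the text; this only shows the per-pixel criterion.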
2) Feature-descriptor computation
First, the gray-level centroid C of the local region around the feature point is found, and the vector from the feature point to the centroid determines the feature point's principal orientation. The moments of the local region are:
m_pq = Σ x^p y^q I(x, y) (3)
The centroid is then:
C = (m10/m00, m01/m00) (4)
The principal orientation of the FAST corner is:
θ = arctan(m01, m10) (5)
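Equations (3)-(5) can be checked with a small numpy sketch (the coordinate frame is centered on the patch, which the formulas leave implicit; the function name is illustrative):

```python
import numpy as np

def patch_orientation(patch):
    """Gray-level centroid orientation theta = arctan2(m01, m10) of a patch,
    per equations (3)-(5), with moments taken about the patch center."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs -= (w - 1) / 2.0                     # center the coordinate frame
    ys -= (h - 1) / 2.0
    m10 = (xs * patch).sum()                # first-order moments
    m01 = (ys * patch).sum()
    return np.arctan2(m01, m10)
```

A patch whose intensity mass lies to the right of the center yields an orientation of 0; mass below the center yields π/2.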
Next, 5 × 5 sub-windows are randomly chosen within the 31 × 31 pixel neighborhood of the feature point and their gray-level integrals are compared; choosing n such point pairs around the feature point yields a 2 × n matrix:
S = [x1, x2, …, xn; y1, y2, …, yn] (6)
Using the rotation Rθ determined by the principal orientation from feature detection, the matrix is rotated to obtain the new description matrix Sθ:
Sθ = Rθ S (7)
The final feature descriptor is then:
gn(p, θ) = fn(p) | (xi, yi) ∈ Sθ (8)
5. Feature matching and correction are carried out in the obtained region to be registered.
1) Hamming distances are computed from the feature descriptors, and the candidate matches are simultaneously screened by the depth information of the matched points: pairs whose depth difference exceeds a threshold T are excluded. After this coarse matching of feature points, RANSAC removes the remaining false matches, yielding the fine matches.
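A minimal sketch of this coarse screening (function names and the depth threshold are illustrative, not from the patent; the RANSAC stage would follow on the surviving pairs):

```python
import numpy as np

def hamming(d1, d2):
    """Hamming distance between two binary descriptors stored as uint8 arrays."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def depth_consistent(pairs, depth_l, depth_r, thresh=0.3):
    """Drop candidate matches whose two endpoints differ in depth by more
    than `thresh`: a true correspondence observed under the same focal
    setting should have equal depth in both images."""
    kept = []
    for (xl, yl), (xr, yr) in pairs:
        if abs(depth_l[yl, xl] - depth_r[yr, xr]) <= thresh:
            kept.append(((xl, yl), (xr, yr)))
    return kept
```

The depth test is cheap and removes gross mismatches before the more expensive RANSAC iteration.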
2) The coordinate transformation matrix between the images is computed from the registered feature point pairs, and the right image is corrected with the left image as reference.
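The patent does not specify how the transformation matrix is computed; a standard choice is the direct linear transform (DLT) for a homography from four or more matched pairs, sketched here as an assumption:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from >= 4 point
    correspondences via the direct linear transform (SVD null vector)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                      # normalize so H[2, 2] == 1
```

With the fine matches from the previous step as `src`/`dst`, the resulting H warps the right image into the left image's frame.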
6. The overlapping part of the region to be stitched is fused by the weighted fusion based on focal-point distance, as follows:
If a pixel of the fused image does not lie in the overlapping region, it takes the value of the corresponding source image; if it lies in the overlapping region, it is fused after the corresponding weights of the different pixels are computed. The formula is:
I(x, y) = I1(x, y), if (x, y) lies only in I1
I(x, y) = w1·I1(x, y) + w2·I2(x, y), if (x, y) lies in the overlap of I1 and I2 (9)
I(x, y) = I2(x, y), if (x, y) lies only in I2
where I1(x, y) and I2(x, y) are the images to be stitched, I(x, y) is the fused image, and w1 and w2 are the weights of the overlapping region for the two images, with w1 + w2 = 1 and 0 < w1, w2 < 1.
The weights based on focal-point distance are computed as follows:
Let O be any point in the overlapping region and let O1 and O2 be the focal points of the two images. If the distance OO1 is dw1 and the distance OO2 is dw2, then w1 and w2 are computed as
w1 = dw2^r / (dw1^r + dw2^r) (10)
w2 = 1 − w1 (11)
where r is the fusion adjustment factor. Fusing the registration region with the obtained weights yields the final stitching result.
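A numpy sketch of equations (9)-(11) over a fully overlapping region (the non-overlap cases of (9) are omitted; names are illustrative):

```python
import numpy as np

def fuse_overlap(I1, I2, O1, O2, r=1.0):
    """Blend two registered overlap images with focal-point-distance weights:
    w1 = d2**r / (d1**r + d2**r), w2 = 1 - w1 (equations (10)-(11)),
    where d1, d2 are each pixel's distances to focal points O1, O2 (x, y)."""
    ys, xs = np.mgrid[0:I1.shape[0], 0:I1.shape[1]].astype(float)
    d1 = np.hypot(xs - O1[0], ys - O1[1])
    d2 = np.hypot(xs - O2[0], ys - O2[1])
    w1 = d2**r / (d1**r + d2**r)            # nearer to O1 -> larger w1
    return w1 * I1 + (1.0 - w1) * I2
```

At a pixel equidistant from the two focal points the blend is a plain average, while at either focal point the corresponding image contributes fully, matching the principle that image quality is best at the focal point.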
Finally, it is noted that the preferred embodiments above are intended only to describe the technical scheme of the invention, not to limit it. Although the present invention has been described in detail through the preferred embodiments, those skilled in the art will understand that various changes may be made to it in form and detail without departing from the scope defined by the claims of the present invention.

Claims (4)

1. An image stitching method based on an optimal stitching plane and local features, characterized in that the method comprises the following steps:
S1: estimate the image depth information;
S2: segment the images based on the depth information, dividing each original image into a region to be stitched and a non-stitched region, and determine the optimal stitching plane with the two dividing lines as its edges;
S3: with the optimal stitching plane as reference, compute the rotation angle about each image's dividing line as the axis, and transform the images into the reference plane;
S4: extract FAST feature points and their feature descriptors in the region to be stitched;
S5: compute Hamming distances for feature point matching, then perform fine matching with the depth information and the RANSAC (Random Sample Consensus) algorithm to obtain matched point pairs;
S6: compute the transformation matrix from the registered feature points to obtain the transformed registration-region image, and obtain the stitched image with the weighted fusion method based on focal-point distance.
2. The image stitching method based on an optimal stitching plane and local features of claim 1, characterized in that in step S2 the region to be stitched is delimited with the boundary lines of the image depth information as reference.
3. The image stitching method based on an optimal stitching plane and local features of claim 1, characterized in that in step S3 the reference plane for image stitching is not a particular image to be stitched but the optimal stitching plane.
4. The image stitching method based on an optimal stitching plane and local features of claim 1, characterized in that in step S6 the weighted fusion method based on focal-point distance computes the weights from the distance of each pixel in the overlapping region to the focal points of the different images: let O be any point in the overlapping region and let O1 and O2 be the focal points of the two images; if the distance OO1 is dw1 and the distance OO2 is dw2, then w1 and w2 are computed as
w1 = dw2^r / (dw1^r + dw2^r)
w2 = 1 − w1
where r is the fusion adjustment factor; introducing the focal-point distance ties the weight distribution to pixel correlation and improves the fusion result.
CN201610956645.1A 2016-11-03 2016-11-03 An image stitching method based on an optimal stitching plane and local features Active CN106504194B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610956645.1A CN106504194B (en) 2016-11-03 2016-11-03 An image stitching method based on an optimal stitching plane and local features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610956645.1A CN106504194B (en) 2016-11-03 2016-11-03 An image stitching method based on an optimal stitching plane and local features

Publications (2)

Publication Number Publication Date
CN106504194A true CN106504194A (en) 2017-03-15
CN106504194B CN106504194B (en) 2019-06-21

Family

ID=58321234

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610956645.1A Active CN106504194B (en) 2016-11-03 2016-11-03 An image stitching method based on an optimal stitching plane and local features

Country Status (1)

Country Link
CN (1) CN106504194B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991645A * 2017-03-22 2017-07-28 腾讯科技(深圳)有限公司 Image stitching method and device
CN107767330A * 2017-10-17 2018-03-06 中电科新型智慧城市研究院有限公司 Image stitching method
CN109166077A * 2018-08-17 2019-01-08 广州视源电子科技股份有限公司 Image alignment method and device, readable storage medium, and computer equipment
CN109242811A * 2018-08-16 2019-01-18 广州视源电子科技股份有限公司 Image alignment method and device, computer-readable storage medium, and computer equipment
CN109544447A * 2018-10-26 2019-03-29 广西师范大学 Image stitching method, device, and storage medium
CN110009673A * 2019-04-01 2019-07-12 四川深瑞视科技有限公司 Depth information detection method and device, and electronic equipment
CN110021045A * 2019-04-17 2019-07-16 桂林理工大学 Equipment positioning method, device, positioning system, and electronic equipment
CN110060286A * 2019-04-25 2019-07-26 东北大学 A monocular depth estimation method
CN110349086A * 2019-07-03 2019-10-18 重庆邮电大学 An image stitching method for non-concentric imaging conditions
CN111429358A * 2020-05-09 2020-07-17 南京大学 Image stitching method based on planar-region consistency
CN114418920A * 2022-03-30 2022-04-29 青岛大学附属医院 Endoscope multi-focus image fusion method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050063608A1 (en) * 2003-09-24 2005-03-24 Ian Clarke System and method for creating a panorama image from a plurality of source images
US20130208997A1 (en) * 2010-11-02 2013-08-15 Zte Corporation Method and Apparatus for Combining Panoramic Image
CN104299215A * 2014-10-11 2015-01-21 中国兵器工业第二O二研究所 Image stitching method with feature point calibration and matching
CN104519340A * 2014-12-30 2015-04-15 余俊池 Panoramic video stitching method based on multi-depth image transformation matrices
CN104966270A * 2015-06-26 2015-10-07 浙江大学 Multi-image stitching method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu Wei (刘威) et al., "A feature point matching algorithm based on ORB detection," Laser & Infrared (《激光与红外》) *
Song Jiaqian (宋佳乾) and Wang Xiyuan (汪西原), "An image stitching algorithm based on improved SIFT feature point matching," Computer Measurement & Control (《计算机测量与控制》) *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10878537B2 (en) 2017-03-22 2020-12-29 Tencent Technology (Shenzhen) Company Limited Image splicing method, apparatus, terminal, and storage medium
CN106991645B * 2017-03-22 2018-09-28 腾讯科技(深圳)有限公司 Image stitching method and device
CN106991645A * 2017-03-22 2017-07-28 腾讯科技(深圳)有限公司 Image stitching method and device
CN107767330A * 2017-10-17 2018-03-06 中电科新型智慧城市研究院有限公司 Image stitching method
CN107767330B * 2017-10-17 2021-02-26 中电科新型智慧城市研究院有限公司 Image stitching method
CN109242811A * 2018-08-16 2019-01-18 广州视源电子科技股份有限公司 Image alignment method and device, computer-readable storage medium, and computer equipment
CN109242811B * 2018-08-16 2021-09-17 广州视源电子科技股份有限公司 Image alignment method and device, computer-readable storage medium, and computer equipment
CN109166077A * 2018-08-17 2019-01-08 广州视源电子科技股份有限公司 Image alignment method and device, readable storage medium, and computer equipment
CN109166077B * 2018-08-17 2023-04-18 广州视源电子科技股份有限公司 Image alignment method and device, readable storage medium, and computer equipment
CN109544447A * 2018-10-26 2019-03-29 广西师范大学 Image stitching method, device, and storage medium
CN109544447B * 2018-10-26 2022-10-21 广西师范大学 Image stitching method, device, and storage medium
CN110009673A * 2019-04-01 2019-07-12 四川深瑞视科技有限公司 Depth information detection method and device, and electronic equipment
CN110021045A * 2019-04-17 2019-07-16 桂林理工大学 Equipment positioning method, device, positioning system, and electronic equipment
CN110060286A * 2019-04-25 2019-07-26 东北大学 A monocular depth estimation method
CN110060286B * 2019-04-25 2023-05-23 东北大学 Monocular depth estimation method
CN110349086A * 2019-07-03 2019-10-18 重庆邮电大学 An image stitching method for non-concentric imaging conditions
CN111429358A * 2020-05-09 2020-07-17 南京大学 Image stitching method based on planar-region consistency
CN114418920A * 2022-03-30 2022-04-29 青岛大学附属医院 Endoscope multi-focus image fusion method

Also Published As

Publication number Publication date
CN106504194B (en) 2019-06-21

Similar Documents

Publication Publication Date Title
CN106504194A (en) A kind of image split-joint method based on most preferably splicing plane and local feature
CN105957007B (en) Image split-joint method based on characteristic point plane similarity
CN105245841B (en) A kind of panoramic video monitoring system based on CUDA
CN102006425B (en) Method for splicing video in real time based on multiple cameras
CN101512601B (en) Method for determining a depth map from images, device for determining a depth map
CN106910222A (en) Face three-dimensional rebuilding method based on binocular stereo vision
CN101542529B (en) Generation method of depth map for an image and an image process unit
CN107248159A (en) A kind of metal works defect inspection method based on binocular vision
CN101556692A (en) Image mosaic method based on neighborhood Zernike pseudo-matrix of characteristic points
CN110992263B (en) Image stitching method and system
CN108122191A (en) Fish eye images are spliced into the method and device of panoramic picture and panoramic video
CN114255197B (en) Infrared and visible light image self-adaptive fusion alignment method and system
CN107580175A (en) A kind of method of single-lens panoramic mosaic
CN104599258A (en) Anisotropic characteristic descriptor based image stitching method
CN107945111A (en) A kind of image split-joint method based on SURF feature extraction combination CS LBP descriptors
CN111160291B (en) Human eye detection method based on depth information and CNN
CN108470356A (en) A kind of target object fast ranging method based on binocular vision
CN108734657A (en) A kind of image split-joint method with parallax processing capacity
CN103902953B (en) A kind of screen detecting system and method
CN106464780B (en) XSLIT camera
CN113744315B (en) Semi-direct vision odometer based on binocular vision
CN110109465A (en) A kind of self-aiming vehicle and the map constructing method based on self-aiming vehicle
CN103793894A (en) Cloud model cellular automata corner detection-based substation remote viewing image splicing method
CN109493282A (en) A kind of stereo-picture joining method for eliminating movement ghost image
CN107679542A (en) A kind of dual camera stereoscopic vision recognition methods and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant