CN108734657B - Image splicing method with parallax processing capability - Google Patents


Info

Publication number
CN108734657B
CN108734657B (Application CN201810384920.6A)
Authority
CN
China
Prior art keywords
image
points
point
transformation
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810384920.6A
Other languages
Chinese (zh)
Other versions
CN108734657A (en)
Inventor
杨丰瑞
漆坤元
董昊
翁小莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Information Technology Designing Co ltd
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing Information Technology Designing Co ltd
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Information Technology Designing Co ltd, Chongqing University of Post and Telecommunications filed Critical Chongqing Information Technology Designing Co ltd
Priority to CN201810384920.6A
Publication of CN108734657A
Application granted
Publication of CN108734657B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/14 Transformations for image registration, e.g. adjusting or mapping for alignment of images

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image stitching method with parallax processing capability. The method breaks through the traditional stitching methods' requirement that the input images be coplanar, and provides an image stitching scheme based on multiple homography matrices. The scheme first designs a distinctive SIFT feature descriptor: circular rings are selected as the neighborhood for constructing the keypoint descriptor, and the gradient histogram is combined with gray-difference information as the feature information of the descriptor, finally forming a new 36-dimensional feature vector. A multi-homography-matrix method is then adopted for registration and alignment: the local projective transformation model is used first, with the projective transformation of each image guided by a grid, so that the overlapping areas can be aligned precisely and with minimal local distortion. After the projective transformation of each local region is completed, the projective transformation of each image is constrained by a global similarity transformation so that it is similar to a similarity transformation as a whole. The stitching result is therefore both accurately aligned and not overly distorted.

Description

Image splicing method with parallax processing capability
Technical Field
The invention belongs to the field of digital image processing, and in particular relates to an image stitching method with parallax processing capability.
Background
Image stitching combines two or more images with overlapping areas into a single image with a wide field of view and high resolution through a series of techniques. It has many applications in daily life, such as the panoramic shooting function of mobile phone cameras, three-dimensional panoramas, vehicle safety, and virtual reality. When a large object or scene needs to be photographed, the field of view is often enlarged with hardware such as panoramic cameras and wide-angle lenses, but such devices are expensive. A better approach is software-based: a computer uses image stitching software to stitch several images with overlapping areas together. With the emergence of various commercial stitching software in recent years, one might assume that image stitching technology is completely mature. In practice, however, software stitching results often fail to meet perceptual requirements; in particular, when the images are not captured in the way the software assumes (for example, when parallax exists and the images are not on the same plane), the stitched images may exhibit misalignment, ghosting, image distortion and similar phenomena. The image stitching method with parallax processing capability provided by the invention gives the stitched image parallax tolerance, achieving both higher alignment accuracy and lower image distortion.
The problem addressed by the invention is that traditional stitching methods place strict requirements on the input images: the optical centers of the cameras must nearly coincide during shooting, i.e., only when the imaging planes of the images are essentially the same plane can a satisfactory result be obtained by stitching directly with a single homography matrix. When the imaging planes of the input images are not the same plane or parallax exists, the alignment accuracy of the overlapping areas is low, blurring and ghosting occur, and the non-overlapping areas are distorted. The invention adopts a multi-homography-matrix method, using multiple homography matrices for registration and alignment: the local projective transformation model is used first, with the projective transformation of each image guided by a grid, so that the overlapping areas can be aligned precisely and with minimal local distortion. After the projective transformation of each local region is completed, the projective transformation of each image is constrained by a global similarity transformation so that it is similar to a similarity transformation as a whole.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art by providing an image stitching method with parallax processing capability that is parallax-tolerant and achieves both high alignment accuracy and low image distortion. The technical scheme of the invention is as follows:
an image stitching method with parallax processing capability, comprising the steps of:
step 1: constructing a scale space for the two input images to enable image data to have multi-scale characteristics;
step 2: determining extreme points in each scale space by using a uniform extreme point detection method, and removing edge points and low-contrast points;
step 3: selecting a circular area as the sampling area for constructing the keypoint descriptor, and combining the gradient histogram and gray-difference information as the feature information of the descriptor, finally forming a new 36-dimensional descriptor;
step 4: dividing the input image into N×N grids, applying a projective transformation to each grid, and then applying a similarity transformation to the whole image;
step 5: compensating the image deformation area, combining the local projective transformation and the global similarity transformation to obtain the homography matrix between the two images, registering with the obtained homography matrix, and then fusing the images to obtain the stitched image.
Further, in step 1) the image scale space is constructed by convolving the image with a Gaussian function, and a difference-of-Gaussian pyramid is constructed from the differences between adjacent images in the scale space.
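As a minimal sketch of this construction (Python with OpenCV and NumPy; the octave count, interval count, and base sigma are illustrative assumptions, not values fixed by the invention):

```python
import cv2
import numpy as np

def build_dog_pyramid(image, num_octaves=4, num_intervals=3, sigma0=1.6):
    """Build a difference-of-Gaussian pyramid: convolve the image with
    Gaussians of increasing scale, then subtract adjacent blurred images."""
    gray = image.astype(np.float32)
    k = 2.0 ** (1.0 / num_intervals)              # scale step between levels
    pyramid = []
    for _ in range(num_octaves):
        sigmas = [sigma0 * k ** i for i in range(num_intervals + 3)]
        gaussians = [cv2.GaussianBlur(gray, (0, 0), s) for s in sigmas]
        # difference between adjacent images in the scale space
        pyramid.append([g2 - g1 for g1, g2 in zip(gaussians, gaussians[1:])])
        # next octave: halve the image (simplified resampling)
        gray = cv2.resize(gray, (gray.shape[1] // 2, gray.shape[0] // 2),
                          interpolation=cv2.INTER_NEAREST)
    return pyramid
```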
Further, the uniform detection over the scale space in step 2) specifically includes: for pixels at the same scale, the pixels within a square region of radius r centered on the keypoint are taken as the detection region, and for the adjacent upper and lower scales, the 18 pixels corresponding to the keypoint (a 3×3 window at each of the two scales) are selected as the neighborhood, giving (2r+1)² - 1 + 18 = 4r² + 4r + 18 pixels in total, where r is:

[formula for the radius r, given in the original as an image]

x_i represents the position of feature point i, O is the set of all feature point positions, c_r is the robustness parameter, and DoG(x) is the difference-of-Gaussian response of the feature point.
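Since r is defined only by the formula image above, the sketch below takes r as given and illustrates just the neighborhood comparison described here; the function name and layer arguments are hypothetical:

```python
import numpy as np

def is_uniform_extremum(dog_prev, dog_cur, dog_next, y, x, r):
    """Check whether pixel (y, x) of the middle DoG layer is an extremum over
    a (2r+1)x(2r+1) square at its own scale plus 3x3 windows at the two
    adjacent scales, i.e. (2r+1)^2 - 1 + 18 = 4r^2 + 4r + 18 comparisons.
    Assumes (y, x) lies at least r pixels inside the layer borders."""
    v = dog_cur[y, x]
    same = dog_cur[y - r:y + r + 1, x - r:x + r + 1]   # (2r+1)^2 pixels
    above = dog_next[y - 1:y + 2, x - 1:x + 2]         # 9 pixels
    below = dog_prev[y - 1:y + 2, x - 1:x + 2]         # 9 pixels
    neighbors = np.concatenate([same.ravel(), above.ravel(), below.ravel()])
    # v itself appears once in `same`; it is an extremum iff it attains the
    # maximum or the minimum of the whole neighborhood
    return v >= neighbors.max() or v <= neighbors.min()
```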
Further, the precise positioning of the feature points removes low-contrast points by fitting a three-dimensional quadratic function and removes edge points via the Hessian matrix.
Further, taking the keypoint as the center and the pixels within the square region of radius r as the detection region, the method comprises the following steps:
(1) Taking the keypoint as the circle center and 8 pixels as the radius, determine the neighborhood of the feature point; then, with radii of 6 and 7 pixels, form 3 concentric circles of different radii, dividing the neighborhood into 3 sub-regions. Rotate the inner circular region so that its direction coincides with the main direction and divide it equally into 4 regions, denoted D₁, D₂, D₃, D₄; the two rings are denoted R₁ and R₂, respectively;
(2) Compute the gradient magnitudes and directions of all pixels in the four inner-circle regions and project them onto 8 directions to form a gradient orientation histogram. The 8-dimensional vector of region D₁ forms the first 8 elements of the feature vector, that of D₂ forms elements 9 to 16, that of D₃ elements 17 to 24, and that of D₄ elements 25 to 32, yielding a 32-dimensional feature vector;
(3) Normalize the 8 gradient directions in the four regions to ensure illumination invariance, i.e., for D_i = (D₁, D₂, D₃, D₄), normalization gives:

$\hat{d}_{ij} = d_{ij} \Big/ \sqrt{\sum_{i=1}^{4} \sum_{j=1}^{8} d_{ij}^{2}}$

where d_ij denotes the j-th gradient vector of the i-th region and $\hat{d}_{ij}$ denotes its normalized value.

Further, the differences between all pixel values in the two rings and the pixel value of the feature point are calculated: C_p = I_p - I_pc, where I_p is the pixel value of each point in the ring and I_pc is the pixel value of the feature point. All values with C_p ≥ 0 are accumulated:

$R_i^{+} = \sum_{C_p \ge 0} C_p$

and all values with C_p < 0 are accumulated:

$R_i^{-} = \sum_{C_p < 0} C_p$

where $R_i^{+}$ denotes the accumulated value of the positive differences within the i-th ring and $R_i^{-}$ denotes the accumulated value of the negative differences within the i-th ring.

Finally, a 36-dimensional descriptor is formed:

$\mathrm{Des} = \left( \hat{d}_{11}, \dots, \hat{d}_{48},\; R_1^{+}, R_1^{-}, R_2^{+}, R_2^{-} \right)$
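The following sketch is one plausible reading of steps (1) to (3) together with the ring sums (Python/NumPy). The sector and bin conventions, the handling of the main direction, and the assumption that the keypoint lies at least 9 pixels from the border are our own; the cyclic shift described later in the detailed description is omitted here:

```python
import numpy as np

def ring_descriptor(img, cy, cx, main_angle):
    """Sketch of the 36-D descriptor: a 4 x 8 gradient-orientation histogram
    over the quadrants of the inner circle (radius 6), plus the positive and
    negative gray-difference sums over rings R1 (radius 6-7) and R2 (7-8)."""
    img = img.astype(np.float32)
    gy, gx = np.gradient(img)                    # image gradients
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)

    hist = np.zeros((4, 8), dtype=np.float32)    # D1..D4, 8 bins each
    ring_sums = np.zeros(4, dtype=np.float32)    # R1+, R1-, R2+, R2-
    center_val = img[cy, cx]

    for dy in range(-8, 9):
        for dx in range(-8, 9):
            d = np.hypot(dx, dy)
            if d > 8 or (dy == 0 and dx == 0):
                continue
            y, x = cy + dy, cx + dx
            if d <= 6:                           # inner circle, 4 sectors
                theta = (np.arctan2(dy, dx) - main_angle) % (2 * np.pi)
                sector = int(theta // (np.pi / 2)) % 4
                rel = (ang[y, x] - main_angle) % (2 * np.pi)
                b = int(rel // (np.pi / 4)) % 8  # 8 orientation bins
                hist[sector, b] += mag[y, x]
            else:                                # the two rings
                ring = 0 if d <= 7 else 1
                c = img[y, x] - center_val       # gray difference C_p
                ring_sums[2 * ring + (0 if c >= 0 else 1)] += c

    vec = hist.ravel()
    vec = vec / (np.linalg.norm(vec) + 1e-12)    # illumination invariance
    return np.concatenate([vec, ring_sums])      # 32 + 4 = 36 dimensions
```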
Further, in step 4) the image is divided into N×N blocks, and the coordinates of the center point of each block are taken as the point p_* to be matched. With h the vectorized homography H and A the matrix stacking the two linear constraint rows contributed by each feature correspondence, h_* is estimated by weighted direct linear transformation:

$\mathbf{h}_* = \mathop{\arg\min}_{\mathbf{h}} \left\| W_* A \mathbf{h} \right\|^{2} \quad \text{s.t. } \left\| \mathbf{h} \right\| = 1$

where W_* ∈ R^{2N×2N} is a diagonal weight matrix,

$W_* = \operatorname{diag}\left( \left[ w_*^{1}\, w_*^{1} \;\; w_*^{2}\, w_*^{2} \;\; \cdots \;\; w_*^{N}\, w_*^{N} \right] \right)$

and each weight $w_*^{i}$ is determined by the Gaussian distance from the current point p_* to each feature point x_i on the image I:

$w_*^{i} = \exp\left( - \left\| p_* - x_i \right\|^{2} / \sigma^{2} \right)$

H_mdlt is recovered from the right singular vector of W_*A corresponding to its smallest singular value.
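A sketch of this weighted estimate for a single grid cell, following the standard moving-DLT construction (the exact form of the two DLT rows per correspondence and the value of σ are assumptions here; Hartley normalization is omitted for brevity):

```python
import numpy as np

def local_homography(src_pts, dst_pts, p_star, sigma=8.0):
    """Moving-DLT sketch: h* = argmin ||W* A h||^2 subject to ||h|| = 1,
    with Gaussian weights w_i = exp(-||p_star - x_i||^2 / sigma^2) applied
    to the two DLT rows of each correspondence."""
    src_pts = np.asarray(src_pts, dtype=np.float64)   # feature points x_i
    dst_pts = np.asarray(dst_pts, dtype=np.float64)
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
    A = np.asarray(rows)                              # 2N x 9
    w = np.exp(-np.sum((src_pts - np.asarray(p_star, dtype=np.float64)) ** 2,
                       axis=1) / sigma ** 2)
    WA = np.repeat(w, 2)[:, None] * A                 # W* A, W* diagonal
    _, _, Vt = np.linalg.svd(WA)
    return Vt[-1].reshape(3, 3)                       # H_mdlt, up to scale
```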
Further, step 5) of compensating the image deformation region and combining the local projective transformation with the global similarity transformation to obtain the homography matrix between the two images is specifically: RANSAC with threshold ε₁ is first used to remove outliers; RANSAC with threshold ε₂ is then used to find the homography of the plane with the largest number of inliers, where ε₁ < ε₂, and those inliers are removed. The process is repeated until the number of inliers is less than η. Each group of matched inliers is used to compute a similarity transformation; the rotation angle corresponding to each transformation is then examined, and the transformation with the smallest rotation angle is selected as the global similarity transformation S. The final multi-homography matrix is calculated:
$H_i' = \alpha H_i + \beta S$
where α denotes the weight of the homography matrix, i.e., the ratio of the image overlap area to the whole image, and β denotes the weight of the similarity transformation matrix; the two are related by α = 1 - β.
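The combination itself is a single weighted sum; a small sketch, assuming H_local is the per-grid estimate above and S is the selected global similarity embedded as a 3×3 matrix:

```python
import numpy as np

def blend_with_similarity(H_local, S, overlap_ratio):
    """H' = alpha * H + beta * S, with alpha the ratio of the overlap area
    to the whole image and beta = 1 - alpha (per the relation alpha = 1 - beta)."""
    alpha = float(overlap_ratio)
    beta = 1.0 - alpha
    return (alpha * np.asarray(H_local, dtype=np.float64)
            + beta * np.asarray(S, dtype=np.float64))
```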
Further, after the multi-homography matrix between the images is determined, a fusion strategy is used to achieve a natural and smooth transition over the overlapping area of the two images to be stitched.
Furthermore, direct average fusion is adopted to fuse the complementary, correlated and redundant information of the images, achieving visual uniformity and a smooth transition in the fusion area.
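A minimal sketch of direct average fusion, assuming the two inputs are registered 3-channel images in which pure black marks invalid (unwarped) pixels:

```python
import numpy as np

def direct_average_fusion(warped_a, warped_b):
    """Average the two registered images where both are valid (the overlap)
    and keep whichever image is valid elsewhere."""
    a = warped_a.astype(np.float32)
    b = warped_b.astype(np.float32)
    mask_a = a.sum(axis=-1) > 0          # valid (non-black) pixels of A
    mask_b = b.sum(axis=-1) > 0
    out = np.zeros_like(a)
    both = mask_a & mask_b
    out[both] = (a[both] + b[both]) / 2.0   # smooth transition in the overlap
    out[mask_a & ~mask_b] = a[mask_a & ~mask_b]
    out[mask_b & ~mask_a] = b[mask_b & ~mask_a]
    return out.astype(np.uint8)
```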
The invention has the following advantages and beneficial effects:
In the uniform detection over the scale space in step 2), for pixels at the same scale, the pixels within the region of radius r centered on the keypoint are taken as the neighborhood, and for the adjacent upper and lower scales, the 18 pixels corresponding to the keypoint are selected as the neighborhood. If the keypoint is larger or smaller than all of these pixels, it is an extreme point.
In step 3), the neighborhood of the feature point is determined by taking the feature point as the circle center and 8 pixels as the radius. Then, with radii of 6 and 7 pixels, 3 concentric rings of different radii are formed, dividing the neighborhood into 3 sub-regions; the inner circular region is rotated so that its direction coincides with the main direction and is divided into 4 regions: D₁, D₂, D₃, D₄. The two rings are R₁ and R₂, respectively. The gradient magnitudes and directions of all pixels in the four inner-circle regions are computed and projected onto 8 directions to form a gradient orientation histogram, and the 8 gradient directions in the four regions are normalized to ensure illumination invariance. The pixel differences between the rings and the feature point are added as further dimensions of the descriptor, forming the 36-dimensional descriptor. Using a circular area as the sampling region of the descriptor reduces the influence of rotation on the algorithm, adding pixel-difference information improves the robustness of the descriptor, and the low-dimensional descriptor greatly reduces the complexity of the algorithm and improves real-time performance.
In step 4), the images to be stitched are divided into overlapping and non-overlapping parts. The image is divided into N×N blocks, the coordinates of the center point of each block are used as the point to be matched, and the homography matrix of each block is estimated by weighted direct linear transformation. By computing the homography matrix of each block, the homography of the parallax image is fitted so that the two images to be stitched lie approximately on the same plane, which greatly improves the alignment accuracy of the overlapping area and eliminates adverse factors such as ghosting, gaps and seams that may occur in stitching.
Step 5) first uses RANSAC with threshold ε₁ to remove outliers, then uses RANSAC with threshold ε₂ to find the homography of the plane with the largest number of inliers, where ε₁ < ε₂, and removes those inliers. This process is repeated until the number of inliers is less than η. Each set of matched inliers is used to compute a similarity transformation, the rotation angle corresponding to each transformation is examined, and the one with the smallest rotation angle is selected. After the global similarity transformation is obtained, the local projective transformation and the global similarity transformation are combined using weight coefficients to obtain the final homography matrix. The global similarity transformation constrains the geometry of the image in the non-overlapping area and eliminates distortion, making the stitching result more natural and attractive.
Drawings
FIG. 1 is a schematic diagram of the uniform detection of extreme points according to the preferred embodiment of the present invention;
FIG. 2 is a gradient histogram direction assignment diagram;
FIG. 3 is a SIFT feature point descriptor structure diagram of the present invention;
FIG. 4 is a comparison graph of the splicing results of the multi-homography matrix splicing method adopted by the invention;
FIG. 5 is a flow chart of the multi-homography matrix splicing algorithm provided by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
the SIFT algorithm is to construct a scale space for two input remote sensing images to be matched for simulating multi-scale features of image data. The extreme point detection is carried out on a scale space, and the extreme point uniform detection is adopted in the invention, as shown in figure 1, so that the obtained characteristic points are uniformly distributed on the image. The invention selects a ring as a domain construction key point descriptor, combines the gradient histogram and the gray difference information as the characteristic information of the descriptor, and finally forms a new 36-dimensional descriptor, as shown in fig. 3. Not only is the complexity of the algorithm reduced, but also the robustness of the descriptor is enhanced. In order to reduce the requirement of an input image, parallax tolerance is provided. The invention adopts a method of multiple homography matrixes, and the multiple homography matrixes are used for registration and alignment. The local projective transformation model is used first, and the projective transformation of each image is guided by the grid. The projective transformation of each image is then constrained with a global similarity transformation such that it is similar to the similarity transformation as a whole. The flow chart is shown in the attached figure 5, and specifically comprises the following steps:
Step 1: construct an image scale space to simulate the multi-scale features of the image data. Uniform extreme point detection is adopted: for pixels at the same scale, the pixels within the region of radius r centered on the keypoint are taken as the neighborhood, and for the pixels at the adjacent scales the corresponding 18 pixels are selected as the neighborhood, giving (2r+1)² - 1 + 18 = 4r² + 4r + 18 pixels in total, with r given by:

[formula for the radius r, given in the original as an image]
Step 2: construct the descriptor based on the detected extreme points, as follows:
(1) Determine the neighborhood of the feature point by taking the feature point as the circle center and 8 pixels as the radius; then, with radii of 6 and 7 pixels, form 3 concentric rings of different radii, dividing the neighborhood into 3 sub-regions. Rotate the inner circular region so that its direction coincides with the main direction and divide it equally into 4 regions: D₁, D₂, D₃, D₄. The two rings are R₁ and R₂, respectively.
(2) Compute the gradient magnitudes and directions of all pixels in the four inner-circle regions and project them onto 8 directions to form a gradient orientation histogram. The 8-dimensional vector of region D₁ serves as the first 8 elements of the feature vector, that of D₂ as elements 9 to 16, that of D₃ as elements 17 to 24, and that of D₄ as elements 25 to 32, forming a 32-dimensional feature vector.
(3) Normalize the 8 gradient directions in the four regions to ensure illumination invariance, i.e., for D_i = (D₁, D₂, D₃, D₄), normalization gives:

$\hat{d}_{ij} = d_{ij} \Big/ \sqrt{\sum_{i=1}^{4} \sum_{j=1}^{8} d_{ij}^{2}}$
(4) For the obtained D_i, find the largest gradient-direction statistic. If that element is at the head of the 8-dimensional vector, the vector is final; if the largest element is not first, cyclically shift the whole vector to the left until the largest gradient-direction statistic moves to the first element of the vector (see the sketch after this list).
(5) Compute the pixel difference between each pixel point in the two rings and the feature point, C_p = I_p - I_pc, as the pixel distribution of the points within the rings, where I_pc is the pixel value of the feature point. All values with C_p ≥ 0 are accumulated:

$R_i^{+} = \sum_{C_p \ge 0} C_p$

and all values with C_p < 0 are accumulated:

$R_i^{-} = \sum_{C_p < 0} C_p$

Finally, a 36-dimensional descriptor is formed:

$\mathrm{Des} = \left( \hat{d}_{11}, \dots, \hat{d}_{48},\; R_1^{+}, R_1^{-}, R_2^{+}, R_2^{-} \right)$
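A minimal sketch of the cyclic shift described in step (4), applied to one 8-bin region histogram (Python/NumPy; the function name is illustrative):

```python
import numpy as np

def shift_max_to_front(hist8):
    """Cyclically left-shift an 8-bin orientation histogram until its largest
    gradient-direction statistic sits in the first element."""
    hist8 = np.asarray(hist8)
    return np.roll(hist8, -int(np.argmax(hist8)))
```

For example, shift_max_to_front([2, 9, 4, 1, 0, 0, 3, 5]) returns the histogram rotated to start at 9.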
Step 3: for model estimation over the overlapping area, as in FIG. 4, a direct linear transformation algorithm can be used to estimate the homography matrix. The image is divided into N×N blocks, the coordinates of the center point of each block are taken as the point p_* to be matched, and h_* is estimated by weighted direct linear transformation:

$\mathbf{h}_* = \mathop{\arg\min}_{\mathbf{h}} \left\| W_* A \mathbf{h} \right\|^{2} \quad \text{s.t. } \left\| \mathbf{h} \right\| = 1$
Step 4: RANSAC with threshold ε₁ is first used to remove outliers; RANSAC with threshold ε₂ is then used to find the homography of the plane with the largest number of inliers, where ε₁ < ε₂, and those inliers are removed. This process is repeated until the number of inliers is less than η. Each set of matched inliers is used to compute a similarity transformation; the rotation angle corresponding to each transformation is examined, and the transformation with the smallest rotation angle is selected as S. The final homography matrix is calculated: $H_i' = \alpha H_i + \beta S$
Step 5: after the transformation matrices between the images are determined, a fusion strategy can be used to achieve a natural and smooth transition over the overlapping area of the two images to be stitched. The invention uses direct average fusion to fuse the complementary, correlated and redundant information of the images, achieving visual uniformity and a smooth transition in the fusion area while effectively eliminating stitching seams.
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (7)

1. An image stitching method with parallax processing capability is characterized by comprising the following steps:
step 1: constructing a scale space for the two input images to enable image data to have multi-scale characteristics;
step 2: determining extreme points under each scale space by using an extreme point uniform detection method, and removing edge points and low-contrast points;
step 3: selecting a circular area as the sampling area for constructing the keypoint descriptor, and combining the gradient histogram and gray-difference information as the feature information of the descriptor, finally forming a new 36-dimensional descriptor;
step 4: dividing the input image into N×N grids, applying a projective transformation to each grid, and then applying a similarity transformation to the whole image;
step 5: compensating the image deformation area, combining the local projective transformation and the global similarity transformation to obtain the homography matrix between the two images, registering with the obtained homography matrix, and then performing image fusion to obtain the stitched image;
the uniform detection over the scale space in step 2) specifically comprises: for pixels at the same scale, taking the pixels within a square region of radius r centered on the keypoint as the detection region, and for the adjacent upper and lower scales, selecting the 18 pixels corresponding to the keypoint as the neighborhood, giving (2r+1)² - 1 + 18 = 4r² + 4r + 18 pixels in total, where r is:

[formula for the radius r, given in the original as an image]

x_i represents the position of feature point i, O is the set of all feature point positions, c_r is the robustness parameter, and DoG(x) is the difference-of-Gaussian response of the feature point;

taking the keypoint as the center and the pixels within the square region of radius r as the detection region comprises the following steps:
(1) taking the keypoint as the circle center and 8 pixels as the radius, determining the neighborhood of the feature point; then, with radii of 6 and 7 pixels, forming 3 concentric circles of different radii, dividing the neighborhood into 3 sub-regions; rotating the inner circular region so that its direction coincides with the main direction and dividing it equally into 4 regions, denoted D₁, D₂, D₃, D₄; the two rings being R₁ and R₂, respectively;
(2) computing the gradient magnitudes and directions of all pixels in the four inner-circle regions and projecting them onto 8 directions to form a gradient orientation histogram; the 8-dimensional vector of region D₁ serving as the first 8 elements of the feature vector, that of D₂ as elements 9 to 16, that of D₃ as elements 17 to 24, and that of D₄ as elements 25 to 32, forming a 32-dimensional feature vector;
(3) normalizing the 8 gradient directions in the four regions to ensure illumination invariance, i.e., for D_i = (D₁, D₂, D₃, D₄), normalization gives:

$\hat{d}_{ij} = d_{ij} \Big/ \sqrt{\sum_{i=1}^{4} \sum_{j=1}^{8} d_{ij}^{2}}$

where d_ij denotes the j-th gradient vector of the i-th region and $\hat{d}_{ij}$ denotes its normalized value;

calculating the differences between all pixel values in the two rings and the pixel value of the feature point: C_p = I_p - I_pc, where I_p is the pixel value of each point in the ring and I_pc is the pixel value of the feature point; all values with C_p ≥ 0 are accumulated:

$R_i^{+} = \sum_{C_p \ge 0} C_p$

and all values with C_p < 0 are accumulated:

$R_i^{-} = \sum_{C_p < 0} C_p$

where $R_i^{+}$ denotes the accumulated value of the positive differences within the i-th ring and $R_i^{-}$ denotes the accumulated value of the negative differences within the i-th ring;

finally, a 36-dimensional descriptor is formed:

$\mathrm{Des} = \left( \hat{d}_{11}, \dots, \hat{d}_{48},\; R_1^{+}, R_1^{-}, R_2^{+}, R_2^{-} \right)$
2. The image stitching method with parallax processing capability according to claim 1, wherein in step 1) the image scale space is constructed by convolving the image with a Gaussian function, and a difference-of-Gaussian pyramid is constructed from the differences between adjacent images in the scale space.
3. The image stitching method with parallax processing capability according to claim 1,
the precise positioning of the feature points removes low-contrast points by fitting a three-dimensional quadratic function and removes edge points via the Hessian matrix.
4. The image stitching method with parallax processing capability according to claim 1,
in step 4) the image is divided into N×N blocks, and the coordinates of the center point of each block are taken as the point p_* to be matched; with h the vectorized homography H and A the matrix stacking the two linear constraint rows contributed by each feature correspondence, h_* is estimated by weighted direct linear transformation:

$\mathbf{h}_* = \mathop{\arg\min}_{\mathbf{h}} \left\| W_* A \mathbf{h} \right\|^{2} \quad \text{s.t. } \left\| \mathbf{h} \right\| = 1$

where W_* ∈ R^{2N×2N} is a diagonal weight matrix,

$W_* = \operatorname{diag}\left( \left[ w_*^{1}\, w_*^{1} \;\; w_*^{2}\, w_*^{2} \;\; \cdots \;\; w_*^{N}\, w_*^{N} \right] \right)$

and each weight $w_*^{i}$ is determined by the Gaussian distance from the current point p_* to each feature point x_i on the image I:

$w_*^{i} = \exp\left( - \left\| p_* - x_i \right\|^{2} / \sigma^{2} \right)$

H_mdlt is recovered from the right singular vector of W_*A corresponding to its smallest singular value.
5. The image stitching method with parallax processing capability according to claim 4,
step 5) of compensating the image deformation region and combining the local projective transformation with the global similarity transformation to obtain the homography matrix between the two images specifically comprises: RANSAC with threshold ε₁ is first used to remove outliers; RANSAC with threshold ε₂ is then used to find the homography of the plane with the largest number of inliers, where ε₁ < ε₂, and those inliers are removed; the process is repeated until the number of inliers is less than η, and each group of matched inliers is used to compute a similarity transformation; the rotation angle corresponding to each transformation is then examined, and the transformation with the smallest rotation angle is selected, yielding the similarity transformation matrix S; the final multi-homography matrix is calculated:
$H_i' = \alpha H_i + \beta S$
where α denotes the weight of the homography matrix, i.e., the ratio of the image overlap area to the whole image, and β denotes the weight of the similarity transformation matrix; the two are related by α = 1 - β.
6. The image stitching method with parallax processing capability according to claim 5,
after the multi-homography matrix between the images is determined, a fusion strategy is utilized to achieve a natural and smooth transition over the overlapping area of the two images to be stitched.
7. The image stitching method with parallax processing capability according to claim 6,
and the direct average fusion method is adopted to fuse the complementarity, correlation and redundancy information of the images, so that the visual uniformity is realized, and the fusion region is in smooth transition.
CN201810384920.6A 2018-04-26 2018-04-26 Image splicing method with parallax processing capability Active CN108734657B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810384920.6A CN108734657B (en) 2018-04-26 2018-04-26 Image splicing method with parallax processing capability

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810384920.6A CN108734657B (en) 2018-04-26 2018-04-26 Image splicing method with parallax processing capability

Publications (2)

Publication Number Publication Date
CN108734657A CN108734657A (en) 2018-11-02
CN108734657B true CN108734657B (en) 2022-05-03

Family

ID=63939978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810384920.6A Active CN108734657B (en) 2018-04-26 2018-04-26 Image splicing method with parallax processing capability

Country Status (1)

Country Link
CN (1) CN108734657B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109658370A (en) * 2018-11-29 2019-04-19 天津大学 Image split-joint method based on mixing transformation
CN110111250B (en) * 2019-04-11 2020-10-30 中国地质大学(武汉) Robust automatic panoramic unmanned aerial vehicle image splicing method and device
CN110232673B (en) * 2019-05-30 2023-06-23 电子科技大学 Rapid and steady image stitching method based on medical microscopic imaging
CN110349086B (en) * 2019-07-03 2023-01-24 重庆邮电大学 Image splicing method under non-concentric imaging condition
CN110930301B (en) * 2019-12-09 2023-08-11 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN111898525A (en) * 2020-07-29 2020-11-06 广东智媒云图科技股份有限公司 Smoke recognition model construction method, smoke detection method and smoke detection device
CN112085653B (en) * 2020-08-07 2022-09-16 四川九洲电器集团有限责任公司 Parallax image splicing method based on depth of field compensation
US11190748B1 (en) 2020-11-20 2021-11-30 Rockwell Collins, Inc. Dynamic parallax correction for visual sensor fusion
TWI784605B (en) * 2021-06-29 2022-11-21 倍利科技股份有限公司 Image stitching method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105389787A (en) * 2015-09-30 2016-03-09 华为技术有限公司 Panorama image stitching method and device
CN105931185A (en) * 2016-04-20 2016-09-07 中国矿业大学 Automatic splicing method of multiple view angle image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9153025B2 (en) * 2011-08-19 2015-10-06 Adobe Systems Incorporated Plane detection and tracking for structure from motion

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105389787A (en) * 2015-09-30 2016-03-09 华为技术有限公司 Panorama image stitching method and device
CN105931185A (en) * 2016-04-20 2016-09-07 中国矿业大学 Automatic splicing method of multiple view angle image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Multi-Homography Matrix Registration and Misalignment Elimination Algorithms in Image Stitching; 王莹; China Master's Theses Full-text Database, Information Science and Technology; 2017-06-15; pp. 7-9 and 24-25 of the main text *

Also Published As

Publication number Publication date
CN108734657A (en) 2018-11-02

Similar Documents

Publication Publication Date Title
CN108734657B (en) Image splicing method with parallax processing capability
CN110211043B (en) Registration method based on grid optimization for panoramic image stitching
CN109903227B (en) Panoramic image splicing method based on camera geometric position relation
CN109003311B (en) Calibration method of fisheye lens
KR101643607B1 (en) Method and apparatus for generating of image data
CN104778656B (en) Fisheye image correcting method based on spherical perspective projection
CN110969670B (en) Multispectral camera dynamic three-dimensional calibration method based on significant features
CN109118544B (en) Synthetic aperture imaging method based on perspective transformation
CN111553939B (en) Image registration algorithm of multi-view camera
CN113191954B (en) Panoramic image stitching method based on binocular camera
CN106952219B (en) Image generation method for correcting fisheye camera based on external parameters
CN106886976B (en) Image generation method for correcting fisheye camera based on internal parameters
CN112801870B (en) Image splicing method based on grid optimization, splicing system and readable storage medium
CN110880191B (en) Infrared stereo camera dynamic external parameter calculation method based on histogram equalization
Lo et al. Efficient and accurate stitching for 360° dual-fisheye images and videos
CN110969669A (en) Visible light and infrared camera combined calibration method based on mutual information registration
CN111866523B (en) Panoramic video synthesis method and device, electronic equipment and computer storage medium
AU2015256320A1 (en) Imaging system, method, and applications
CN110264403A (en) It is a kind of that artifacts joining method is gone based on picture depth layering
CN113793266A (en) Multi-view machine vision image splicing method, system and storage medium
CN113012234A (en) High-precision camera calibration method based on plane transformation
CN110136048B (en) Image registration method and system, storage medium and terminal
CN108093188B (en) A method of the big visual field video panorama splicing based on hybrid projection transformation model
CN114255197A (en) Infrared and visible light image self-adaptive fusion alignment method and system
CN110120012B (en) Video stitching method for synchronous key frame extraction based on binocular camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant