CN107301620A - Method for panoramic imaging based on camera array - Google Patents
- Publication number
- CN107301620A (application CN201710407833.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- images
- spliced
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications (all under G—PHYSICS; G06—COMPUTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)
- G06T3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
- G06T3/60 — Rotation of whole images or parts thereof
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/33 — Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
- G06T2207/20016 — Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
- G06T2207/20221 — Image fusion; Image merging
Abstract
The invention discloses a camera-array-based panoramic imaging method that mainly addresses the small stitched field of view and the "ghosting" artifacts of the prior art. The scheme is: 1) use an array camera to acquire multiple images; read in two images, extract SIFT features from each, perform feature matching, screen the resulting matching points, compute the optimal transformation matrix, warp one image according to that matrix, and fuse and stitch the images using an improved optimal-seam-line algorithm combined with weighted-average fusion; 2) repeat step 1) until the upper and lower rows of images have each been stitched, take these row panoramas as the images to be stitched, rotate them 90° counterclockwise, continue stitching, and rotate the result 90° clockwise to obtain the final stitched panoramic image. The invention markedly suppresses the ghosting phenomenon and yields a panorama with a large field of view and high resolution that is closer to the true scene; it can be used to stitch larger scene images in both the horizontal and vertical directions.
Description
Technical Field
The invention belongs to the technical field of image processing and in particular relates to a camera-array-based panoramic imaging method that can be used to stitch larger scene images in both the horizontal and vertical directions.
Background Art
With the development of science and technology, digital imaging is steadily reaching a new level and digital imaging devices are widely used in daily life; taking photographs with digital cameras, mobile phones, and similar devices has become an indispensable part of everyday life. At the same time, the limitations of single-camera imaging are increasingly apparent: in some special application scenarios, the constraints of the imaging device itself mean that users' needs cannot be satisfied well. For example, to obtain a wide-field, high-resolution image one must in many cases resort to a wide-angle camera, whose price is prohibitive.
To solve the above problems, image stitching technology emerged. It matches and aligns a series of small-field-of-view images with partially overlapping boundaries according to appropriate algorithms, fuses them, and finally stitches them into a single wide-field image.
The most direct application of image stitching is the panorama mode built into mobile phones, but its limitations are obvious: the phone can sweep only horizontally or only vertically, so the resulting image extends in a single direction; moreover, capturing such an image demands extremely steady hand-held shooting, otherwise the final image is distorted, the desired effect is not achieved, and the user experience suffers greatly.
The patent "A method of panoramic photographing and a mobile terminal" owned by Vivo Mobile Communication Co., Ltd. (application number 201610515352.x, filed 2016.06.30, authorization number CN 1059779156A, granted 2016.09.28) proposes a panoramic photographing method and a mobile terminal. The patented technology uses first, second, and third cameras: during panoramic shooting the three cameras are controlled to acquire three images, which are stitched at the same time to generate the target panoramic image, so that a panorama is obtained in a single shot without rotating the mobile terminal horizontally. The shortcoming of this method is that a single shot captures images in only one direction, and the final result cannot meet the needs of stitching certain large scenes.
Summary of the Invention
The purpose of the present invention is to address the above shortcomings of the prior art by proposing a camera-array-based panoramic imaging method that acquires images in both the horizontal and vertical directions in a single shot, meeting the needs of stitching larger scene images.
The basic idea of the present invention is: use a 2×3 camera array to obtain an array of images in which every pair of adjacent images shares an overlapping region, controlling the six cameras to capture images synchronously; extract features from the images and match them pairwise; finally perform image fusion to complete the stitching. The implementation steps are as follows:
(1) Use the array camera to acquire multiple images simultaneously, obtaining i images, with m ≤ i;
(2) Read in two images and extract scale-invariant feature transform feature points, i.e. SIFT feature points, from each;
(3) Perform feature matching on the SIFT feature points obtained in step (2) to obtain the matching points of each pair of images;
(4) Screen the matching points of each image pair obtained in step (3) and compute the optimal transformation matrix H;
(5) Transform the image according to the optimal transformation matrix obtained in step (4) and perform image fusion:
(5a) Warp either one of the two input images with the optimal transformation matrix obtained in step (4) so that the two images lie in the same coordinate system and share an overlapping region;
(5b) Apply brightness correction to the two images brought into the same coordinate system in (5a) so that their brightness difference is minimized;
(5c) Find an optimal seam line on the registered images;
(5d) Apply weighted-average fusion to the rectangle containing the optimal seam line to obtain the stitched panoramic image of the two images;
(6) Array image stitching:
(6a) Taking the previously stitched result and the next image to be stitched as the two input images, repeat steps (2)–(5) in a loop until the image to be stitched is the m-th image, m ≤ i, finally obtaining a horizontally stitched panorama of m images;
(6b) Repeat (6a) k times, with k ≤ i and k×m ≤ i, obtaining k horizontally stitched images, each composed of m images;
(6c) Take the first two of the k horizontal images as input, rotate them 90° counterclockwise, and repeat steps (2)–(5) to obtain their vertical mosaic. For each of the remaining k−2 horizontal images, take the previously stitched result and the next image (rotated 90° counterclockwise) as the two inputs and repeat steps (2)–(5) in a loop until the k-th image has been stitched; then rotate the resulting vertical image 90° clockwise to finally obtain the combined horizontal-and-vertical panorama of all i images.
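Steps (6a)–(6c) amount to a row-then-column loop. The sketch below illustrates only that structure: the hypothetical helper stitch_pair merely concatenates images side by side, standing in for the registration and fusion of steps (2)–(5), and NumPy's rot90 performs the two rotations. It is not the patented stitching itself.

```python
import numpy as np

def stitch_pair(left, right):
    """Stand-in for steps (2)-(5): here simply concatenated side by side."""
    h = min(left.shape[0], right.shape[0])
    return np.hstack([left[:h], right[:h]])

def stitch_array(images, k, m):
    """Steps (6a)-(6c): stitch a k x m camera-array capture.

    images: list of k*m arrays in row-major order (row 0 first).
    """
    # (6a)/(6b): stitch each row of m images into a horizontal strip.
    rows = []
    for r in range(k):
        strip = images[r * m]
        for c in range(1, m):
            strip = stitch_pair(strip, images[r * m + c])
        rows.append(strip)
    # (6c): rotate the strips 90 deg counterclockwise, stitch them as if
    # they were horizontal, then rotate the result 90 deg clockwise.
    pano = np.rot90(rows[0], 1)            # counterclockwise
    for strip in rows[1:]:
        pano = stitch_pair(pano, np.rot90(strip, 1))
    return np.rot90(pano, -1)              # clockwise

# A toy 2 x 3 capture of 4 x 5 frames.
imgs = [np.full((4, 5), idx, dtype=np.uint8) for idx in range(6)]
pano = stitch_array(imgs, k=2, m=3)
```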
Compared with the prior art, the present invention has the following advantages:
First, the invention combines array images with the SIFT algorithm and improves the SIFT algorithm, greatly reducing stitching time when the depth of field is essentially uniform;
Second, the invention effectively combines array images with a fusion algorithm and improves the existing search for the optimal seam line. Because the array camera captures all i images simultaneously, scene errors caused by the passage of time are reduced; in dynamic scenes in particular, object displacement between views is minimal, so the stitching quality is better. The improved optimal-seam-line algorithm can effectively route the seam around moving objects; experimental results show that the stitched images have almost no visible seams and the frequency of "ghosting" artifacts drops greatly;
Third, the invention uses an array camera to acquire the images, so a single shot suffices to produce the stitched panorama, reducing workload and yielding a stitched image with a larger field of view and higher resolution.
Brief Description of the Drawings
Fig. 1 is the implementation flowchart of the present invention;
Fig. 2 is the camera array used for image acquisition in the present invention;
Fig. 3 shows the six images captured by the camera array in the present invention;
Fig. 4 is the panoramic image obtained by stitching and fusing the six images of Fig. 3.
Detailed Description
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, the implementation steps of the present invention are as follows:
Step 1: acquire the images.
Images are captured with the camera array shown in Fig. 2, in which i cameras are arranged in a two-dimensional grid along the horizontal and vertical directions. By adjusting the position and focal length of each camera, the cameras always form rows and columns whose overall outline is a rectangle, yielding images that satisfy different requirements.
Each camera in the array can capture multiple images continuously; images with different fields of view are obtained by changing a camera's position, and each camera's focal length can be adjusted to obtain images with different depths of field.
After the camera positions and focal lengths have been adjusted, quickly press the start and stop keys to obtain several images from each camera. The images captured by every pair of adjacent cameras share an overlapping region, giving i overlapping images in total; this example uses, but is not limited to, six images.
Step 2: read in two images and extract SIFT features from each.
Common feature-point extraction algorithms include the Harris operator, the LOG operator, the SUSAN operator, and the SIFT algorithm. The present invention extracts feature points with the scale-invariant feature transform, i.e. SIFT feature points, in the following steps:
(2a) Construct the Gaussian pyramid and the difference-of-Gaussian pyramid and detect scale-space extrema;
(2a1) Constructing the Gaussian pyramid involves two steps: downsampling the image and Gaussian-smoothing it;
The number of pyramid levels n is computed from the original image size and the size of the top-level image:
n = log2{min(M, N)} − t,  t ∈ [0, log2{min(M, N)})
where M and N are the length and width of the original image, and t is the base-2 logarithm of the smallest dimension of the top-level image.
In the present invention, the focal length of each camera is adjusted so that the depth of field of the captured images is essentially uniform; the scales of the two images are then essentially the same, so when pyramids are built for the two images the number of levels n can be a value greater than 1 and less than 4, reducing stitching time;
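The level-count formula above can be checked numerically. A minimal sketch; the function name and the flooring of n to an integer level count are our assumptions:

```python
import math

def pyramid_levels(M, N, t=0):
    """n = log2(min(M, N)) - t, with t in [0, log2(min(M, N))).
    Flooring to an integer number of levels is our interpretation."""
    top = math.log2(min(M, N))
    if not 0 <= t < top:
        raise ValueError("t must lie in [0, log2(min(M, N)))")
    return int(math.floor(top - t))

# A 1024 x 768 frame with a 4-pixel top-level dimension (t = log2(4) = 2):
n = pyramid_levels(1024, 768, t=2)
```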
(2a2) Take the original image as the first level of the Gaussian pyramid and downsample it level by level; each downsampling yields the next pyramid level, up to level n, producing a series of images from large to small that form a tower from bottom to top, the initial image pyramid;
(2a3) Blur the single image at each level of the initial pyramid with Gaussians of different parameters, so that each level contains several Gaussian-blurred images; the images at one level are collectively called a group, yielding the Gaussian pyramid;
(2a4) Construct the difference-of-Gaussian (DoG) pyramid: within each group of the Gaussian pyramid obtained in (2a3), subtract each pair of vertically adjacent images;
(2a5) Detect spatial extremum points:
Take each pixel of every group of the DoG pyramid and compare it with its 26 neighbours in the same image and in the images directly above and below. If the pixel's value is the maximum or the minimum among them, that pixel is a scale-space extremum of the image at the current scale, where scale space is realized by the Gaussian pyramid and each image of each group corresponds to a different scale;
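The 26-neighbour comparison of step (2a5) can be sketched as follows; the function name and the toy DoG stack (three 5×5 difference images) are ours:

```python
import numpy as np

def is_scale_space_extremum(dog, s, y, x):
    """Step (2a5): compare dog[s, y, x] with its 26 neighbours, i.e. the
    3x3x3 cube spanning the previous, current, and next DoG images."""
    cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    v = dog[s, y, x]
    return bool(v == cube.max() or v == cube.min())

dog = np.zeros((3, 5, 5))
dog[1, 2, 2] = 10.0                         # planted maximum: an extremum
flat = np.zeros((3, 5, 5))
flat[0, 2, 2], flat[2, 2, 2] = 1.0, -1.0    # centre is neither max nor min
```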
(2b) Take the scale-space extrema of (2a) as key points and determine each key point's location and orientation:
(2b1) Remove low-contrast points by interpolation and eliminate edge responses, completing the precise localization of the key points;
(2b2) Assign an orientation to each feature point:
For each key point precisely localized in (2b1), collect the gradient magnitude and orientation distribution of the pixels in the 3σ neighbourhood window of the Gaussian-pyramid image in which the key point lies. The gradient magnitude m(x, y) and orientation θ(x, y) are:
m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²)
θ(x, y) = tan⁻¹((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)))
where L is the scale-space value at which the key point lies; the gradient magnitudes m(x, y) are weighted by a Gaussian with σ = 1.5σ_oct, and the radius of the 3σ neighbourhood window is 3 × 1.5σ_oct;
(2b3) Use a histogram to accumulate the gradients and orientations of the pixels in each key point's neighbourhood window: the histogram has one bin per 10° of orientation, 36 bins in all; a bin's direction is the pixel gradient orientation and its height is the accumulated gradient magnitude. The direction of the tallest bin is taken as the key point's dominant orientation, completing the orientation assignment;
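A minimal sketch of the 36-bin orientation histogram of step (2b3); the function name and the toy gradient values are ours:

```python
import numpy as np

def dominant_orientation(magnitudes, orientations_deg):
    """Step (2b3): 36-bin orientation histogram (10 degrees per bin),
    weighted by gradient magnitude; return the peak bin's direction."""
    bins = (np.asarray(orientations_deg) // 10).astype(int) % 36
    hist = np.zeros(36)
    np.add.at(hist, bins, magnitudes)          # accumulate magnitudes
    return int(hist.argmax()) * 10             # direction of tallest bin

# Two weak gradients near 20 deg, one strong gradient near 200 deg:
deg = dominant_orientation([1.0, 1.0, 5.0], [21.0, 25.0, 203.0])
```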
(2c) For each key point whose location and orientation have been determined, partition the surrounding image region into blocks and compute 8-direction gradient histograms within the 4×4 blocks centred on the key point; accumulating the gradient directions generates a distinctive 128-dimensional vector that describes the key point, giving the SIFT features of the two images.
Step 3: perform feature matching to obtain the matching points of each image pair.
The k-d tree algorithm and the best-bin-first (BBF) algorithm are used to search for feature matches between the two images, as follows:
(3a) Using the k-d tree algorithm, build a k-d tree over the feature points of the image to be stitched obtained in step (2);
(3b) Use the BBF algorithm to search for feature matches, matching the feature points of the two images:
(3b1) For each feature point of the input image, find in the k-d tree the two feature points of the image to be stitched nearest to it in Euclidean distance;
(3b2) Compute the ratio of the Euclidean distance from the given feature point to the first nearest neighbour to the Euclidean distance from the given feature point to the second nearest neighbour, and compare this ratio with the preset ratio threshold 0.49:
If the ratio is smaller than the threshold, accept the given feature point and its first nearest neighbour as a pair of matching points, matching the feature points of the two images; otherwise, do not accept them as a matching pair.
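The two-nearest-neighbour ratio test of (3b1)–(3b2) can be sketched as below. Brute-force search stands in for the k-d tree with BBF, and the descriptors are toy 2-D points rather than 128-D SIFT vectors:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.49):
    """Steps (3b1)-(3b2): for each descriptor of image A, take its two
    nearest neighbours in image B (brute force here, standing in for
    k-d tree + BBF) and accept only when d(first) / d(second) < ratio."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        first, second = np.argsort(dists)[:2]
        if dists[first] < ratio * dists[second]:
            matches.append((i, int(first)))
    return matches

a = np.array([[0.0, 0.0], [5.0, 5.0]])
b = np.array([[0.1, 0.0], [4.0, 4.0], [5.1, 5.0]])
m = ratio_test_matches(a, b)
```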
Step 4: screen the matching points of each image pair obtained in step 3 and compute the optimal transformation matrix H.
This step uses the RANSAC algorithm as follows:
(4a) Take the matching point pairs obtained in step 3 as the sample set and randomly draw one RANSAC sample, i.e. 4 matching point pairs, from it;
(4b) Compute the current transformation matrix L from these 4 matching point pairs;
(4c) From the sample set, the current transformation matrix L, and the error metric function, obtain the consensus set C that satisfies L and record its number of elements a;
(4d) Maintain an optimal consensus set, initially with 0 elements, and compare the current consensus-set size a with the size of the optimal consensus set: if a is larger, update the optimal consensus set to the current one; otherwise leave it unchanged;
(4e) Compute the current error probability p:
p = (1 − in_frac^s)^o
where in_frac is the fraction of the sample set contained in the current optimal consensus set, s is the minimum number of feature-point pairs needed to compute the transformation matrix, with s = 4, and o is the number of iterations;
(4f) Compare the current error probability p with the allowed minimum error probability 0.01:
If p is greater than the allowed minimum error probability, return to step (4a) until p is smaller than the minimum error probability;
If p is smaller than the allowed minimum error probability, the transformation matrix L corresponding to the current optimal consensus set is the desired optimal transformation matrix H, of size 3×3.
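The stopping rule of (4e)–(4f) can be evaluated directly. The helper names are ours; the closed-form iteration count simply solves p = (1 − in_frac^s)^o < 0.01 for o:

```python
import math

def ransac_iterations_done(in_frac, o, s=4, p_min=0.01):
    """Steps (4e)-(4f): p = (1 - in_frac**s)**o is the probability that
    all o random 4-point samples were contaminated; stop once p < p_min."""
    p = (1.0 - in_frac ** s) ** o
    return p < p_min

def iterations_needed(in_frac, s=4, p_min=0.01):
    """Smallest o with (1 - in_frac**s)**o < p_min."""
    return math.ceil(math.log(p_min) / math.log(1.0 - in_frac ** s))

o = iterations_needed(0.5)   # half of the matches are inliers
```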
Step 5: transform the image with the optimal transformation matrix obtained in step 4 and perform image fusion.
The traditional weighted-average fusion algorithm is prone to "ghosting", especially in dynamic scenes: when an array camera captures images that contain moving objects, plain weighted-average fusion performs poorly and the moving objects are hard to render. The present invention therefore combines an improved optimal-seam-line algorithm with weighted fusion, which markedly improves stitching affected by ghosting. The steps are as follows:
(5a) Warp either one of the two input images with the transformation matrix obtained in step (4) so that the two images lie in the same coordinate system;
(5b) Apply brightness correction to the two images brought into the same coordinate system in (5a), minimizing their brightness difference, as follows:
(5b1) Convert both the image to be stitched and the input image to grayscale and compute their pixel sums: first compute the sum g of the pixel values of the non-overlapping part of the input image and the sum v of the pixel values of the non-overlapping part of the image to be stitched; then compute the pixel sum q of the central rectangle of the overlap region, whose height is 1/2 of the overlap height and whose width is 1/1.5 (i.e. 2/3) of the overlap width; the pixel sum of the input image is then g + q and that of the image to be stitched is v + q;
(5b2) Compute the ratio b of the pixel sum of the image to be stitched to that of the input image and compare b with 1:
If b is less than 1, multiply every pixel value of the input image by b and go to (5c);
If b is greater than 1, multiply every pixel value of the image to be stitched by the reciprocal of b and go to (5c);
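A sketch of brightness-correction step (5b). The overlap is passed as an explicit box, and since the patent does not say from which image the central-rectangle sum q is taken, we take it from the input image (an assumption of ours):

```python
import numpy as np

def brightness_correct(input_img, stitch_img, overlap):
    """Steps (5b1)-(5b2) on grayscale images in one coordinate system.
    overlap = (y0, y1, x0, x1) is the shared overlap box; the central
    rectangle has 1/2 of the overlap height and 1/1.5 (= 2/3) of its
    width. q is taken from the input image (our assumption)."""
    y0, y1, x0, x1 = overlap
    h, w = y1 - y0, x1 - x0
    cy0, cx0 = y0 + h // 4, x0 + w // 6
    q = input_img[cy0:cy0 + h // 2, cx0:cx0 + int(w / 1.5)].sum(dtype=np.float64)
    g = input_img.sum(dtype=np.float64) - input_img[y0:y1, x0:x1].sum(dtype=np.float64)
    v = stitch_img.sum(dtype=np.float64) - stitch_img[y0:y1, x0:x1].sum(dtype=np.float64)
    b = (v + q) / (g + q)          # ratio of the two pixel sums
    if b < 1:                      # dim the brighter input image
        return input_img * b, stitch_img.astype(np.float64)
    return input_img.astype(np.float64), stitch_img * (1.0 / b)

inp = np.ones((4, 6))              # darker input image
sti = 3.0 * np.ones((4, 6))        # brighter image to be stitched
ci, cs = brightness_correct(inp, sti, overlap=(0, 4, 3, 6))
```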
(5c) Find an optimal seam line on the registered, brightness-corrected images:
(5c1) Convert both the input image and the image to be stitched to grayscale and subtract their corresponding pixels over the overlap region to obtain the difference image of the overlap; then compute the intensity value E(x, y) of each pixel of the difference image:
E(x, y) = |E_gray(x, y)| + E_geometry(x, y),
where E_gray is the difference of the grayscale values of the overlapping pixels and E_geometry is the difference of their structure values:
E_geometry = (∇x1 − ∇x2) × (∇y1 − ∇y2)
where ∇x1 − ∇x2 is the x-direction gradient difference of the corresponding overlap pixels of the input image and the image to be stitched,
and ∇y1 − ∇y2 is the y-direction gradient difference of those corresponding pixels;
∇x1 is the x-direction gradient at each overlap pixel of the input image, obtained by convolving the x-direction kernel S_x with each pixel of the input image's overlap region;
∇x2 is the x-direction gradient at each overlap pixel of the image to be stitched, obtained by convolving the x-direction kernel S_x with each pixel of that image's overlap region;
∇y1 is the y-direction gradient at each overlap pixel of the input image, obtained by convolving the y-direction kernel S_y with each pixel of the input image's overlap region;
∇y2 is the y-direction gradient at each overlap pixel of the image to be stitched, obtained by convolving the y-direction kernel S_y with each pixel of that image's overlap region;
Sx and Sy are both improved Sobel operator templates, for the x and y directions respectively:
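As a sketch, the intensity computation above can be written in Python with NumPy. The standard 3×3 Sobel kernels are used here as stand-ins, since the patent's improved templates Sx and Sy are given only in the original formulas:

```python
import numpy as np

def intensity_map(overlap1, overlap2):
    """E(x,y) = |E_gray(x,y)| + E_geometry(x,y) over the overlap of two
    grayscale images (float arrays of identical shape)."""
    # Standard 3x3 Sobel kernels as placeholders for the patent's
    # improved templates Sx and Sy (given in the original formulas).
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    sy = sx.T

    def corr2(img, k):
        # 'same'-size 2-D cross-correlation via zero padding; the sign
        # flip relative to true convolution cancels inside E_geometry.
        pad = np.pad(img, 1)
        out = np.zeros_like(img, dtype=float)
        h, w = img.shape
        for i in range(3):
            for j in range(3):
                out += k[i, j] * pad[i:i + h, j:j + w]
        return out

    e_gray = overlap1 - overlap2                          # gray-value difference
    e_geom = (corr2(overlap1, sx) - corr2(overlap2, sx)) * \
             (corr2(overlap1, sy) - corr2(overlap2, sy))  # structure difference
    return np.abs(e_gray) + e_geom
```

In practice the two overlap crops would come from the registered input image and the warped image to be stitched; any library 2-D filter (e.g. from SciPy or OpenCV) could replace the hand-rolled `corr2`.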
(5c2) Using dynamic programming, take each pixel in the first row of the difference image as the starting point of a seam and extend it downward: among the three adjacent points in the next row, choose the one with the smallest intensity value as the seam's next point, and continue in this way to the last row. Among all seams generated, select the one with the smallest sum of E(x,y) as the optimal seam;
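A minimal sketch of the seam search, using the standard dynamic-programming recurrence (which directly yields the seam of minimal total cost; the patent describes the equivalent process of extending a seam from every first-row pixel and then picking the one with the smallest sum of E(x,y)):

```python
import numpy as np

def best_seam(E):
    """Return, for each row of the cost map E, the column index of the
    minimum-total-cost top-to-bottom seam (moves limited to the three
    neighbours directly below)."""
    h, w = E.shape
    cost = E.astype(float).copy()   # accumulated cost per pixel
    back = np.zeros((h, w), dtype=int)
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(c - 1, 0), min(c + 2, w)
            k = lo + int(np.argmin(cost[r - 1, lo:hi]))
            back[r, c] = k          # best predecessor in the row above
            cost[r, c] += cost[r - 1, k]
    # trace the cheapest seam back from the bottom row
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for r in range(h - 1, 0, -1):
        seam[r - 1] = back[r, seam[r]]
    return seam
```

The O(h·w) table fill makes this practical even for large overlap regions.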
(5d) Apply weighted-average fusion to the rectangle containing the optimal seam to obtain the stitched panoramic image of the two images:
(5d1) After the minimum-cost seam is found, take the rectangular region containing the seam and extended by 10 pixels on each side, and apply a weighted average to the pixels within it to obtain the fused image of that region;
(5d2) Take the part to the left of the rectangular region from the input image, and the part to the right from the transformed image to be stitched, yielding the final fused image; the stitching of the two input images is then complete.
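The fusion step can be sketched as follows for single-channel images; `margin` stands in for the 10-pixel extension of step (5d1), and a simple linear weight is assumed (the patent specifies a weighted average without fixing the weight profile):

```python
import numpy as np

def blend_strip(left_img, right_img, seam_cols, margin=10):
    """Weighted-average fusion inside the strip extending `margin` pixels
    to each side of the seam; pure left/right content outside it.
    Both images are grayscale float arrays of identical shape."""
    h, w = left_img.shape
    out = np.zeros_like(left_img, dtype=float)
    for r in range(h):
        c0 = max(seam_cols[r] - margin, 0)
        c1 = min(seam_cols[r] + margin, w - 1)
        out[r, :c0] = left_img[r, :c0]          # left of the rectangle
        out[r, c1 + 1:] = right_img[r, c1 + 1:] # right of the rectangle
        # linear weight falling from 1 to 0 for the left image across the strip
        alpha = np.linspace(1.0, 0.0, c1 - c0 + 1)
        out[r, c0:c1 + 1] = (alpha * left_img[r, c0:c1 + 1]
                             + (1 - alpha) * right_img[r, c0:c1 + 1])
    return out
```

For color images the same weights would be broadcast over the channel axis.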
Step 6: stitching of the array images.
(6a) Taking the previously stitched image and the next image to be stitched as the two input images, repeat steps 2 to 5 in a loop until the image to be stitched is the third image, finally obtaining a horizontal panorama stitched from 3 images;
(6b) Repeat (6a) twice to obtain two horizontally stitched images, each composed of 3 images;
(6c) Check whether the height and width of the two horizontally stitched images are multiples of 4: if not, adjust them to the nearest multiple of 4; if so, leave them unchanged;
(6d) Take the two horizontally stitched images as input, rotate them 90° counterclockwise, repeat steps 2 to 5 to obtain their vertical stitching, then rotate the stitched result 90° clockwise, finally obtaining the combined horizontal-and-vertical panorama of all 6 images.
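Steps (6c) and (6d) can be sketched as below. `stitch_horizontal` is an assumed callable standing in for steps 2–5, and cropping down to a multiple of 4 is one reading of "adjust to the nearest multiple of 4" (padding up would work equally well):

```python
import numpy as np

def round_to_multiple_of_4(img):
    """Step (6c): crop height and width down to multiples of 4."""
    h, w = img.shape[:2]
    return img[:h - h % 4, :w - w % 4]

def stitch_vertically(top, bottom, stitch_horizontal):
    """Step (6d): reuse the horizontal pipeline for vertical stitching by
    rotating 90 deg CCW, stitching, then rotating back 90 deg CW."""
    a = np.rot90(round_to_multiple_of_4(top), k=1)       # counterclockwise
    b = np.rot90(round_to_multiple_of_4(bottom), k=1)
    pano = stitch_horizontal(a, b)                       # steps 2-5 (assumed)
    return np.rot90(pano, k=-1)                          # clockwise, back upright
```

Reusing the horizontal pipeline this way avoids maintaining a second, vertical variant of the registration and seam-search code.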
It should be noted that the method of the present invention is not limited to the stitching of six array images; it is broadly applicable. With enough cameras, or by moving a camera to capture separate views, more images can be stitched, and stitching of two, three, or four images can likewise be performed on the same basis, meeting a variety of needs and suiting many different scenarios. Moreover, because the array camera captures with all six cameras simultaneously, the accuracy of the stitching result is also guaranteed.
The effect of the present invention can be further illustrated by experiments.
1. Experimental conditions
The experimental system includes an array camera, as shown in Figure 2. The experiment was carried out in the VS2010 software environment.
2. Experimental content
Outdoor images were collected using the method of the present invention; the scene includes a moving person. The images captured by the array camera are shown in Figure 3, which contains six images with overlapping regions. Stitching the six images of Figure 3 with the method of the present invention yields the panorama of Figure 4.
As Figure 4 shows, the present invention stitches images containing dynamic objects well, with no "ghosting"; no obvious seam is observed anywhere in the stitched panorama; and compared with a single image, the stitched panorama has a larger field of view and more image detail, giving a high-quality stitching result.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710407833.3A CN107301620B (en) | 2017-06-02 | 2017-06-02 | Method for panoramic imaging based on camera array |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107301620A true CN107301620A (en) | 2017-10-27 |
CN107301620B CN107301620B (en) | 2019-08-13 |
Family
ID=60134594
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710407833.3A Active CN107301620B (en) | 2017-06-02 | 2017-06-02 | Method for panoramic imaging based on camera array |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107301620B (en) |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107948547A (en) * | 2017-12-29 | 2018-04-20 | 北京奇艺世纪科技有限公司 | Processing method, device and the electronic equipment of panoramic video splicing |
CN108111753A (en) * | 2017-12-14 | 2018-06-01 | 中国电子科技集团公司电子科学研究院 | A kind of high-resolution real time panoramic monitoring device and monitoring method |
CN108876723A (en) * | 2018-06-25 | 2018-11-23 | 大连海事大学 | A kind of construction method of the color background of gray scale target image |
CN109005334A (en) * | 2018-06-15 | 2018-12-14 | 清华-伯克利深圳学院筹备办公室 | A kind of imaging method, device, terminal and storage medium |
CN109166178A (en) * | 2018-07-23 | 2019-01-08 | 中国科学院信息工程研究所 | A kind of significant drawing generating method of panoramic picture that visual characteristic is merged with behavioral trait and system |
CN109470698A (en) * | 2018-09-27 | 2019-03-15 | 钢研纳克检测技术股份有限公司 | Across scale field trash quick analytic instrument device and method based on microphotograph matrix |
CN109754437A (en) * | 2019-01-14 | 2019-05-14 | 北京理工大学 | A method of adjusting the sampling frequency of graphics |
CN109961398A (en) * | 2019-02-18 | 2019-07-02 | 鲁能新能源(集团)有限公司 | Fan blade image segmentation and grid optimization joining method |
CN110020995A (en) * | 2019-03-06 | 2019-07-16 | 沈阳理工大学 | For the image split-joint method of complicated image |
CN110018153A (en) * | 2019-04-23 | 2019-07-16 | 钢研纳克检测技术股份有限公司 | The full-automatic scanning positioning of large scale sample universe ingredient and quantified system analysis |
CN110136224A (en) * | 2018-02-09 | 2019-08-16 | 三星电子株式会社 | Image fusion method and device |
CN110232673A (en) * | 2019-05-30 | 2019-09-13 | 电子科技大学 | A kind of quick steady image split-joint method based on medical micro-imaging |
CN110390640A (en) * | 2019-07-29 | 2019-10-29 | 齐鲁工业大学 | Template-based Poisson fusion image stitching method, system, equipment and medium |
CN110569927A (en) * | 2019-09-19 | 2019-12-13 | 浙江大搜车软件技术有限公司 | Method, terminal and computer equipment for scanning and extracting panoramic image of mobile terminal |
CN112365404A (en) * | 2020-11-23 | 2021-02-12 | 成都唐源电气股份有限公司 | Contact net panoramic image splicing method, system and equipment based on multiple cameras |
CN112529028A (en) * | 2019-09-19 | 2021-03-19 | 北京声迅电子股份有限公司 | Networking access method and device for security check machine image |
CN112541507A (en) * | 2020-12-17 | 2021-03-23 | 中国海洋大学 | Multi-scale convolutional neural network feature extraction method, system, medium and application |
CN113012030A (en) * | 2019-12-20 | 2021-06-22 | 北京金山云网络技术有限公司 | Image splicing method, device and equipment |
CN113079325A (en) * | 2021-03-18 | 2021-07-06 | 北京拙河科技有限公司 | Method, apparatus, medium, and device for imaging billions of pixels under dim light conditions |
CN113689331A (en) * | 2021-07-20 | 2021-11-23 | 中国铁路设计集团有限公司 | Panoramic image splicing method under complex background |
CN113822800A (en) * | 2021-06-11 | 2021-12-21 | 无锡安科迪智能技术有限公司 | Panoramic image splicing and fusing method and device |
CN114339157A (en) * | 2021-12-30 | 2022-04-12 | 福州大学 | A multi-camera real-time stitching system and method with adjustable observation area |
CN114463170A (en) * | 2021-12-24 | 2022-05-10 | 河北大学 | A large scene image stitching method for AGV applications |
CN114549301A (en) * | 2021-12-29 | 2022-05-27 | 浙江大华技术股份有限公司 | Image splicing method and device |
CN114693522A (en) * | 2022-03-14 | 2022-07-01 | 江苏大学 | Full-focus ultrasonic image splicing method |
CN114723757A (en) * | 2022-06-09 | 2022-07-08 | 济南大学 | High-precision wafer defect detection method and system based on deep learning algorithm |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101278565A (en) * | 2005-08-08 | 2008-10-01 | 康涅狄格大学 | Controlling Depth and Lateral Dimensions of 3D Images in Projected Panoramic Imaging |
US20120105574A1 (en) * | 2010-10-28 | 2012-05-03 | Henry Harlyn Baker | Panoramic stereoscopic camera |
CN105245841A (en) * | 2015-10-08 | 2016-01-13 | 北京工业大学 | A CUDA-based panoramic video surveillance system |
Non-Patent Citations (1)
Title |
---|
田军等: "全景图中投影模型与算法", 《计算机系统应用》 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||