CN104506828B - A fixed-point, fixed-orientation real-time video stitching method for images with no effective overlap and variable structure - Google Patents


Info

Publication number: CN104506828B
Application number: CN201510016447A
Authority: CN
Grant status: Grant
Other versions: CN104506828A (application publication)
Other languages: Chinese (zh)
Prior art keywords: image, sub-image, video, line, step
Inventors: 蒋朝辉, 陈致蓬, 桂卫华, 阳春华, 邓康
Original assignee: 中南大学 (Central South University)
Note: The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.
Abstract

The invention discloses a fixed-point, fixed-orientation real-time video stitching method for images with no effective overlap and variable structure. The method comprises: separately capturing video stream information at different positions; dividing the compressed video stream information chronologically into multiple first static video frame groups; converting the static image corresponding to each video stream in a first static video frame group into a top view of the photographed object, forming a second static video frame group; and performing positioning, coarse panoramic stitching, compensation and fusion, and fine panoramic stitching on each video frame in the second static video frame group to obtain a real-time panoramic video stream. The invention stitches images of the same scene captured from different viewing angles and different directions, with no effective overlap and variable structure, and then generates a video stream in real time. The stitching method not only improves image stitching accuracy but also preserves stitching efficiency, meeting the real-time requirement of video stream stitching.

Description

A fixed-point, fixed-orientation real-time video stitching method for images with no effective overlap and variable structure

TECHNICAL FIELD

[0001] The present invention relates to the technical field of real-time video image stitching, and more particularly to a fixed-point, fixed-orientation real-time video stitching method for images with no effective overlap and variable structure.

BACKGROUND

[0002] In recent years, with the rapid development of video stitching technology, it has been widely applied in building large-field-of-view high-resolution images, virtual reality, medical imaging, remote sensing, and military applications. Video stitching mainly comprises image stitching and real-time video synthesis. First, image stitching is the core of video stitching and includes two key steps, image registration and image fusion: image registration is the basis of image stitching, and its goal is to match multiple images taken at different camera positions and angles; image fusion synthesizes a high-quality image by removing the intensity or color discontinuities between adjacent images caused by geometric correction, dynamic scenes, or illumination changes. Second, real-time video synthesis is achieved by improving the execution efficiency of the image stitching algorithm on parallel computing architectures such as FPGAs, Intel's IPP, and NVIDIA's CUDA.

[0003] From the perspective of image acquisition, stitching scenarios fall roughly into four categories: 1) a single camera on a fixed pivot, with the lens rotating to capture the same scene; 2) a single camera fixed on a rail, translating to capture the same scene; 3) multiple cameras capturing the same scene from different view angles and different directions, with overlap regions between images that are effective for stitching; 4) multiple cameras capturing the same scene from different view angles and different directions, with no overlap regions effective for stitching, and possibly even small gaps and holes between images. The practical problem studied by the present invention falls in the fourth category: using multiple fixed-point, fixed-orientation cameras to capture and stitch video of the same scene.

[0004] From the perspective of image registration, the core technique of image stitching, there are currently two registration approaches: the phase correlation method and the geometric feature method. The phase correlation method is scene-independent and can precisely align images related by a pure two-dimensional translation. However, it is only suitable for registration problems involving pure translation or rotation; under affine or perspective transformation models it cannot register the images, and in practice the camera position can never be kept absolutely parallel to the imaging plane, so its practical value is low. The geometric feature method stitches images based on low-level geometric features of the image, such as edge models, corner models, and vertex models. But it requires that the two images share a certain amount of overlap region and that the matched images have temporally consistent features, so it is powerless for stitching images of the same scene captured by multiple cameras from different view angles and different directions with no overlap region.

[0005] From the perspective of image fusion, the aim is to remove the seams between images in both color/brightness and structure. There are many fusion methods. For image color and brightness, simple ones include intensity-weighted averaging and weighted-average fusion; more complex ones include the image Voronoi weight method and Gaussian spline interpolation. The core idea is to first segment the images, use the overlap region as the matching reference, and then fuse the seams by image correction, color transformation, and pixel interpolation. For excessive structural differences between images, a simple feathering method is generally used, averaging the weights by Euclidean distance and then applying filtering to remove the blurring caused by feathering and the ghosting in the stitched video. Clearly, existing fusion methods cannot handle the seams between non-overlapping images of the same scene captured by multiple cameras from different view angles and different directions, and for real-time video stream fusion the prior art likewise cannot meet the real-time requirement.

[0006] The invention patent with publication number CN103501415A is a real-time video stitching method based on structural deformation of the overlap region. It first computes the seam of each of the two images, then extracts and matches one-dimensional feature points along the two seams, moves the matched feature points to coincident positions while recording the displacements, diffuses the structural deformation within a preset deformation influence range, finally computes the gradient map of the deformed structure, and completes image fusion in the gradient domain to obtain the final stitched image. To meet the real-time requirement of video stitching, that patent implements the stitching algorithm on an FPGA, achieving fast and efficient video stitching. Clearly, the method of that patent cannot stitch non-overlapping images of the same scene captured by multiple cameras from different view angles and different directions, and so cannot meet the actual needs of the practical problem studied by the present invention.

[0007] The invention patent with publication number CN101593350A is a depth-adaptive video stitching method, apparatus, and system. Its video stitching system comprises a camera array, a calibration apparatus, a video stitching apparatus, and a pixel interpolator and blender. First, the camera array generates multiple source videos; then the calibration apparatus performs epipolar calibration and camera position calibration, determines the seam region of each pair of spatially adjacent images among the source videos, and generates a pixel index table; next, the video stitching apparatus updates the pixel index table with compensation terms formed from average offset values; finally, the pixel interpolator and blender use the updated pixel index table to combine the multiple source videos into a panoramic video. Clearly, that patent seeks a balance between stitching quality and stitching efficiency by simplifying the image stitching algorithm so as to stitch video streams, but given the diversity and complexity of captured video, a single serial stitching algorithm simply cannot meet the real-time requirement of stitching video streams with large data volumes and heavy computation, so its practicality is poor.

SUMMARY

[0008] In view of the deficiencies in the art, the object of the present invention is to provide a fixed-point, fixed-orientation real-time stitching method for images of the same scene captured by multiple cameras from different view angles and different directions with no effective overlap and variable structure.

[0009] The technical solution achieving the above object of the present invention is as follows:

[0010] The present invention discloses a fixed-point, fixed-orientation real-time video stitching method for images with no effective overlap and variable structure, the method comprising the following steps:

[0011] Step 1: install a multi-camera video capture array, capture video stream information at different positions, and perform analog-to-digital conversion, synchronization, and compression on the video stream information;

[0012] Step 2: convert the compressed video stream information into the same video format and divide it chronologically into multiple first static video frame groups, where each first static video frame group contains the n video streams captured by the multi-camera video capture array at the same instant;

[0013] Step 3: convert the static image corresponding to each video stream in the first static video frame group into a top view of the photographed object according to a side-view-to-top-view geometric model, forming a second static video frame group;

[0014] Step 4: position each video frame in the second static video frame group according to a positioning model and perform coarse panoramic stitching to obtain a coarse panoramic mosaic;

[0015] Step 5: according to the positioning model, determine in the coarse panoramic mosaic the seam positions with overlap regions, the seamless seam positions with neither holes nor overlap, and the seam positions in regions with holes or cracks;

[0016] Step 6: for seams with overlap regions, or seamless seams with neither holes nor overlap, fuse the seams using a brightness and color interpolation algorithm;

[0017] Step 7: for seams in regions with holes or cracks, the stitching process is as follows:

[0018] determine, according to the positioning model, the adjacency relations between the sub-images corresponding to the video frames in the second static video frame group, and between the sub-images and the holes or cracks, and identify the hole or crack sub-images from these adjacency relations;

[0019] extract the line features of the sub-images adjacent to the hole or crack sub-image;

[0020] match the extracted line features to obtain line feature pairs;

[0021] extrapolate boundary points from the line features of the adjacent regions to compensate for the cracks or holes;

[0022] fuse the seams of the hole- or crack-compensated coarse panoramic mosaic using a brightness and color interpolation algorithm, obtaining a finely stitched panoramic video frame;

[0023] Step 8: process the first static video frame groups of different instants according to Steps 3 to 7 to obtain panoramic video frames of different instants, and synthesize the panoramic video frames in chronological order to obtain a real-time panoramic video stream.
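The eight steps above amount to a per-instant processing loop. The following is a minimal illustrative sketch, not the patented implementation; the callables and names (`to_top_view`, `coarse_stitch`, `compensate_and_fuse`) are hypothetical stand-ins for Steps 3 to 7, and the string "frames" merely show the data flow.

```python
from typing import Any, Callable, List

Frame = Any  # placeholder for a decoded still image

def stitch_stream(frame_groups: List[List[Frame]],
                  to_top_view: Callable[[Frame], Frame],
                  coarse_stitch: Callable[[List[Frame]], Frame],
                  compensate_and_fuse: Callable[[Frame], Frame]) -> List[Frame]:
    """Steps 3-8: per instant, convert each of the n sub-frames to a top
    view, coarsely stitch them by position, then compensate holes/cracks and
    fuse seams; the per-instant panoramas form the output stream."""
    panorama_stream = []
    for group in frame_groups:                           # one group per instant (Step 2)
        top_views = [to_top_view(f) for f in group]      # Step 3
        coarse = coarse_stitch(top_views)                # Step 4
        panorama_stream.append(compensate_and_fuse(coarse))  # Steps 5-7
    return panorama_stream                               # Step 8: chronological stream

# trivial stand-ins, just to show the data flow
groups = [["cam1_t0", "cam2_t0"], ["cam1_t1", "cam2_t1"]]
stream = stitch_stream(groups,
                       to_top_view=lambda f: f.upper(),
                       coarse_stitch=lambda fs: "|".join(fs),
                       compensate_and_fuse=lambda p: p)
print(stream)  # ['CAM1_T0|CAM2_T0', 'CAM1_T1|CAM2_T1']
```

In a real system the three callables would be the FPGA and GPGPU stages described later in the document.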

[0024] Step 2 is performed after a synchronized video segmentation instruction is received, and after Step 2 finishes, the first static video frames are stored in chronological order.

[0025] In Step 3, the side-view-to-top-view geometric model is:

[0026]

$$
s\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
= \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} R_x & R_y & R_z & t \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{1}
$$

[0027] where s is the scale factor, fx and fy are the focal lengths of the camera, cx and cy are image correction parameters, Rx, Ry, Rz are the three column vectors of the rotation matrix, t is the translation vector, (x, y, z) are the element coordinates of the side view of the static image, and (X, Y, Z) are the top-view coordinates of the corresponding element.
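The transformation just parameterized is the standard pinhole projection, so its effect on a single point can be sketched directly. This is an illustrative sketch only: the function name and the numeric values (focal lengths, principal point, identity rotation) are assumptions, not values from the patent.

```python
import numpy as np

def project_point(X, Y, Z, fx, fy, cx, cy, R, t):
    """Apply s*[x, y, 1]^T = K [R | t] [X, Y, Z, 1]^T (the model above) to
    one top-view point (X, Y, Z), returning its side-view pixel (x, y)."""
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    P = K @ np.hstack([R, t.reshape(3, 1)])   # 3x4 projection matrix
    h = P @ np.array([X, Y, Z, 1.0])
    return h[:2] / h[2]                       # divide out the scale factor s

# identity rotation and zero translation: the point projects through K alone
R = np.eye(3)
t = np.zeros(3)
x, y = project_point(0.5, -0.25, 2.0, fx=800, fy=800, cx=320, cy=240, R=R, t=t)
print(round(x, 1), round(y, 1))  # 520.0 140.0
```

Calibrating fx, fy, cx, cy, R, and t is exactly what the checkerboard procedure described later in the document does.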

[0028] In Steps 4 and 5, the positioning model is:

[0029] [Equation (2), the positioning model, appears only as an image in the original (CN104506828BD00072).]

[0030] where (x0, y0, z0) are the coordinates of the camera lens center, (x1, y1, z1) are the coordinates of the intersection of the photographed object with the imaging plane xoy, (α, β, γ) are the direction angles of the generatrix of the cone corresponding to the camera's view field, (x2, y2, z2) are the coordinates of the intersection of the latitude circle of the camera's view field with the cone generatrix, and (x, y, z) are the coordinates of the intersection of the camera's view field with the imaging plane xoy.
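Since equation (2) itself is only available as an image, the following sketches just one piece of the positioning geometry that the text does describe: intersecting a ray from the lens center, given by its direction angles, with the imaging plane xoy. The function name and the numeric camera pose are hypothetical.

```python
import math

def ray_plane_footprint(p0, alpha, beta, gamma):
    """Intersect a ray through the lens center p0 = (x0, y0, z0), with
    direction cosines (cos a, cos b, cos g), with the plane z = 0 (xoy).
    This is how points such as P1 of the positioning model are obtained."""
    x0, y0, z0 = p0
    dx, dy, dz = math.cos(alpha), math.cos(beta), math.cos(gamma)
    s = -z0 / dz                       # ray parameter where z reaches 0
    return (x0 + s * dx, y0 + s * dy, 0.0)

# camera 2 m above the plane, tilted so its direction vector is (0.6, 0, -0.8)
ft = tuple(round(c, 6) for c in ray_plane_footprint(
    (0.0, 0.0, 2.0), math.acos(0.6), math.pi / 2, math.acos(-0.8)))
print(ft)  # (1.5, 0.0, 0.0)
```

Repeating this for the generatrices of the view cone delimits each camera's footprint on the object plane.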

[0031] In Step 4, the coarse panoramic stitching is specifically:

[0032] first, generate a blank image as large as the panoramic view field of the photographed object;

[0033] second, position the sub-image corresponding to each video frame in the second static video frame group using the positioning model, determining the position, size, and orientation of each sub-image within the blank image;

[0034] third, fill the sub-images one by one into the corresponding places in the blank image, according to the predetermined numbering order of the cameras in the multi-camera array and the positioning information of their captured sub-images, achieving the coarse panoramic mosaic.
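The three sub-steps above can be sketched as pasting located tiles into a blank canvas. This is a simplified grayscale sketch with made-up placements; the real positioning comes from the model of equation (2), and unfilled pixels are exactly the gaps that Steps 5 to 7 later compensate.

```python
import numpy as np

def coarse_stitch(canvas_shape, sub_images, placements):
    """Paste each located top-view sub-image into a blank panorama canvas.
    placements[i] = (row, col) of sub-image i's top-left corner as given by
    the positioning model; zeros that remain are holes/cracks."""
    canvas = np.zeros(canvas_shape, dtype=np.uint8)   # blank panorama
    for img, (r, c) in zip(sub_images, placements):
        h, w = img.shape
        canvas[r:r + h, c:c + w] = img                # fill in place
    return canvas

a = np.full((2, 3), 10, np.uint8)
b = np.full((2, 3), 20, np.uint8)
pano = coarse_stitch((2, 8), [a, b], [(0, 0), (0, 4)])
print(pano[0].tolist())  # [10, 10, 10, 0, 20, 20, 20, 0]  <- column 3 is a crack
```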

[0035] In Step 7, extracting the line features of the sub-images adjacent to the hole or crack sub-image is specifically:

[0036] let C(x, y) be the center pixel, and let L(x, y) and R(x, y) be the average gray values of the left and right neighborhoods of C(x, y) along a given direction; the ratio-of-averages estimate is then given by equation (3):

[0037] RoA_C(x, y) = max{ R(x, y)/L(x, y), L(x, y)/R(x, y) } (3)

[0038] then RoA_C(x, y) is compared with a predetermined threshold T0; when RoA_C(x, y) exceeds T0, point C is regarded as a boundary point;

[0039] the line feature segments in the sub-images adjacent to the hole or crack sub-image are extracted by the above algorithm and reorganized into line features.
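The ratio-of-averages test of equation (3) is simple enough to sketch directly. The neighborhood size and threshold below are illustrative choices, not values from the patent, and only the horizontal direction is scanned.

```python
import numpy as np

def roa_boundary_points(img, half=2, t0=3.0):
    """Eq. (3) along the horizontal direction: pixel C(x, y) is marked as a
    boundary point when max(R/L, L/R) of the mean gray values of its left
    and right neighborhoods exceeds the threshold T0."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    eps = 1e-9                              # guard against division by zero
    for y in range(h):
        for x in range(half, w - half):
            L = img[y, x - half:x].mean() + eps
            R = img[y, x + 1:x + 1 + half].mean() + eps
            out[y, x] = max(R / L, L / R) > t0
    return out

# a vertical step edge: gray 10 on the left half, gray 100 on the right
img = np.hstack([np.full((3, 4), 10.0), np.full((3, 4), 100.0)])
mask = roa_boundary_points(img)
print(mask[1].tolist())
```

The detected boundary pixels would then be chained into the line segments that the matching step consumes.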

[0040] In Step 7, matching the extracted line features is specifically:

[0041] describe each line feature by its corresponding line segment function; supposing there are n sub-images surrounding the hole or crack, first form the set I of the slopes of the line segment functions extracted from each sub-image, expressed by equation (4):

[0042] [Equation (4), the slope set I, appears only as an image in the original (CN104506828BD00083).]

[0043] where m, n, and l each denote the total number of line features extracted from the corresponding sub-image;

[0044] line feature matching between sub-images is achieved by equation (5):

[0045] [Equation (5), the line feature matching criterion, appears only as an image in the original (CN104506828BD00084).]

[0046] where the quantities shown (as an image in the original, CN104506828BD00085) are arbitrary elements of the set I, and T1 is the matching threshold; line features satisfying equation (5) form matched pairs.

In Step 7, compensating cracks or holes by extrapolating boundary points from adjacent-region line features is specifically:

[0047] first, from the corresponding first segment functions of the matched line feature pairs, construct a second segment function that fits all the line features of the corresponding feature pair; this second segment function is taken as a reasonable fit of the line feature across the hole or crack;

[0048] then, extrapolate the second segment function into the hole or crack, thereby determining the positions of the line features identified by the matched line feature pairs;

[0049] finally, for the extrapolated line features in the hole or crack, use the colors and brightness of the corresponding matched line feature pairs in the sub-images adjacent to the hole or crack sub-image, and fuse the seams by color and brightness interpolation.
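Paragraphs [0041] to [0049] can be sketched end to end: match segments across the gap, fit one "second segment function" to a matched pair, and evaluate it inside the gap. Because equations (4) and (5) are only available as images, the slope-difference criterion with threshold T1 below is our reading of them, and all names and numbers are illustrative.

```python
def match_by_slope(slopes_a, slopes_b, t1=0.1):
    """Pair line features of two sub-images flanking a crack: slopes k_i and
    k_j match when |k_i - k_j| < T1 (an assumed reading of eqs. (4)-(5))."""
    pairs = []
    for i, ka in enumerate(slopes_a):
        j, kb = min(enumerate(slopes_b), key=lambda t: abs(t[1] - ka))
        if abs(kb - ka) < t1:
            pairs.append((i, j))
    return pairs

def fit_line(points):
    """Least-squares y = k*x + b: the 'second segment function' fitted to
    all boundary points of one matched line feature pair."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return k, (sy - k * sx) / n

def bridge_crack(left_pts, right_pts, gap_xs):
    """Extrapolate the fitted line into the missing columns of the crack."""
    k, b = fit_line(left_pts + right_pts)
    return [(x, k * x + b) for x in gap_xs]

print(match_by_slope([0.5, 2.0, -1.0], [0.48, -0.95]))  # [(0, 0), (2, 1)]
# a line y = 2x + 1 observed on both sides of a crack at x = 3, 4
print(bridge_crack([(0, 1), (1, 3), (2, 5)], [(5, 11), (6, 13)], [3, 4]))  # [(3, 7.0), (4, 9.0)]
```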

[0050] In Steps 6 and 7, fusing the seams using the color and brightness interpolation algorithm is specifically:

[0051] supposing there are m sub-images adjacent to the hole or crack sub-image, the gray, color, and brightness values of a point P in the crack or hole can be computed from the gray, color, and brightness values of the point in each of the m sub-images nearest to P, by equation (6):

[0052] [Equation (6), the interpolation formula, appears only as an image in the original (CN104506828BD00086).]

[0053] where g(p) denotes any one of the gray, color, or brightness values at point P, g_i(x_i, y_i) denotes the corresponding value at the point of the i-th sub-image nearest to P, and ξ(x) is a linear weighting function;

[0054] the complete panoramic video frame is obtained by performing the above fusion operation on each pixel in the crack or hole, one by one.
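Since equation (6) itself is only an image, the following sketches its described structure, a weighted blend of the m nearest sub-image values, with an inverse-distance choice for the linear weight ξ. That choice of ξ, and the function name, are assumptions for illustration.

```python
def fuse_pixel(nearest, power=1.0):
    """Blend the value of the nearest point of each of the m neighboring
    sub-images, weighted by closeness to the hole pixel p (one assumed
    instantiation of eq. (6)). `nearest` is a list of (distance, value)."""
    eps = 1e-9                                   # avoid division by zero
    weights = [1.0 / (d + eps) ** power for d, _ in nearest]
    total = sum(weights)
    return sum(w * g for w, (_, g) in zip(weights, nearest)) / total

# a hole pixel 1 unit from a gray-100 image and 3 units from a gray-60 image
print(round(fuse_pixel([(1.0, 100.0), (3.0, 60.0)]), 2))  # 90.0
```

Running this per channel (gray, color, brightness) over every hole pixel yields the fused panorama, matching the per-pixel loop described in [0054].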

[0055] In Step 8, the panoramic video stream is compressed, stored, and displayed.

[0056] The apparatus corresponding to the method of the present invention comprises a multi-camera video capture array U3, a multi-channel synchronized video capture unit U4, a multi-channel synchronized video segmentation unit U5, a multi-channel side-view-to-top-view unit U6, a GPGPU video frame positioning, stitching, compensation, and fusion unit U7, and a real-time panoramic video stream generation unit U9, as shown in Fig. 2;

[0057] The multi-camera video capture array captures video stream information at different positions of the photographed object and passes it to the multi-channel synchronized video capture unit. The capture unit performs analog-to-digital conversion, synchronization, and compression on the received video streams and passes them to the multi-channel synchronized video segmentation unit. Upon receiving the synchronized video segmentation instruction of the capture unit, the segmentation unit converts the received information into the same video format, divides it chronologically into multiple first static video frame groups, each containing the n video streams captured by the array at the same instant, and passes the first static video frame groups to the multi-channel side-view-to-top-view unit. The side-view-to-top-view unit converts the static image corresponding to each video stream in each received first static video frame group into a top view of the photographed object, forming a second static video frame group, and passes it to the GPGPU video frame positioning, stitching, compensation, and fusion unit. That unit positions, stitches, compensates, and fuses each video frame in the second static video frame group, obtaining panoramic video frames of the photographed object, and passes them to the real-time panoramic video stream generation unit. The generation unit synthesizes the received panoramic video frames of different instants, in chronological order, into a real-time panoramic video stream. The GPGPU video frame positioning, stitching, compensation, and fusion unit is based on the CUDA parallel computing architecture.

[0058] Multi-camera video capture array

[0059] The multi-camera video capture array is an imaging array of n cameras installed with fixed installation parameters, as shown in Fig. 2. The cameras in the array, by using lenses with different view angles and different shooting angles, essentially cover the captured scene U1. However, between the camera images U2 of the same scene U1 there is no overlap region effective for stitching, and there may even be small gaps and holes.

[0060] Multi-channel synchronized video capture unit

[0061] As shown in Fig. 2, this unit consists of multiple video capture cards with multi-channel synchronized capture capability. Its workflow is as follows: the multi-channel synchronized video capture unit U4 converts the n channels of analog signals from the sources of the multi-camera video capture array U3 into digital signals through the A/D conversion modules on the capture cards and stores them in the on-board memory; the video compression and video synchronization chips on the capture cards then run synchronization and compression algorithms on the video channels, synchronizing the bulky video signals and compressing them into n smaller video streams, which are passed to the multi-channel synchronized video segmentation unit U5, completing the workflow.

[0062] Multi-channel synchronized video segmentation unit

[0063] This unit is an FPGA programmable hardware platform preloaded with the hardware logic circuit of a parallel image processing algorithm. The algorithm divides the n video streams passed in by the multi-channel synchronized video capture unit U4 of Fig. 2 chronologically into several static sub-image groups (note that in the present invention "video frame" and "sub-image" are used interchangeably), each group consisting of the n static images of the n video streams at the same instant; the image groups of successive instants are then passed in order to the multi-channel video frame side-view-to-top-view unit U6, completing the workflow of this unit.

[0064] Multi-channel video frame side-view-to-top-view unit

[0065] To save apparatus cost and improve integration, this unit is implemented on the same FPGA programmable hardware platform as the multi-channel synchronized video segmentation unit U5, as shown in Fig. 2. The hardware logic circuit of this unit's core image transformation algorithm is preloaded on the platform as the follow-on stage of unit U5; it converts the n video frames of the same instant from side views into top views, as if each camera faced the photographed object directly. For the image transformation algorithm, the invention builds an image geometric transformation model based on the multi-camera installation parameters for the side-view-to-top-view conversion, with the following steps:

[0066] (1) According to the camera imaging principle, the coordinate transformation from the camera's physical side-view plane coordinate system (x, y, z) to the camera's virtual top-view coordinate system (X, Y, Z) can be established as shown in the following equation:

[0067]

$$
s\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
= \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} R_x & R_y & R_z & t \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
$$

[0068] where the specific meanings of the parameters are as follows:

[0069] s: scale factor

[0070] fx, fy: focal lengths of the camera

[0071] cx, cy: image correction parameters

[0072] Rx, Ry, Rz: the three column vectors of the rotation matrix

[0073] t: translation vector

[0074] (2) Calibrate the imaging parameters of each camera using the multi-camera installation parameters together with standard images taken by the cameras, obtaining the imaging parameters needed for the side-view-to-top-view conversion. The process is shown in Fig. 3: taking the black-and-white checkerboard image U61 as an example, the calibration points U63 needed for the parameter computation are located by the imaging calibration procedure; the changes in the positions of the calibration points are used to build the system of equations needed to compute the parameters, thereby solving for the values of the parameters of the side-view-to-top-view transformation model and completing the camera imaging parameter calibration.

[0075] (3) Using the calibrated camera parameters, and the side-view-to-top-view image geometric transformation model built on them, the black-and-white checkerboard side view U61 shown in Fig. 3 can be transformed into the black-and-white checkerboard top view U62.

[0076] In summary, the workflow of the multi-channel video frame side-view-to-top-view unit U6 is as follows: it first receives the sub-image group of some instant (a first static video frame group) passed in by the multi-channel synchronized video segmentation unit U5; then, in camera numbering order, it converts the side views of the original sub-image group into top views using the image geometric transformation model based on the multi-camera installation parameters; it passes the converted sub-image group, as a new sub-image group, to the GPGPU video frame positioning, stitching, compensation, and fusion unit U7, and prepares to receive the sub-image group of the next instant, and so on, completing the workflow.
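The calibration step above builds a system of equations from checkerboard point correspondences and solves for the transformation parameters. As an illustrative stand-in (not the patent's exact parameterization), the same idea can be shown by solving a plane-to-plane homography with the direct linear transform; the point pairs and the known scaling transform are made-up test data.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: solve the 3x3 plane-to-plane homography H
    (up to scale) from >= 4 point correspondences, the same kind of linear
    system that checkerboard calibration points produce."""
    rows = []
    for (x, y), (X, Y) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        rows.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    A = np.array(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)        # null vector of A is the solution
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]                 # normalize the free scale

# a known transform, pure scaling by 2, recovered from 4 corner pairs
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (2, 0), (2, 2), (0, 2)]
H = homography_dlt(src, dst)
print(np.round(H, 6).tolist())  # ≈ [[2, 0, 0], [0, 2, 0], [0, 0, 1]]
```

With more correspondences than unknowns, the same SVD step yields the least-squares solution, which is why many checkerboard corners make the calibration robust.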

[0077] GPGPU video frame localization, stitching, compensation and fusion unit

[0078] As the key unit of the invention, this unit is an image processing software system running on a high-performance NVIDIA GPU hardware platform. It is developed on the CUDA parallel computing architecture and consists of four functional sub-modules: the sub-image localization module U71, the sub-image group fixed-point fixed-orientation coarse panorama stitching module U72, the sub-image group seam compensation module U73, and the sub-image group seam fusion module U74. Each of these sub-modules implements an image processing algorithm proposed in this invention; their principles are described below.

[0079] (1) Sub-image localization module

[0080] Since each camera in the multi-camera video capture array is installed in a fixed manner, as shown in Fig. 4, a localization model of the stitched sub-images can be established by combining this with the principle of view-field formation, so that the specific region captured by each camera is determined. The steps are as follows.

[0081] 1) First, from the fixed-point installation parameters of the single camera U711 shown in Fig. 4, the camera lens center point P0 U712 and the intersection P1 U713 of the camera center line l0 U714 with the xoy plane containing the photographed object can be determined. In the (x, y, z) coordinate system established in Fig. 4 their coordinates are P0(x0, y0, z0) and P1(x1, y1, z1), so the spatial line equation of the camera center line l0 is as follows:

[0082]

    (x - x0)/(x1 - x0) = (y - y0)/(y1 - y0) = (z - z0)/(z1 - z0)

[0083] 2) Next, from the fixed-orientation installation parameters of the single camera U711 shown in Fig. 4, the spatial direction angle of the camera lens center line U714 can be determined; combined with the camera's field of view, the direction angles (α, β, γ) of the generatrix l1 U712 of the cone forming the view field of camera U711 can be determined. Since the generatrix l1 U712 passes through the lens center point P0(x0, y0, z0), the equation of the spatial line l1 U712 is as follows:

[0084]

    (x - x0)/cos α = (y - y0)/cos β = (z - z0)/cos γ

[0085] 3) Finally, by the latitude-circle method, the view-field curve Γ2 U717 formed by camera U711 on the xoy plane can be obtained by eliminating the parameters x2, y2, z2 from the following system:

[0086]

Figure CN104506828BD00113

[0087] where M1, with coordinates (x2, y2, z2), is the intersection point of an arbitrary latitude circle h U715 with the generatrix l1.

[0088] In summary, the sub-image localization module needs only the fixed-point fixed-orientation installation parameters of each camera in the multi-camera video capture array U3 of Fig. 2 to establish each camera's view-field curve equation, from which the region and size of each stitched sub-image within the panorama are known in advance, achieving the localization of the stitched sub-images in the panorama.
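The localization model above can be illustrated numerically (an illustrative sketch, not the patent's implementation): given the lens centre P0, the optical-axis direction and half the field of view, each generatrix of the view cone is intersected with the object plane z = 0 to sample the footprint curve Γ2. The sampling scheme and helper names are assumptions.

```python
import numpy as np

def ray_ground_point(p0, d):
    """Intersect the ray from camera centre p0 with direction d with z = 0."""
    t = -p0[2] / d[2]
    return p0 + t * d

def footprint_curve(p0, axis, half_fov, n=90):
    """Sample the boundary curve of the view cone on the z = 0 ground plane.

    axis: optical-axis direction (pointing down toward the plane);
    half_fov: half the camera field of view, in radians.  Each generatrix
    is the axis tilted by half_fov, swept around the axis.
    """
    axis = axis / np.linalg.norm(axis)
    # Build an orthonormal basis (u, v) perpendicular to the axis.
    tmp = np.array([1.0, 0.0, 0.0])
    if abs(axis[0]) > 0.9:
        tmp = np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, tmp); u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    pts = []
    for th in np.linspace(0.0, 2.0 * np.pi, n, endpoint=False):
        d = np.cos(half_fov) * axis \
            + np.sin(half_fov) * (np.cos(th) * u + np.sin(th) * v)
        pts.append(ray_ground_point(p0, d))
    return np.array(pts)
```

For a camera looking straight down, the footprint degenerates to the expected circle of radius height · tan(half_fov).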

[0089] (2) Sub-image group fixed-point fixed-orientation coarse panorama stitching module

[0090] The function of this sub-module is to coarsely stitch into a panorama the top-view sub-image groups produced by the multi-channel video frame side-view-to-top-view unit U6, under the guidance of the sub-image localization module. Its workflow is as follows: first, a blank image as large as the panoramic view field is generated according to preset parameters; second, each sub-image in the received top-view sub-image group is located in turn by the sub-image localization module U71 of Fig. 5, determining its position, size and orientation in the blank image; third, following the predetermined index order of the cameras in the multi-camera array and the localization information of their captured sub-images, the sub-images are filled one by one into the corresponding places of the blank image, achieving the coarse stitching of the panorama; finally, the seams of the coarsely stitched panorama are calibrated, marking the specific position, shape and size of the overlap regions, the gap or hole regions, and the seamless regions, which completes the entire workflow of the sub-image group fixed-point fixed-orientation coarse panorama stitching module U72.
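The coarse stitching step can be sketched as follows (a hypothetical helper; grayscale sub-images, with top-left offsets standing in for the localization information):

```python
import numpy as np

def coarse_stitch(canvas_shape, sub_images, offsets):
    """Paste each located sub-image into a blank panorama canvas.

    offsets: per-image (row, col) of its top-left corner in the canvas,
    as predicted by the localization model.  Pixels that remain unwritten
    form the gap/hole regions; pixels written more than once form the
    overlap regions -- both are reported as masks for later compensation.
    """
    canvas = np.zeros(canvas_shape, dtype=np.float64)
    hits = np.zeros(canvas_shape, dtype=np.int32)
    for img, (r, c) in zip(sub_images, offsets):
        h, w = img.shape
        canvas[r:r + h, c:c + w] = img   # later images overwrite earlier ones
        hits[r:r + h, c:c + w] += 1
    holes = hits == 0     # seam/hole mask, to be compensated afterwards
    overlap = hits > 1    # overlap mask, handled by direct blending
    return canvas, holes, overlap
```

The two masks correspond to the seam calibration output of module U72: hole/crack regions and overlap regions.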

[0091] (3) Sub-image group seam compensation module

[0092] As shown in Fig. 6, this sub-module consists of three parts: the sub-image line feature extraction sub-module U731, the inter-sub-image line feature matching sub-module U732, and the neighboring-region line feature extrapolation sub-module U733, which compensates seams and holes by extrapolating boundary points. Their working principles are described below.

[0093] 1) Sub-image line feature extraction sub-module

[0094] In this sub-module, extracting the two-dimensional line features of each sub-image first requires detecting the step-type boundaries in the image. The invention uses the RoA (Ratio of Averages) algorithm to detect the boundaries of the sub-images: it decides whether a target pixel is an edge point by computing the ratio of the mean intensities of neighboring regions. Because the method uses region mean intensities, strong single-pixel fluctuations caused by speckle noise are greatly suppressed, making the line features obtained in this way highly reliable. To reduce computation, for each sub-image only the line features within a certain region adjacent to the seam are extracted. The algorithm works by comparing neighboring regions along a given direction. Its steps are: first, let C(x, y) be the center pixel, and let L(x, y) and R(x, y) be the mean gray values of the left and right regions adjacent to C(x, y) along some direction; the ratio of averages is then estimated as:

[0095] RoA: C(x, y) = max{R(x, y)/L(x, y), L(x, y)/R(x, y)}

[0096] Then RoA: C(x, y) is compared with a predetermined threshold T0; when RoA: C(x, y) exceeds the threshold, point C is considered a boundary point. Finally, the line-feature fragments extracted from the sub-image by the above algorithm are reorganized, by suitable means, into meaningful line features, completing the function of this sub-module.
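A direct, unoptimized sketch of the RoA test described above, along the horizontal direction (the patent applies it along a chosen direction near the seam; the window size and threshold here are illustrative):

```python
import numpy as np

def roa_edges(img, half_win=2, threshold=1.5):
    """Ratio-of-Averages edge detection along the horizontal direction.

    For each pixel C(x, y), compare the mean intensities L and R of the
    windows to its left and right; C is marked as a boundary point when
    max(R/L, L/R) exceeds the threshold.  Using area means suppresses
    single-pixel fluctuations caused by speckle noise.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    edges = np.zeros((h, w), dtype=bool)
    eps = 1e-9  # guard against division by zero in flat dark regions
    for y in range(h):
        for x in range(half_win, w - half_win):
            L = img[y, x - half_win:x].mean()
            R = img[y, x + 1:x + 1 + half_win].mean()
            roa = max((R + eps) / (L + eps), (L + eps) / (R + eps))
            edges[y, x] = roa > threshold
    return edges
```

On a clean intensity step the detector fires at the transition columns and stays silent in flat regions.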

[0097] 2) Inter-sub-image line feature matching sub-module

[0098] In this sub-module, matching the two-dimensional line features extracted from neighboring sub-images requires first expressing the line features mathematically. The invention uses mathematical fitting: the line features extracted from neighboring sub-images are described by corresponding line-segment functions. Taking a stitching hole as an example, suppose the hole is surrounded by n sub-images. First, the set I composed of the slopes of the line-segment functions extracted from each sub-image is expressed as:

[0099]

Figure CN104506828BD00121

[0100] where the subscripts m, n, l, etc. denote the total number of line features extracted from the corresponding sub-image; line feature matching between sub-images is then achieved using:

[0101]

Figure CN104506828BD00122

[0102] where

Figure CN104506828BD00123

are arbitrary elements of the set I, but

Figure CN104506828BD00124

do not simultaneously denote the same element, and the matching threshold T1 is a small positive number. Finally, the perfectly matched line features are recombined into line feature pairs, completing the line feature matching process for the sub-images.

[0103] Taking the stitching hole T734 shown in Fig. 7 as an example, the above line feature matching process can be described as the following steps. First, the line features extracted from the left image T731, the right image T732 and the bottom image T733 are {T7311, T7312, T7313, T7314}, {T7321, T7322} and {T7331, T7332, T7333}, respectively. Then the line features extracted from the three images are matched using the line feature matching algorithm of the matching formula. Finally, the matched line features are recombined into the line feature pairs {(T7311, T7331), (T7312, T7332), (T7313, T7322, T7333), (T7314, T7321)}, completing the line feature matching of the three images adjacent to the stitching hole of Fig. 7.
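The slope-based matching can be sketched as a greedy grouping over the slope sets (the grouping strategy and the comparison against a group's mean slope are the author's assumptions; the patent only specifies the pairwise slope-difference test against the threshold T1):

```python
def group_mean(group):
    """Mean slope of a match group; members are (image_idx, segment_idx, slope)."""
    return sum(m[2] for m in group) / len(group)

def match_line_features(slope_sets, t1=0.05):
    """Group line segments from neighbouring sub-images by slope.

    slope_sets: one list of segment slopes per sub-image (the sets in the
    text).  Segments from *different* sub-images are matched when their
    slopes differ by less than t1; only groups that actually join features
    from more than one sub-image are returned as feature pairs/groups.
    """
    groups = []
    for img_idx, slopes in enumerate(slope_sets):
        for seg_idx, k in enumerate(slopes):
            for g in groups:
                # never match two segments of the same sub-image
                if all(m[0] != img_idx for m in g) and abs(group_mean(g) - k) < t1:
                    g.append((img_idx, seg_idx, k))
                    break
            else:
                groups.append([(img_idx, seg_idx, k)])
    return [g for g in groups if len(g) > 1]
```

On slope sets mimicking the Fig. 7 example (four distinct slopes seen across three images), the grouping recovers four feature pairs.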

[0104] 3) Neighboring-region line feature extrapolation sub-module for seam and hole compensation

[0105] In this sub-module, because stitching holes such as T734 in Fig. 7 exist, compensating the image of hole T734 must be done by extrapolation from the available image information. In this invention, the matched line feature pairs of Fig. 7, obtained by the inter-sub-image line feature matching sub-module U732, are used for the compensation. Specifically: first, from the line-segment functions of the existing matched line feature pairs, a line-segment function is constructed that satisfies all line features of a matched pair; this function is also taken as a reasonable fit of the line feature across the stitching hole. Then the newly constructed line-segment function is extrapolated into the stitching hole, determining the position of the line feature defined by the matched pair. Finally, the extrapolated line features in the hole are fused and filled using the gray, color and brightness values of the corresponding matched line features in the original stitched sub-images. After all matched line feature pairs have been processed in this way, the compensated line feature images in the stitching hole of Fig. 7 are obtained, as shown by T7341, T7342, T7343 and T7344. Comparison with the original image T735 at the stitching hole shows that the hole image compensated by this sub-module remains essentially consistent with the original image.

[0106] In summary, the workflow of the sub-image group seam compensation module is shown in Fig. 6. First, the sub-image line feature extraction sub-module U731 extracts the line features of all sub-images in the coarsely stitched video frame panorama, and the slopes of the extracted line-segment functions of each sub-image form a set. Second, the inter-sub-image line feature matching sub-module U732 matches the line features of all sub-images, obtaining all matched line feature pairs between sub-images. Third, the neighboring-region extrapolation sub-module U733, from the line-segment functions of the matched pairs, reconstructs for each pair a line-segment function satisfying all of its line features and extrapolates the line features of the holes and gaps, thereby determining their positions. Finally, the gray, color and brightness values of the original stitched sub-images are used to fuse and fill the hole line features and images, yielding a video frame panorama without seams or holes and completing the entire workflow of this module.
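The extrapolation idea — fitting one segment function through a matched line-feature pair and sampling it inside the hole — can be sketched as follows (a least-squares line fit; the patent does not fix the fitting method, so this choice is an assumption):

```python
import numpy as np

def extrapolate_across_gap(seg_a, seg_b, gap_xs):
    """Fit one line through two matched segments and sample it inside the gap.

    seg_a, seg_b: (N, 2) point arrays of a matched line-feature pair lying
    on either side of a stitching hole; gap_xs: x positions inside the hole
    where the missing feature points are to be reconstructed.  Returns the
    reconstructed (x, y) points, which are then filled with the gray/color/
    brightness values of the matched features in the original sub-images.
    """
    pts = np.vstack([seg_a, seg_b])
    # Least-squares line y = k*x + b through all points of the matched pair;
    # the same fit is taken as a reasonable model of the feature in the hole.
    k, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
    gap_xs = np.asarray(gap_xs, dtype=float)
    return np.column_stack([gap_xs, k * gap_xs + b])
```

When the two matched segments are samples of the same straight feature, the reconstructed points continue it exactly through the hole.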

[0107] (4) Sub-image group seam fusion module

[0108] From the preceding analysis, to obtain a natural, complete video frame panorama without seams or holes through the sub-image group seam compensation module U73, the differences in gray level, color and brightness between neighboring sub-images, and between the sub-images and the compensated seams and stitching holes, must be eliminated, which requires fusing the gray level, color and brightness among them. In this invention, the sub-image group seam fusion module U74 adopts the method proposed by Szeliski for the fusion operation. This method assumes that a seam or stitching hole is adjacent to m sub-images; the gray, color and brightness values of a point P in the seam or hole are then computed from the values at the points, in the regions of the m sub-images adjacent to the seam or hole, that are nearest to P, by the following formula:

[0109]

Figure CN104506828BD00131

[0110] where g(P) denotes any one of the gray value, color value and brightness value of point P, and g_i(x_i, y_i) denotes the value, corresponding to g(P), at the point of the i-th image nearest to P. The function ξ(x) is a linear weight function determined by the distance between P and the nearest point of the i-th image: the larger the distance, the larger the weight; the weight is 1 at the maximum distance and 0 at the minimum distance. Performing this fusion operation pixel by pixel over the seams and stitching holes yields a natural, complete video frame panorama without seams or holes, realizing the function of the sub-image group seam fusion module U74.
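The fusion formula can be sketched per pixel as follows. The linear weight follows the text (0 at the minimum distance, 1 at the maximum); normalizing the weights so the result stays within the range of the inputs is the author's assumption:

```python
import numpy as np

def blend_point(values, distances):
    """Linearly weighted fusion of one seam pixel from m neighbouring sub-images.

    values: the gray/color/brightness value g_i taken at the point of
    sub-image i nearest to P; distances: that point's distance to P.
    The weight is linear in the distance (0 at the minimum, 1 at the
    maximum, as stated in the text); weights are normalized here so the
    blended value stays within the range of the inputs.
    """
    d = np.asarray(distances, dtype=np.float64)
    v = np.asarray(values, dtype=np.float64)
    if np.ptp(d) == 0:          # all nearest points equally far: plain mean
        return float(v.mean())
    w = (d - d.min()) / (d.max() - d.min())
    return float((w * v).sum() / w.sum())
```

The same routine is applied independently to the gray, color and brightness channels of every pixel in the seam and hole regions.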

[0111] Having described the working principles of each functional sub-module of the GPGPU video frame localization, stitching, compensation and fusion unit, the workflow of unit U7 can be described with reference to Fig. 5. First, the top-view sub-image groups output by the multi-channel video frame side-view-to-top-view unit U6 serve as the original image groups for panoramic stitching; under the processing of the sub-image localization sub-module U71, the region and size of each stitched sub-image in the panorama are known in advance. Second, in the sub-image group fixed-point fixed-orientation coarse panorama stitching sub-module U72, the sub-image localization information from U71 is used to coarsely stitch the panorama, yielding a video frame panorama with seams and stitching holes. Third, this panorama is sent to the sub-image group seam compensation sub-module U73 for compensation of the seams and stitching holes. Finally, the sub-image group seam fusion sub-module U74 performs the fusion of the seams and holes, producing a natural, complete video frame panorama without seams or holes and completing the entire workflow of unit U7.

[0112] Real-time panoramic video stream generation unit

[0113] As the output unit of the invention, shown in Fig. 2, the real-time panoramic video stream generation unit U9 is likewise a video stream generation software system running on a high-performance NVIDIA GPU hardware platform. In this unit, a multi-thread scheduling mechanism and the CUDA parallel computing architecture are used to assemble the natural, complete, seam-free and hole-free video frame panoramas (panoramic video frames) obtained by unit U7 into a video stream, in chronological order, at 24 frames per second. Meanwhile, a simple video compression algorithm compresses the video stream into a common video format for storage, completing the entire workflow of this unit.

[0114] The beneficial effects of the invention are as follows:

[0115] The invention provides a real-time video stitching method that stitches images of the same scene captured from different viewing angles and directions, with no effective overlap and with variable structure, and then generates a video stream in real time. Based on the installation parameters and performance parameters of the capture devices and the linear features of the captured images, the method establishes a localization model, a transformation model and a compensation-fusion model for the images. Through these efficient image processing models, the stitching method of the invention not only improves image stitching accuracy but also guarantees stitching efficiency, satisfying the real-time requirements of video stream stitching.

BRIEF DESCRIPTION OF THE DRAWINGS

[0116] To illustrate the technical solutions in the embodiments of the invention or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without creative effort.

[0117] Fig. 1 shows a fixed-point fixed-orientation real-time video stitching method without effective overlap and with variable structure according to the invention;

[0118] Fig. 2 is a schematic structural diagram of an apparatus performing video stitching with the stitching method provided by an embodiment of the invention;

[0119] Fig. 3 is a schematic diagram of the black-and-white checkerboard side-view-to-top-view transformation provided by an embodiment of the invention;

[0120] Fig. 4 is a schematic diagram of the sub-image localization module provided by an embodiment of the invention;

[0121] Fig. 5 is a structural block diagram of the GPGPU video frame localization, stitching, compensation and fusion unit provided by an embodiment of the invention;

[0122] Fig. 6 is a structural block diagram of the sub-image group seam compensation module provided by an embodiment of the invention;

[0123] Fig. 7 is a schematic diagram of the principle of the sub-image group seam compensation module provided by an embodiment of the invention;

[0124] Fig. 8 is a flowchart of a fixed-point fixed-orientation real-time video stitching method without effective overlap and with variable structure according to an embodiment of the invention.

DETAILED DESCRIPTION OF THE EMBODIMENTS

[0125] The invention is described in further detail below with reference to the drawings and embodiments. The following embodiments serve to illustrate the invention but are not intended to limit its scope.

[0126] With reference to Fig. 8, a specific implementation of the invention is further described. The invention was applied to a domestic 2650 m3 blast furnace: three cameras capturing side views of the burden surface were installed from different directions of the furnace, to image the circular burden surface 8.2 m in diameter. Because of the lightless, high-temperature and dusty environment inside the blast furnace, the entire burden surface cannot be captured with a single camera, and each camera can only be installed at a fixed point with a fixed orientation. This satisfies the preconditions for using the invention: three cameras capture images of the same burden surface at different view angles and from different directions, with no effective overlap regions between the captured images that could be used for image stitching, and even small gaps and holes between the images. First, the necessary equipment of the video stitching apparatus is installed according to Fig. 2; after installation, video stitching of the captured video information starts according to the flow of Fig. 1, with the following working steps:

[0127] 1. Using the multi-camera installation parameters S61, calibrate the camera imaging parameters S62 of the three installed burden-surface cameras, and on this basis build the side-view-to-top-view geometric transformation model S63, for use by the sub-image group side-view-to-top-view step S6;

[0128] 2. Using the fixed-point fixed-orientation parameters of the three cameras and the plane S711 in which the photographed object lies, build the localization model S712 of the stitched sub-images in the panorama, and use it to determine the positions of the sub-images in the panoramic image S713, the positions of the stitching holes and breaks S714, the overlap regions of sub-image stitching S715, and the adjacency relations S716 between the sub-images and between the sub-images and the holes and breaks, to facilitate the subsequent stitching;

[0129] 3. The three-camera array installed on the furnace top captures video of different positions of the burden surface, forming the multi-camera video sequence S4; through synchronous segmentation of the video streams S51 by the video stream synchronous segmentation unit, the i-th frame sub-image group to be stitched S52, corresponding to the i-th instant, is obtained;

[0130] 4. Using the image geometric transformation model S63 built in step 1, transform the i-th frame sub-image group to be stitched S52 from side views into top views;

[0131] 5. Using the positions of the sub-images in the panoramic image S713 determined in step 2, perform fixed-point fixed-orientation coarse panorama stitching S721 on the sub-image group transformed into top views;

[0132] 6. From the coarse panorama mosaic obtained in step 5, by seam judgment S722, classify the seam regions into three cases: seams with overlap regions, seamless seams without holes or overlap, and seams with holes or cracks;

[0133] 7. For the seams with overlap regions determined in step 6, use the specific positions of the sub-image stitching overlap regions S715 obtained in step 2 and apply the conventional brightness-and-color similarity matching method for direct stitching and fusion S741;

[0134] 8. For the seamless seams without holes or overlap determined in step 6, stitch using brightness and color fusion at the seam S742;

[0135] 9. For the seams with holes or cracks determined in step 6: first, use the adjacency relations S716 between the sub-images and between the sub-images and the holes and breaks determined in step 2 to identify the sub-images adjacent to the stitching holes and seams S731; second, extract and match line features in the adjacent regions of the sub-images S732, and on this basis obtain the line features of the seams and holes by extrapolating boundary points from the neighboring-region line features S733, thereby compensating the line features of the holes and seams S734; finally, apply color and brightness interpolation fusion S735 to the holes and the remaining parts;

[0136] 10. Through steps 7, 8 and 9, the complete panorama of the i-th frame at the i-th instant is obtained; meanwhile, return to step 3 to perform panorama stitching on the sub-image group captured by the three cameras at instant i+1. Repeating this cycle yields the image sequence of the panoramic burden surface of the whole blast furnace distributed over time; the acquired image sequence of the panoramic burden surface is then synthesized into a real-time video stream, thereby obtaining real-time video information of the panoramic burden surface of the blast furnace.

[0137] The above embodiments are merely illustrative of the invention and do not limit it. Although the invention has been described in detail with reference to the embodiments, those of ordinary skill in the art should understand that various combinations, modifications or equivalent substitutions of the technical solutions of the invention that do not depart from the spirit and scope of those technical solutions shall all be covered by the scope of the claims of the invention.

Claims (10)

1. A fixed-point fixed-orientation real-time video stitching method for videos without effective overlap and with variable structure, characterized in that the method comprises the following steps:
Step 1: installing a multi-camera video capture array, capturing video stream information at different positions respectively, and performing analog-to-digital conversion, synchronization and compression on the video stream information;
Step 2: converting the compressed video stream information into the same video format and dividing it chronologically into a plurality of first static video frame groups, wherein each of the first static video frame groups comprises the n channels of video stream information captured at the same instant by the multi-camera video capture array;
Step 3: converting the static image corresponding to each channel of video stream information in the first static video frame group into a top view of the photographed object according to a side-view-to-top-view geometric model, forming a second static video frame group;
Step 4: locating each video frame in the second static video frame group according to a localization model, and performing coarse panorama stitching to obtain a coarse panorama mosaic;
Step 5: determining, according to the localization model, the positions in the coarse panorama mosaic of the seams with overlap regions, of the seamless seams without holes or overlap regions, and of the seams with holes or cracks;
Step 6: for the seams with overlap regions or the seamless seams without holes or overlap regions, fusing and stitching the seams using a brightness and color interpolation algorithm;
Step 7: for the seams with holes or cracks, the stitching process is as follows:
determining, according to the localization model, the adjacency relations between the sub-images corresponding to each video frame in the second static video frame group and between the sub-images and the holes or cracks, and determining the hole or crack sub-images from the adjacency relations;
extracting the line features of the sub-images adjacent to the hole or crack sub-images;
matching the extracted line features to obtain line feature pairs;
compensating the cracks or holes by extrapolating boundary points from the line features of the neighboring regions;
fusing and stitching the seams of the hole-or-crack-compensated coarse panorama mosaic using the brightness and color interpolation algorithm, obtaining the panoramic video frames after fine panorama stitching;
Step 8: processing the first static video frame groups of different instants according to Steps 3 to 7 to obtain panoramic video frames at different instants, and synthesizing the panoramic video frames in chronological order to obtain a real-time panoramic video stream.
  2. The method according to claim 1, characterized in that Step 2 is performed after a synchronized video segmentation instruction is received, and after Step 2 completes, the first static video frames are stored in chronological order.
  3. The method according to claim 2, characterized in that the side-view-to-top-view geometric model in Step 3 is:
    Figure CN104506828BC00021
    where s is a scale factor, fx and fy are the focal lengths of the camera, Cx and Cy are image rectification parameters, Rx, Ry and Rz are the three column vectors of the rotation matrix, t is the translation vector, (x, y, z) are the coordinates of an element in the side view of the static image, and (X, Y, Z) are the top-view coordinates of the corresponding element.
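The model above is the standard pinhole-camera projection (intrinsics times extrinsics). A minimal numpy sketch follows; the calibration values `K`, `R` and `t` below are illustrative assumptions for demonstration, not the patent's calibration:

```python
import numpy as np

def side_to_top(pt_side, K, R, t, s=1.0):
    """Project a side-view point (x, y, z) to top-view image coordinates.

    K: 3x3 intrinsics [[fx, 0, Cx], [0, fy, Cy], [0, 0, 1]]
    R: 3x3 rotation whose columns are Rx, Ry, Rz; t: translation vector.
    Returns the projected 2-D coordinates after homogeneous division.
    """
    p = K @ (R @ np.asarray(pt_side, dtype=float) + t)
    return p[:2] / (s * p[2])

# Illustrative calibration: 800-pixel focal length, 640x480 principal point,
# identity rotation, camera 5 units from the reference plane.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
print(side_to_top([1.0, 0.5, 0.0], K, R, t))  # -> [480. 320.]
```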
  4. The method according to claim 3, characterized in that the positioning model in Steps 4 and 5 is:
    Figure CN104506828BC00031
    where (x0, y0, z0) are the coordinates of the camera lens center, (x1, y1, z1) are the coordinates of the intersection of the photographed object with the imaging plane xoy, (α, β, γ) are the direction angles of the generatrix of the cone corresponding to the camera's field of view, (x2, y2, z2) are the coordinates of the intersection of the latitude circle of the camera's field of view with the cone generatrix, and (x, y, z) are the coordinates of the intersection of the camera's field of view with the imaging plane xoy.
  5. The method according to claim 4, characterized in that the coarse panorama stitching in Step 4 is specifically:
    first, generating a blank image equal in size to the panoramic field of view of the photographed object;
    second, positioning the sub-image corresponding to each video frame in the second static video frame group using the positioning model, to determine the position, size and orientation of each sub-image within the blank image;
    third, filling the sub-images one by one into the corresponding places of the blank image according to the predetermined label order of each camera in the multi-camera array and the positioning information of the sub-images it captured, thereby achieving coarse stitching of the panorama.
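The canvas-filling step of claim 5 can be sketched as follows; the placement coordinates are assumed to come from the positioning model, which is not reproduced here:

```python
import numpy as np

def coarse_stitch(subimages, placements, canvas_shape):
    """Paste each sub-image into a blank panorama canvas.

    subimages: list of 2-D grayscale arrays; placements: list of
    (row, col) top-left corners, assumed precomputed by the positioning
    model.  Pixels left unfilled stay 0, which is exactly what later
    shows up as the holes or cracks handled in Step 7.
    """
    canvas = np.zeros(canvas_shape, dtype=np.uint8)
    for img, (r, c) in zip(subimages, placements):
        h, w = img.shape
        canvas[r:r + h, c:c + w] = img
    return canvas

tiles = [np.full((2, 2), 100, np.uint8), np.full((2, 2), 200, np.uint8)]
pano = coarse_stitch(tiles, [(0, 0), (0, 3)], (2, 5))
print(pano)  # column 2 stays 0: a "crack" between the two tiles
```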
  6. The method according to claim 5, characterized in that, in Step 7, extracting line features from the sub-images adjacent to a hole or crack sub-image is specifically:
    assuming C(x, y) is the center pixel, and letting L(x, y) and R(x, y) be the mean gray values of the left and right neighboring regions of point C(x, y) along a given direction, the ratio-of-averages estimate is given by Equation (3):
    Figure CN104506828BC00032
    (3) Then RoA_C(x, y) is compared with a predetermined threshold T0; when RoA_C(x, y) is greater than T0, point C is regarded as a boundary point. The line feature segments extracted by the above algorithm from the sub-images adjacent to the hole or crack sub-image are then reorganized into line features.
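The ratio-of-averages boundary test of claim 6 can be sketched as below. This scans a single (horizontal) direction with an assumed window half-width and threshold; the patent applies the test along several directions:

```python
import numpy as np

def roa_boundary(gray, half=2, T0=1.5):
    """Ratio-of-averages edge test along the horizontal direction.

    For each pixel C, L and R are the mean gray values of the `half`
    pixels to its left and right; RoA = max(L/R, R/L).  Pixels whose
    RoA exceeds T0 are marked as boundary points.
    """
    g = gray.astype(float)
    h, w = g.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(half, w - half):
            L = g[y, x - half:x].mean()
            R = g[y, x + 1:x + 1 + half].mean()
            if min(L, R) > 0:                  # avoid division by zero
                mask[y, x] = max(L / R, R / L) > T0
    return mask

# A step edge from gray 50 to gray 200 between columns 4 and 5:
img = np.hstack([np.full((3, 5), 50), np.full((3, 5), 200)]).astype(np.uint8)
edges = roa_boundary(img)
print(edges[1])  # True around the step edge, False in flat regions
```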
  7. The method according to claim 6, characterized in that, in Step 7, matching the extracted line features is specifically:
    each line feature is described by a corresponding line-segment function; assuming there are n sub-images surrounding the hole or crack, first, the slopes of the line-segment functions extracted from each sub-image form the set I, expressed by Equation (4) as follows:
    Figure CN104506828BC00033
    where m, n and l each denote the total number of line features extracted from the corresponding sub-image; line feature matching between sub-images is achieved using Equation (5) below:
    Figure CN104506828BC00034
    (5) where
    Figure CN104506828BC00035
    are arbitrary elements of the set I, and T1 is the matching threshold; a line feature pair satisfying Equation (5) is matched successfully.
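The slope-based pairing of claim 7 can be sketched as follows. This is a simplified reading of Equation (5): two features across neighboring sub-images match when their slopes differ by less than the threshold T1 (the threshold value used here is an assumption):

```python
def match_slopes(slopes_a, slopes_b, T1=0.1):
    """Pair line features across two sub-images by slope proximity.

    slopes_a / slopes_b: slopes of the line-segment functions extracted
    from two sub-images bordering the same hole or crack (the set I of
    Equation (4)).  Each feature in b is paired at most once.
    """
    pairs = []
    used = set()
    for i, ka in enumerate(slopes_a):
        for j, kb in enumerate(slopes_b):
            if j not in used and abs(ka - kb) < T1:
                pairs.append((i, j))
                used.add(j)
                break
    return pairs

print(match_slopes([0.52, -1.0, 2.0], [2.04, 0.49]))  # -> [(0, 1), (2, 0)]
```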
  8. The method according to claim 7, characterized in that, in Step 7, extrapolating boundary points from the line features of adjacent regions to compensate for the cracks or holes is specifically:
    first, from the first line-segment functions corresponding to the matched line feature pairs, constructing a second line-segment function that satisfies all the line features in the corresponding feature pair, the second line-segment function being regarded as a reasonable fit of the line feature across the hole or crack;
    then, extrapolating the second line-segment function into the hole or crack, thereby determining the position of the line feature defined by the matched line feature pair;
    finally, for the extrapolated line features of the hole or crack, fusing the seams using the color and luminance interpolation algorithm, based on the color and luminance of the corresponding matched line features in the sub-images adjacent to the hole or crack sub-image.
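Constructing the "second line-segment function" of claim 8 amounts to fitting one line through a matched pair's samples on both sides of the gap and evaluating it inside the gap. A minimal least-squares sketch (the sample points are illustrative assumptions):

```python
import numpy as np

def extrapolate_line(pts_left, pts_right):
    """Fit one line through a matched feature pair's sample points.

    pts_left / pts_right: (x, y) samples of the same physical edge as
    seen in the sub-images on either side of a crack.  The returned
    slope k and intercept b define the bridging line y = k*x + b.
    """
    pts = np.vstack([pts_left, pts_right]).astype(float)
    k, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
    return k, b

# Both sides lie on y = 2x + 1; the fit recovers that line exactly.
k, b = extrapolate_line([(0, 1), (1, 3)], [(4, 9), (5, 11)])
print(k * 2.5 + b)  # predicted y at a crack position x = 2.5 -> 6.0
```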
  9. The method according to claim 8, characterized in that, in Steps 6 and 7, the fusion stitching using the color and luminance interpolation algorithm is specifically:
    assuming there are m sub-images adjacent to the hole or crack sub-image, the gray, color and luminance values of a point P in the crack or hole are computed, via Equation (6), from the gray, color and luminance values of the point in each of the m sub-images closest to P:
    Figure CN104506828BC00041
    (6) where g(P) denotes any one of the gray value, color value and luminance value of point P, gi(xi, yi) denotes the value, corresponding to g(P), at the point of the i-th sub-image closest to P, and the function ξ(x) is a linear weight function; the complete panoramic video frame is obtained by performing the above fusion operation pixel by pixel over the crack or hole.
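Equation (6) is a distance-weighted blend. A minimal sketch, assuming the linear weight function ξ is inverse distance normalized to sum to 1 (the patent does not fix a specific ξ):

```python
import numpy as np

def fuse_point(vals, dists):
    """Distance-weighted fusion of a hole/crack pixel (Equation (6) sketch).

    vals: g_i values (gray, color, or luminance) of the nearest point in
    each adjacent sub-image; dists: distances from P to those points.
    The weight of each sub-image is 1/d_i, normalized to sum to 1, so
    closer sub-images dominate the blended value.
    """
    w = 1.0 / np.asarray(dists, dtype=float)
    w /= w.sum()
    return float(np.dot(w, vals))

print(fuse_point([100, 200], [1.0, 1.0]))  # equidistant -> 150.0
print(fuse_point([100, 200], [1.0, 3.0]))  # closer to the 100 side -> 125.0
```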
  10. The method according to claim 9, characterized in that, in Step 8, the panoramic video stream is compressed, stored and displayed.
CN 201510016447 2015-01-13 2015-01-13 A non-overlapping variable structure effective real time video splicing method of site-directed orientation CN104506828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201510016447 CN104506828B (en) 2015-01-13 2015-01-13 A non-overlapping variable structure effective real time video splicing method of site-directed orientation


Publications (2)

Publication Number Publication Date
CN104506828A true CN104506828A (en) 2015-04-08
CN104506828B true CN104506828B (en) 2017-10-17

Family

ID=52948542

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201510016447 CN104506828B (en) 2015-01-13 2015-01-13 A non-overlapping variable structure effective real time video splicing method of site-directed orientation

Country Status (1)

Country Link
CN (1) CN104506828B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008112776A2 (en) * 2007-03-12 2008-09-18 Conversion Works, Inc. Systems and methods for filling occluded information for 2-d to 3-d conversion
CN101479765A (en) * 2006-06-23 2009-07-08 图象公司 Methods and systems for converting 2D motion pictures for stereoscopic 3D exhibition
WO2011121117A1 (en) * 2010-04-02 2011-10-06 Imec Virtual camera system
CN103763479A (en) * 2013-12-31 2014-04-30 深圳英飞拓科技股份有限公司 Splicing device for real-time high speed high definition panoramic video and method thereof
CN103985254A (en) * 2014-05-29 2014-08-13 四川川大智胜软件股份有限公司 Multi-view video fusion and traffic parameter collecting method for large-scale scene traffic monitoring


Also Published As

Publication number Publication date Type
CN104506828A (en) 2015-04-08 application


Legal Events

Date Code Title Description
C06 Publication
C10 Entry into substantive examination
GR01