CN104021535A - Method for splicing stepping and framing CCD image - Google Patents


Info

Publication number
CN104021535A
Authority
CN
China
Legal status
Granted
Application number
CN201410256944.5A
Other languages
Chinese (zh)
Other versions
CN104021535B (en)
Inventor
尤红建
Current Assignee
Jigang Defense Technology Co ltd
Aerospace Information Research Institute of CAS
Original Assignee
Institute of Electronics of CAS
Application filed by Institute of Electronics of CAS
Priority to CN201410256944.5A
Publication of CN104021535A
Application granted
Publication of CN104021535B
Legal status: Active

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a method for stitching stepping and framing CCD images. The method comprises: Step A: within the overlap region of two adjacent images, using the scale-invariant feature transform operator to extract the two most widely separated corresponding feature points in the images, corresponding feature point P1 and corresponding feature point P2; Step B: from the coordinates of the two extracted corresponding feature points, calculating the four parameters of the elastic transformation model between the two adjacent images; Step C: transforming the coordinates of every point of the current image according to the elastic transformation model; and Step D: compositing the transformed current image and the previous image into one image according to coordinate position, to obtain the stitched image. The invention needs only to quickly extract the two most widely separated corresponding feature points in the overlap region, and is therefore well suited to the low overlap between stepping and framing images.

Description

Method for Stitching Stepping and Framing CCD Images

Technical Field

The invention relates to the technical field of image processing, and in particular to a method for stitching stepping and framing CCD images.

Background Art

The aerial stepping and framing camera trades time for space: it images at multiple positions perpendicular to the flight track and, on the basis of a guaranteed overlap rate, stitches the images together, thereby achieving wide-field observation of the ground. It has the advantages of small size and light weight and is easily mounted on platforms such as unmanned aerial vehicles. Its basic working principle is as follows: the camera's scanning mechanism moves the field of view to the starting position, where an exposure is made to acquire one image; while the image data are being transferred and stored, the scanning mechanism steps the field of view to the second position and holds it still; once the transfer of the previous image is complete, the field of view at the second position is imaged. After several images have been captured, the scanning mechanism steps the field of view back and, subject to a guaranteed along-track overlap rate, the camera's scanning mechanism moves the field of view to the first field-of-view position of the next cycle, beginning a new round of step-and-stare imaging.

Typical foreign examples include the CA-260, CA-261, CA-265 and CA-270 cameras, as well as the A3 ultra-wide-format aerial digital camera; such stepping and framing cameras have been carried on aircraft including the RF-4, F-16, F-14, SR-71, P-3 and the RQ-4 Global Hawk. A framing camera captures several full-frame images from different points in space and can obtain single images of high geometric fidelity, but this places high demands on the image stitching technique. Because the image sequence, with a certain overlap rate, is acquired at different times, and because of uncertainty in the aircraft's attitude and flight speed, the overlap between the acquired images varies continuously, which increases the complexity and difficulty of processing the regional images; the stitching algorithm must therefore meet high requirements.

At present there are very few published results, at home or abroad, on stitching stepping and framing CCD images; only the similar problem of stitching rocking images has been studied. For example, Gao Yinghui (Gao Yinghui, Shen Zhenkang, "Registration and mosaic algorithm for aerial rocking image groups", Infrared and Laser Engineering, Vol. 38, No. 1, 2009) studied the rocking model of an aerial rocking camera, adopted a correction model combining rotation and perspective deformation, and proposed a registration and stitching method for rocking image groups: the optimal matching region is found by searching with a defined energy function, after which the rocking image group is smoothly stitched.

Most current research addresses the stitching of CCD area-array images acquired by UAVs or of other aerial area-array images. These stitching methods focus on extracting corresponding feature points in the overlap region with a matching algorithm, then computing the spatial transformation between adjacent area-array images and completing the stitching on that basis. For example, Yu Huan et al. (Yu Huan, Kong Bo, "Research on automatic seamless stitching of UAV remote sensing images", Remote Sensing Technology and Application, Vol. 27, No. 3, June 2012) proposed a feature-based control-point registration algorithm combined with a contrast-modulation method of image grayscale fusion to stitch multiple UAV images automatically and seamlessly; it adopts an eight-parameter transformation model and requires at least 10-20 corresponding feature points between adjacent images. Wang Bo et al. (Wang Bo, Gong Zhihui, Jin Keqiang et al., "An improved UAV image stitching method", Computer Engineering and Applications, 2011, Vol. 47, No. 35), addressing the slow speed of traditional SIFT-based image stitching, analyzed the relationship between the homography matrices (perspective transformation models) of the original image and a downscaled image and proposed a method to improve stitching speed.

Existing aerial remote-sensing image stitching techniques, especially for UAV imagery, address the stitching of multiple flight strips and many images. They generally require a high overlap rate, and their transformation models tend to have many parameters: the eight-parameter transformation model, the perspective transformation of a homography matrix, or an affine transformation requiring six parameters. Computing these requires extracting many corresponding feature points from the overlap region, and the computation becomes difficult when the corresponding points are few or poorly distributed. The images to be stitched from a stepping and framing camera, however, are acquired sequentially along the same flight strip and overlap only slightly; a complex transformation model with many parameters cannot be used, and instead a model with few parameters must be built from the very small number of corresponding feature points that the low overlap allows.

Summary of the Invention

(1) Technical Problem to Be Solved

In view of the above technical problems, the present invention provides a method for stitching stepping and framing CCD images, so as to stitch multiple images accurately and quickly.

(2) Technical Solution

The method of the present invention for stitching stepping and framing CCD images comprises: Step A: within the overlap region of two adjacent images, using the scale-invariant feature transform operator to extract the two most widely separated corresponding feature points in the images, corresponding feature point P1 and corresponding feature point P2; Step B: from the coordinates of the two extracted corresponding feature points, calculating the four parameters of the elastic transformation model between the two adjacent images by the formula:

$$\begin{bmatrix} k_1 \\ k_2 \\ d_x \\ d_y \end{bmatrix} = \begin{bmatrix} x_1 & y_1 & 1 & 0 \\ y_1 & -x_1 & 0 & 1 \\ x_2 & y_2 & 1 & 0 \\ y_2 & -x_2 & 0 & 1 \end{bmatrix}^{-1} \cdot \begin{bmatrix} u_1 \\ v_1 \\ u_2 \\ v_2 \end{bmatrix}$$

where $k_1$, $k_2$, $d_x$, $d_y$ are the four parameters of the elastic transformation model, $(x_1, y_1)$ and $(u_1, v_1)$ are respectively the image coordinates of corresponding feature point P1 in the current image and in the previous image, and $(x_2, y_2)$ and $(u_2, v_2)$ are respectively the image coordinates of corresponding feature point P2 in the current image and in the previous image; Step C: transforming the coordinates of every point of the current image according to the elastic transformation model, with the formula:

$$\begin{cases} u = k_1 x + k_2 y + d_x \\ v = -k_2 x + k_1 y + d_y \end{cases}$$

where $(x, y)$ are the original image coordinates of the current image and $(u, v)$ are the transformed image coordinates; and Step D: compositing the transformed current image and the previous image into one image according to coordinate position, to obtain the stitched image.

(3) Beneficial Effects

It can be seen from the above technical solution that the method of the present invention for stitching stepping and framing CCD images has the following beneficial effects:

(1) Exploiting the low overlap rate between adjacent images, the method only needs to quickly extract the two most widely separated corresponding feature points within the overlap region, and is therefore well suited to the low overlap between stepping and framing images. Moreover, because the extracted corresponding feature points are the farthest apart, an accurate solution of the model parameters is ensured.

(2) The transformation model between adjacent images is a four-parameter elastic model accounting for translation, scale and rotation; it has few parameters and strong practicality, and it solves well the problem of geometric transformation between stepping and framing images.
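As a reading aid (this interpretation is ours, not stated in the patent text), the four parameters can be viewed as a similarity transform: writing $k_1 = s\cos\theta$ and $k_2 = s\sin\theta$, the transform of step C takes the form

```latex
\begin{pmatrix} u \\ v \end{pmatrix}
= s \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}
  \begin{pmatrix} x \\ y \end{pmatrix}
+ \begin{pmatrix} d_x \\ d_y \end{pmatrix},
\qquad s = \sqrt{k_1^2 + k_2^2},
\quad \theta = \operatorname{atan2}(k_2, k_1)
```

so translation, rotation and scale are all captured by the four numbers $k_1$, $k_2$, $d_x$, $d_y$, and two point correspondences (four scalar equations) suffice to determine them.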

Brief Description of the Drawings

Fig. 1 is a flow chart of a method for stitching stepping and framing CCD images according to an embodiment of the present invention;

Fig. 2A shows 16 stepping and framing images acquired before stitching;

Fig. 2B shows the result of stitching the stepping and framing images of Fig. 2A with the method of Fig. 1.

Detailed Description

To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings. It should be noted that, in the drawings and the description, similar or identical parts use the same reference numerals. Implementations not shown or described in the drawings are of forms known to those of ordinary skill in the art. In addition, although examples of parameters with specific values may be given herein, the parameters need not exactly equal the corresponding values and may approximate them within acceptable error tolerances or design constraints. Directional terms mentioned in the embodiments, such as "upper", "lower", "front", "rear", "left" and "right", refer only to the directions in the drawings; they are used for illustration and do not limit the scope of protection of the present invention.

The method of the present invention for stitching stepping and framing CCD images uses the two spatially farthest corresponding feature points to calculate the four parameters of the elastic transformation model between two adjacent images, and unifies the spatial coordinates of the two adjacent images through coordinate transformation; on the basis of the unified spatial coordinates, adjacent images are composited together, thereby ensuring the stitching of multiple images.

In an exemplary embodiment of the present invention, a method for fast stitching of stepping and framing CCD images is provided. Fig. 1 is a flow chart of a method for stitching stepping and framing CCD images according to an embodiment of the present invention. As shown in Fig. 1, the method of this embodiment comprises:

Step A: within the overlap region of two adjacent images, using the scale-invariant feature transform (SIFT) operator to extract the two most widely separated corresponding feature points in the images, feature point P1 and feature point P2.

In this step, extracting the corresponding feature points of two adjacent images with the scale-invariant feature transform operator is a method in common use in the field. Broadly, it comprises:

Sub-step A1: for the current image and the previous image respectively, detecting interest points in image scale space with the difference-of-Gaussian operator;

Sub-step A2: for each detected interest point, computing the dominant gradient direction from the gradient information within the interest point's neighborhood window, and building the 128-dimensional feature vector corresponding to the interest point from the gradient histogram;

Sub-step A3: computing one by one the Euclidean distances between the interest-point feature vectors detected in the current image and those detected in the previous image, computing the ratio of the nearest distance to the second-nearest distance, and taking the interest points whose ratio is less than 0.5 as candidate corresponding feature points;

Sub-step A4: using the candidate corresponding feature points thus obtained, computing from their coordinates, one by one, the spatial distance between each feature point and every other feature point, and selecting the two points giving the maximum distance as the final feature point P1 and feature point P2.
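Sub-steps A3 and A4 (ratio test, then farthest-pair selection) can be sketched as follows. This is our own minimal illustration, not part of the patent; the tuple layout of `matches` and the function name are assumptions made for the example.

```python
import numpy as np
from itertools import combinations

def farthest_pair(matches, ratio=0.5):
    """Keep candidate matches passing the nearest/second-nearest
    distance ratio test (< 0.5, as in sub-step A3), then return the
    two matched points farthest apart in the current image (A4).

    `matches` is a list of tuples
    (cur_xy, prev_xy, nearest_dist, second_dist) -- an assumed
    layout for illustration only."""
    # Sub-step A3: Lowe-style ratio test on descriptor distances.
    kept = [(c, p) for c, p, d1, d2 in matches if d1 / d2 < ratio]
    # Sub-step A4: exhaustive pairwise spatial distances; take the max.
    best = max(
        combinations(kept, 2),
        key=lambda pair: np.hypot(pair[0][0][0] - pair[1][0][0],
                                  pair[0][0][1] - pair[1][0][1]),
    )
    (p1_cur, p1_prev), (p2_cur, p2_prev) = best
    return (p1_cur, p1_prev), (p2_cur, p2_prev)
```

The exhaustive pairwise search is quadratic in the number of candidates, which is acceptable here because the low overlap yields few candidate points.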

Step B: from the coordinates of the two extracted corresponding feature points, calculating the four parameters of the elastic transformation model between the two adjacent images by the formula:

$$\begin{bmatrix} k_1 \\ k_2 \\ d_x \\ d_y \end{bmatrix} = \begin{bmatrix} x_1 & y_1 & 1 & 0 \\ y_1 & -x_1 & 0 & 1 \\ x_2 & y_2 & 1 & 0 \\ y_2 & -x_2 & 0 & 1 \end{bmatrix}^{-1} \cdot \begin{bmatrix} u_1 \\ v_1 \\ u_2 \\ v_2 \end{bmatrix} \tag{1}$$

where $k_1$, $k_2$, $d_x$, $d_y$ are the four parameters to be calculated; $(x_1, y_1)$ are the image coordinates of corresponding feature point P1 in the current image and $(u_1, v_1)$ its image coordinates in the previous image; $(x_2, y_2)$ are the image coordinates of corresponding feature point P2 in the current image and $(u_2, v_2)$ its image coordinates in the previous image; $[\,\cdot\,]$ denotes a matrix in the sense of linear algebra, $[\,\cdot\,]\cdot[\,\cdot\,]$ matrix multiplication, and $[\,\cdot\,]^{-1}$ matrix inversion.
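As a minimal numerical sketch (not part of the patent text; the function name and the use of NumPy are our own assumptions), step B amounts to solving a 4x4 linear system built from the two point correspondences, with the matrix rows following the sign convention of the forward transform of step C:

```python
import numpy as np

def solve_elastic_params(p1_cur, p1_prev, p2_cur, p2_prev):
    """Solve for (k1, k2, dx, dy) of the four-parameter elastic model
    from two corresponding points, consistent with the forward
    transform u = k1*x + k2*y + dx, v = -k2*x + k1*y + dy.

    p1_cur=(x1, y1) and p1_prev=(u1, v1) are the coordinates of
    corresponding point P1 in the current and previous image;
    likewise p2_* for point P2."""
    x1, y1 = p1_cur
    x2, y2 = p2_cur
    u1, v1 = p1_prev
    u2, v2 = p2_prev
    A = np.array([
        [x1,  y1, 1.0, 0.0],   # equation for u1
        [y1, -x1, 0.0, 1.0],   # equation for v1
        [x2,  y2, 1.0, 0.0],   # equation for u2
        [y2, -x2, 0.0, 1.0],   # equation for v2
    ])
    b = np.array([u1, v1, u2, v2])
    k1, k2, dx, dy = np.linalg.solve(A, b)
    return k1, k2, dx, dy
```

`np.linalg.solve` is preferred to explicitly inverting the matrix; the system is well conditioned precisely when P1 and P2 are far apart, which is why the method selects the farthest pair.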

Step C: transforming the coordinates of the current image according to the four calculated parameters, with the formula:

$$\begin{cases} u = k_1 x + k_2 y + d_x \\ v = -k_2 x + k_1 y + d_y \end{cases}$$

where $k_1$, $k_2$, $d_x$, $d_y$ are the four transformation parameters obtained in step B, $(x, y)$ are the original image coordinates of the current image, and $(u, v)$ are the transformed coordinates.
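Applied to every pixel, step C is a pair of element-wise operations; a vectorized sketch (ours, not the patent's; names are illustrative) might look like:

```python
import numpy as np

def transform_coords(x, y, params):
    """Step C as a vectorized operation: map current-image
    coordinates (x, y) into the previous image's frame using the
    four-parameter model params = (k1, k2, dx, dy)."""
    k1, k2, dx, dy = params
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    u = k1 * x + k2 * y + dx
    v = -k2 * x + k1 * y + dy
    return u, v
```

In practice an implementation would typically apply the inverse mapping and resample (e.g. bilinearly) to avoid holes in the output grid; that detail is outside the patent text.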

Step D: compositing the transformed current image and the previous image into one image according to coordinate position, giving the stitched image.

This embodiment only describes the stitching of two stepping images. For a sequence of many images, each image in turn is processed against the stitched result in the same way: in each round, the previously stitched image serves as the previous image and the image to be stitched as the current image; after several rounds of stitching, a mosaic of many stepping images is obtained.
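The iterative scheme above can equivalently be expressed by composing the pairwise four-parameter models so that every image maps directly into the first image's frame. This sketch is our own (the function names are illustrative, and composing models rather than re-stitching is an equivalent reformulation, not the patent's wording); it relies on the fact that transforms of the form $u = k_1 x + k_2 y + d_x$, $v = -k_2 x + k_1 y + d_y$ are closed under composition:

```python
def compose(outer, inner):
    """Compose two four-parameter models (k1, k2, dx, dy): apply
    `inner` first, then `outer`. Substituting the inner transform
    into the outer one and collecting terms gives the coefficients
    below, again of the same four-parameter form."""
    k1a, k2a, dxa, dya = outer
    k1b, k2b, dxb, dyb = inner
    return (k1a * k1b - k2a * k2b,
            k1a * k2b + k2a * k1b,
            k1a * dxb + k2a * dyb + dxa,
            -k2a * dxb + k1a * dyb + dya)

def chain_to_first(pairwise):
    """Given models mapping image i+1 into image i's frame, return
    models mapping every image directly into the first image's
    frame -- the iterative scheme in which the stitched mosaic
    plays the role of the previous image each round."""
    total = [(1.0, 0.0, 0.0, 0.0)]  # identity for the first image
    for m in pairwise:
        total.append(compose(total[-1], m))
    return total
```

For example, two successive pure translations of 10 pixels compose into a translation of 20 pixels, and two pure 90-degree terms compose into a 180-degree rotation.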

To verify the effect of this embodiment, Fig. 2A shows 16 stepping and framing images acquired before stitching, and Fig. 2B shows the result after the stitching process. As can be seen from Figs. 2A and 2B, the stitched image has no gaps at the adjacent seams, and the transitions between ground features are natural, accurate and smooth; accurate stitching of adjacent images is achieved.

It should be noted that, since Fig. 2A consists of real images obtained by an aerial stepping and framing camera, they carry some latitude and longitude annotations, part of which also appear in Fig. 2B. These annotations bear little relation to the present invention, are rather blurred, and can be ignored.

The present embodiment has thus been described in detail with reference to the drawings. From the above description, those skilled in the art should have a clear understanding of the method of the present invention for stitching stepping and framing CCD images.

In addition, the above definitions of the elements and methods are not limited to the specific structures, shapes and manners mentioned in the embodiments; those of ordinary skill in the art may simply replace them with well-known alternatives.

In summary, in the method of the present invention for stitching stepping images, extracting only the two spatially farthest corresponding feature points within the overlap region of adjacent images adapts well to the low overlap between stepping and framing images. Using two corresponding feature points to calculate a four-parameter transformation model between adjacent images overcomes the drawback of conventional camera transformation models with many parameters; the model has few parameters and strong practicality, and solves well the problem of geometric transformation between stepping and framing images.

The specific embodiments described above further explain the objects, technical solutions and beneficial effects of the present invention in detail. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (3)

1. A method for stitching stepping and framing CCD images, characterized in that it comprises:
Step A: within the overlap region of two adjacent images, using the scale-invariant feature transform operator to extract the two most widely separated corresponding feature points in the images, corresponding feature point P1 and corresponding feature point P2;
Step B: from the coordinates of the two extracted corresponding feature points, calculating the four parameters of the elastic transformation model between the two adjacent images by the formula:
$$\begin{bmatrix} k_1 \\ k_2 \\ d_x \\ d_y \end{bmatrix} = \begin{bmatrix} x_1 & y_1 & 1 & 0 \\ y_1 & -x_1 & 0 & 1 \\ x_2 & y_2 & 1 & 0 \\ y_2 & -x_2 & 0 & 1 \end{bmatrix}^{-1} \cdot \begin{bmatrix} u_1 \\ v_1 \\ u_2 \\ v_2 \end{bmatrix}$$
wherein $k_1$, $k_2$, $d_x$, $d_y$ are the four parameters of the elastic transformation model, $(x_1, y_1)$ and $(u_1, v_1)$ are respectively the image coordinates of corresponding feature point P1 in the current image and in the previous image, and $(x_2, y_2)$ and $(u_2, v_2)$ are respectively the image coordinates of corresponding feature point P2 in the current image and in the previous image;
Step C: transforming the coordinates of every point of the current image according to the elastic transformation model, with the formula:
$$\begin{cases} u = k_1 x + k_2 y + d_x \\ v = -k_2 x + k_1 y + d_y \end{cases}$$
wherein $(x, y)$ are the original image coordinates of the current image and $(u, v)$ are the transformed image coordinates; and
Step D: compositing the transformed current image and the previous image into one image according to coordinate position, to obtain the stitched image.
2. The method according to claim 1, characterized in that, if images remain to be stitched, after said step D the method further comprises:
taking the stitched image as the previous image and the image to be stitched as the current image, and continuing to perform steps A to D.
3. The method according to claim 1 or 2, characterized in that said step A comprises:
Sub-step A1: for the current image and the previous image respectively, detecting interest points in image scale space with the difference-of-Gaussian operator;
Sub-step A2: for each detected interest point, computing the dominant gradient direction from the gradient information within the interest point's neighborhood window, and building the 128-dimensional feature vector corresponding to the interest point from the gradient histogram;
Sub-step A3: computing one by one the Euclidean distances between the interest-point feature vectors detected in the current image and those detected in the previous image, computing the ratio of the nearest distance to the second-nearest distance, and taking the interest points whose ratio is less than 0.5 as candidate corresponding feature points; and
Sub-step A4: using the candidate corresponding feature points thus obtained, computing from their coordinates, one by one, the spatial distance between each feature point and every other feature point, and selecting the two points giving the maximum distance as the final feature point P1 and feature point P2.
CN201410256944.5A 2014-06-11 2014-06-11 The method of stepping framing ccd image splicing Active CN104021535B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410256944.5A CN104021535B (en) 2014-06-11 2014-06-11 The method of stepping framing ccd image splicing

Publications (2)

Publication Number Publication Date
CN104021535A true CN104021535A (en) 2014-09-03
CN104021535B CN104021535B (en) 2016-09-21

Family

ID=51438274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410256944.5A Active CN104021535B (en) 2014-06-11 2014-06-11 The method of stepping framing ccd image splicing

Country Status (1)

Country Link
CN (1) CN104021535B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107367267A (en) * 2017-07-28 2017-11-21 长光卫星技术有限公司 The method that aerial surveying camera and boat based on the imaging of stepping framing type take the photograph imaging
CN110189392A (en) * 2019-06-21 2019-08-30 重庆大学 An automatic framing method for flow velocity and flow direction map
CN111932655A (en) * 2020-07-28 2020-11-13 中铁第六勘察设计院集团有限公司 Automatic processing method for building railway line information model based on AutoCAD

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN1168047C (en) * 2002-12-19 2004-09-22 上海交通大学 A Nonlinear Registration Method for Remote Sensing Images
CN101710932B (en) * 2009-12-21 2011-06-22 华为终端有限公司 Image stitching method and device
EP2423871B1 (en) * 2010-08-25 2014-06-18 Lakeside Labs GmbH Apparatus and method for generating an overview image of a plurality of images using an accuracy information
CN102497539A (en) * 2011-12-15 2012-06-13 航天科工哈尔滨风华有限公司 Panoramic monitoring system and monitoring method of the same based on improved SIFT feature matching
CN103455992B (en) * 2013-09-11 2015-12-23 中国科学院电子学研究所 The method of splicing aviation multi-element scanning image

Cited By (6)

Publication number Priority date Publication date Assignee Title
CN107367267A (en) * 2017-07-28 2017-11-21 长光卫星技术有限公司 The method that aerial surveying camera and boat based on the imaging of stepping framing type take the photograph imaging
CN107367267B (en) * 2017-07-28 2019-08-16 长光卫星技术有限公司 The method that aerial surveying camera and boat based on the imaging of stepping framing type take the photograph imaging
CN110189392A (en) * 2019-06-21 2019-08-30 重庆大学 An automatic framing method for flow velocity and flow direction map
CN110189392B (en) * 2019-06-21 2023-02-03 重庆大学 A Method for Automatic Framing of Velocity and Direction Mapping
CN111932655A (en) * 2020-07-28 2020-11-13 中铁第六勘察设计院集团有限公司 Automatic processing method for building railway line information model based on AutoCAD
CN111932655B (en) * 2020-07-28 2023-04-25 中铁第六勘察设计院集团有限公司 Automatic processing method for constructing railway line information model based on AutoCAD

Also Published As

Publication number Publication date
CN104021535B (en) 2016-09-21


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210114

Address after: 250101 No.9, Kuangyuan Road, Gongye North Road, Wangsheren street, Licheng District, Jinan City, Shandong Province

Patentee after: Jigang Defense Technology Co.,Ltd.

Address before: 100190 No. 19 West North Fourth Ring Road, Haidian District, Beijing

Patentee before: Aerospace Information Research Institute,Chinese Academy of Sciences

Effective date of registration: 20210114

Address after: 100190 No. 19 West North Fourth Ring Road, Haidian District, Beijing

Patentee after: Aerospace Information Research Institute,Chinese Academy of Sciences

Address before: 100190 No. 19 West North Fourth Ring Road, Haidian District, Beijing

Patentee before: Institute of Electronics, Chinese Academy of Sciences

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Method for Splicing Stepped Frame CCD Images

Effective date of registration: 20230331

Granted publication date: 20160921

Pledgee: Ji'nan rural commercial bank Limited by Share Ltd. high tech branch

Pledgor: Jigang Defense Technology Co.,Ltd.

Registration number: Y2023980036938