CN103824303A - Image perspective distortion adjusting method and device based on position and direction of photographed object - Google Patents

Image perspective distortion adjusting method and device based on position and direction of photographed object Download PDF

Info

Publication number
CN103824303A
CN103824303A CN201410096007.8A CN201410096007A CN103824303A CN 103824303 A CN103824303 A CN 103824303A CN 201410096007 A CN201410096007 A CN 201410096007A CN 103824303 A CN103824303 A CN 103824303A
Authority
CN
China
Prior art keywords
image
point
information
subject
based
Prior art date
Application number
CN201410096007.8A
Other languages
Chinese (zh)
Inventor
赵立新
焉逢运
史嘉
Original Assignee
格科微电子(上海)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 格科微电子(上海)有限公司 filed Critical 格科微电子(上海)有限公司
Priority to CN201410096007.8A priority Critical patent/CN103824303A/en
Publication of CN103824303A publication Critical patent/CN103824303A/en

Links

Abstract

The invention provides an image perspective distortion adjusting method and device based on a position and a direction of a photographed object. The method comprises the following steps: step A, namely photographing the photographed object by using an imaging module through focal length adjustment from the infinity point to the nearest point to capture a plurality of original images; step B, namely carrying out operation on imaging information of the original images to obtain depth information of the photographed object; and step C, namely carrying out perspective distortion processing on the first original image in the original images by the depth information. With the adoption of the method and the device, a plurality of pictures with different photographing angles in a same scene can be obtained by using the imaging module and imaging information is used for carrying out the operation on the original images; the scene depth is obtained by calculation; the distortion processing is carried out on the photographed images based on the obtained scene depth so as to overcome the perspective distortion.

Description

基于被摄物的位置、方向调整图像透视畸变的方法和装置 Method and apparatus based on the subject, directions to adjust the image distortion perspective

技术领域 FIELD

[0001] 本发明涉及图像处理技术,尤其涉及一种基于被摄物的位置、方向调整图像透视畸变的方法和装置。 [0001] The present invention relates to an image processing technique, particularly to a subject based on the position and direction of an image adjustment method and apparatus for perspective distortion.

背景技术 Background technique

[0002] 因为拍摄位置限制,或者是构图需要,常常会遇到无法保持相机传感器与被摄物体平行的情况。 [0002] Since the shooting location restrictions, or composition needs, often encounter can not keep the camera sensor and the situation is parallel to the subject. 如图1所示,以拍摄建筑物为例,由于图像视角关系,不能将整幢建筑都拍摄到。 As shown in FIG 1, to capture the building, for example, since the relationship between the image angle of view can not be picked up all over the building. 为此,就需要调整相机拍摄角度。 To do this, we need to adjust the camera angle. 但如果以图2所示的方式调整相机的拍摄角度,那么由于建筑物下部较近而上部较远,则其各部分相对相机镜头的深度就会发生变化。 However, if the camera is adjusted in the manner shown in FIG. 2 shooting angle, then since the lower and upper structures closer farther, the depth which opposing portions of the camera lens will change. 相应地,在传感器上的成像尺寸也会发生变化,从而产生透视变形。 Accordingly, the image size on the sensor will change, resulting in optical distortion.

[0003] 随着智能手机的普及,视频通话及自拍等各种应用被广泛应用。 [0003] With the popularity of smart phones in various applications, such as video calls and self-timer it is widely used. 但是现有的拍摄技术没有考虑到上述的由于拍摄距离,造成的图像视角问题,使拍摄出来的图片与实际存在一定误差,例如,一般拍摄出来的人脸就会比实际显得肥大,影响了拍摄效果和用户体验。 But the existing technology does not take into account the shooting image viewing angle due to the shooting distance, due to the above, the shooting out of the picture and the actual method is used, for example, is generally taken out of the face will look than the actual hypertrophy, affecting the shot performance and user experience.

发明内容 SUMMARY

[0004] 本发明实施例解决的问题是如何解决图像处理中产生的透视变形。 Example embodiments of the present invention to solve the problem [0004] This is how to solve the perspective distortion generated in the image processing.

[0005] 为解决上述问题,本发明实施例提供一种基于被摄物的位置、方向调整图像透视畸变的方法,包含如下所述步骤:步骤A:通过从无穷远至最近的焦距调节,用成像模组对被摄物拍摄,捕捉到多幅原始图像;步骤B:通过对所述原始图像的成像信息进行运算,得到被摄物的深度信息;步骤C:通过所述的深度信息对所述原始图像中的第一原始图像进行透视畸变处理。 [0005] In order to solve the above problems, embodiments of the present invention provides a subject based on a position, a direction image perspective distortion adjusting method, comprising the following steps: Step A: By adjusting the focal length of infinity to the closest, with the imaging module for shooting a subject, the plurality of original images captured; step B: imaging information by performing the operation on the original image to obtain depth information of the subject; step C: by the depth information of their said original image is an original image of a first perspective distortion process.

[0006] 可选的,当焦平面与所述被摄物不完全为一平面时,所述的步骤B中,还包括:基于所述成像信息的特征的匹配、基于所述成像信息的区域的匹配、或基于成像信息的相位的匹配,在获得相应的中间参数信息后,进一步计算得到所述的深度信息。 [0006] Alternatively, when the focal plane of the subject is not a completely flat, the step B, further comprising: a match based on a characteristic of the imaging information, the imaging information based on the region matching, based on the matching or phase of the imaging information after obtaining the respective intermediate parameter information, further calculates the depth information is obtained.

[0007] 可选的,所述的基于所述成像信息的特征的匹配包括:步骤B1:采用横向双摄像头,捕捉获得的原始图像分别为k和Ικ;步骤B2:特征采集步骤:得到模板内单像素 [0007] Alternatively, the feature-based matching the imaging information includes: Step B1: using a lateral double camera, capturing the original image are obtained and k Ικ; Step B2: characteristic acquisition step: the obtained template single-pixel

的检测值c: Detection value c:

Figure CN103824303AD00061

,检测对模板中的每个像素进行,Kx0- , The detection of the template for each pixel, Kx0-

Yo)是模板中心点灰度值,I(x, y)是模板上其他点灰度值,t是确定相似程度的阈值,X,y为以源图像I左下角为原点的坐标系内的坐标,对属于模板A的点的检测值C求和,得 Yo) is the center point of the template gradation values, I (x, y) is the gradation value of other points on the template, t is a threshold value to determine the degree of similarity, X, y is the lower left corner of the source image I within the coordinate origin coordinate detection value C summing point a belonging to the template to give

到输出的游程和S: The output of the run and S:

Figure CN103824303AD00062

;源图像I的相应点(x0,y0)的特征值R为: ; Corresponding point source image I (x0, y0) of the feature value R is:

Figure CN103824303AD00063

,其中h为几何阈值且h=3Smax/4,其中Smax是游程和S所能取的最大值,对所述两幅原始图像k和Ik作处理,得到特征图分别为扎和Hk ;步骤B3:视差矩阵计算步骤:以扎中待匹配点(Xtl, y0)为中心点来创建一个大小为宽m、高η的矩形窗Q ;在Hk中,沿水平方向偏移量dx在视差范围内取出与待匹配点(Χ(ι,%)相邻同样大小为mXn的另一矩形窗Q' ;将第一特征图扎的矩形窗Q与第二特征图Hk的矩形窗Q'进行比较;则,扎中以待匹配点(¾, y0)为中心点的mXn矩形窗与Hk中对应尺寸水平偏移量 , Where h is the geometric threshold and h = 3Smax / 4, where Smax is the maximum value a run can take and S, the two original image k and Ik as to give the bar respectively and wherein FIG Hk; Step B3: parallax matrix calculation step: to be matched to the tie point (Xtl, y0) to create a size of width m, η rectangular window high Q as the center; in Hk, the parallax dx is removed within the range of the horizontal direction offset point to be matched (Χ (ι,%) adjacent to the same size as the rectangular window mXn another Q '; FIG pierced first feature and the second feature rectangular window Q of the rectangular window of FIG Hk Q' are compared; the , the size of the horizontal bar to be matched to the offset point (¾, y0) is the center point of the window and the mXn rectangle corresponding Hk

dx 的矩形窗的匹配系数为 dx matching rectangular window coefficients is

Figure CN103824303AD00071

i,j是以矩形窗Q为坐标系的坐标;预选设定一个几何阈值k,如果rdx(X(l,yci) <k,即为匹配成功;rdx(X(l,yci)取得最小值时dx的取值,这里用视差矩阵D来记录匹配成功的点的偏移值dx,D (x0, y0) =dx ;在遍历特征图扎后,对视差矩阵D插值,对未匹配成功的特征点以及未成功提取特征点的坐标估值;将视差矩阵D包含的偏移量信息用于计算深度;步骤B4:IL上一点Plt (Xi,Yi» Z1),通过匹配到Ik上一点Pkt (x2,Y2,Z2),计算出空间点PW(XQ,Y0, Z0)的深度Zo;对于源图像k上的任意一点U,y),在成像模块光轴中心基线长b和镜头焦距f情 i, j is the Q of the rectangular window of the coordinate system; a pre-set geometric threshold k, if rdx (X (l, yci) <k, that is, the matching is successful; rdx (X (l, yci) to obtain the minimum value when dx values, where the parallax matrix D successfully matched record point offset values ​​dx, D (x0, y0) = dx; FIG feature after traversing bar, disparity interpolation matrix D, is not successful match of feature point and the feature point is not extracted successfully estimate coordinates; the offset matrix D contains parallax information for calculating the depth; step B4: Plt point on the IL (Xi, Yi »Z1), by matching point to the Pkt Ik (x2, Y2, Z2), to calculate the depth Zo spatial point PW (XQ, Y0, Z0); for an arbitrary point on the source image k U, y), in the imaging module optical axis baseline length b and focal length f of the lens situation

况下,其对应的空间点的深度为: Under conditions, depth of spatial points corresponding to:

Figure CN103824303AD00072

D是包含偏移量信息的所述视差矩阵。 D is a matrix comprising said parallax offset information.

[0008] 可选的,当焦平面与所述被摄物为一平面,并且与所述的成像模块的镜头平行时,通过:公式:1/L' -1/L=1/F'计算出所述被摄物的深度信息,其中:L为物距,L'为像距,F为镜头焦距,所述的深度信息即为物距。 [0008] Alternatively, when the focal plane is subject to a plane, and parallel to the lens of the imaging module by: Equation: 1 / L '-1 / L = 1 / F' is calculated the depth information of the subject matter, where: L is the object distance, L 'is the image distance, F is the lens focal length, the object distance is the depth information.

[0009] 可选的,所述的步骤C中,对于所述Pu=(XpyDZ1),其空间坐标在经传感 [0009] Alternatively, said step C, for the Pu = (XpyDZ1), its spatial coordinates in the sensor by

器为原点的坐标系中为Pw= (x0, y0, Ztl),有: Is the origin of the coordinate system is Pw = (x0, y0, Ztl), are:

Figure CN103824303AD00073

;若旋转θ角, ; When the rotation angle θ,

投影点 Projection point

Figure CN103824303AD00074
Figure CN103824303AD00075

;重新计算Pw,=(x0,,y0,,z0,)在左侧传感器上的理论像点 ; Recalculate Pw, = (x0,, y0,, z0,) the theoretical image point on the left sensor

Figure CN103824303AD00076

;依次计算每一点,得到图像E ; ; Successively calculating each point to obtain an image E;

对图像E作插值,剪裁,得到所述的处理后的第一原始图像图G。 E image for interpolation, trimming, to obtain the first processed original image G. FIG.

[0010] 可选的,所述的旋转角度Θ通过以下任意一种方式得到:加速度传感器、用户指定、图像建模。 [0010] Alternatively, the rotation angle Θ is obtained by any of the following ways: an acceleration sensor, designated by the user, image-based modeling.

[0011] 本发明实施例还提供了一种基于被摄物的位置、方向调整图像透视畸变的装置,包括:拍摄模块,对被摄物进行多个角度的拍摄,捕捉到多幅原始图像;深度计算模块,通过对所述些原始图像的成像信息进行运算,得到被摄物的深度信息;图像处理模块,通过所述的深度信息对所述原始图像中的第一原始图像进行透视畸变处理。 [0011] The present invention further provides a subject based on a position, a direction to adjust the image distortion perspective apparatus comprising: a photographing module for photographing the subject a plurality of angles, the plurality of the captured original image; depth calculation module, by performing some operation on the image information of the original image to obtain depth information of the subject; an image processing module for processing the perspective distortion in the original image of the first original image through the depth information .

[0012] 可选的,当焦平面与所述被摄物不完全为一平面时,所述的深度计算模块可基于所述成像信息的特征的匹配、基于所述成像信息的区域的匹配、或基于成像信息的相位的匹配在获得相应的中间参数信息后,进一步计算得到所述的深度信息。 [0012] Alternatively, when the focal plane of the subject is not a completely flat, the depth calculation module may be based on the feature of the imaging information, the matching region based on the imaging information, or based on the phase matching in obtaining imaging information corresponding intermediate parameter information, further calculates the depth information is obtained. [0013] 可选的,所述深度计算模炔基于所述成像信息的特征的匹配包括:采用横向双摄像头,捕捉获得的原始图像分别为k和Ικ;得到模板内单像素的检测值C:1 I [0013] Alternatively, the depth calculation based on the characteristic of the imaging mode alkynyl matching information comprising: using a lateral dual cameras capturing the original image are obtained and k Ικ; to give the template single-pixel detection value C: 1 I

Figure CN103824303AD00081

检测对模板中的每个像素进行,1。 Detection of the template each pixel, a. y0)是模 y0) modulo

板中心点灰度值,I (X,y)是模板上其他点灰度值,t是确定相似程度的阈值,X, y为以源图像I左下角为原点的坐标系内的坐标,对属于模板A的点的检测值C求和,得 Plate center point of the gradation values, I (X, y) is the gradation value of other points on the template, t is a threshold value to determine the degree of similarity, X, y coordinates in the source image I in the lower left corner as the origin of the coordinate system, on a template belonging to the detected value summing point C to give

到输出的游程和S: The output of the run and S:

Figure CN103824303AD00082

源图像I的相应点(χ。,yQ)的特征值R为: Corresponding points of the source image I (. Χ, yQ) of value R is wherein:

Figure CN103824303AD00083

Λ(%Λ)= 0 S(xy 其中h为几何阈值且h=3Smax/4,其中Smax是游程和、 * Λ (% Λ) = 0 S (xy geometry where h is a threshold value and h = 3Smax / 4, where Smax is run and, *

S所能取的最大值。 The maximum value of S can be taken. 对所述两幅原始图像k和Ik作处理,得到特征图分别为扎和Hk ;视差矩阵计算步骤:¾中待匹配点(¾, Y0)为中心点来创建一个大小为宽m、高η的矩形窗Q ;Ηε中,沿水平方向偏移量dx在视差范围内取出与待匹配点(X(l,y(l)相邻同样大小为mXn的另一矩形窗Q' ;将第一特征图扎的矩形窗Q与第二特征图Hk的矩形窗Q'进行比较;则,扎中以待匹配点(Χ(ι,%)为中心点的mXn矩形窗与Hk中对应尺寸水平偏移量dx的矩形窗的 Two of the original image and the k Ik as to give the bar respectively and wherein FIG Hk; disparity matrix calculation step: ¾ to be the matching point (¾, Y0) for the center to create a size of width m, a high η rectangular window Q; Ηε, the shift amount dx in the horizontal direction within the parallax range taken to be the matching point (X (l, y (l) of the same size adjacent to another rectangular window mXn Q '; the first FIG bar rectangular window feature and the second feature Q rectangular window of FIG Hk Q 'are compared; it, tie the mXn rectangle window and the points to be matched to Hk (Χ (ι,%) corresponding to the center point of the horizontal deflection size shift amount dx of the rectangular window

匹配系数为: Matching coefficient is:

Figure CN103824303AD00084

是以矩形窗 It is a rectangular window

为坐标系的坐标;预选设定一个几何阈值k,如果rdx(X(l,ytl) <k,即为匹配成功。通过视差矩阵D来记录匹配成功的点的偏移值dx,D(x0, y0)=dx ;在遍历特征图扎后,对视差矩阵D插值,对未匹配成功的特征点以及未成功提取特征点的坐标估值;将视差矩阵D包含的偏移量信息用于计算深度。Il上一点Pw(XpypZ1),通过匹配到Ik上一点Pkt(x2, y2, z2),计算出空间点Pw(Xd,yci,Zci)的深度Zci;对于源图像Ilj上的任意一点(X,y),在成像模块光轴中心 Of the coordinate system; a pre-set geometric threshold k, if rdx (X (l, ytl) <k, that is, the parallax successful match matrix D to the successfully matched record point offset values ​​dx, D (x0. , y0) = dx; FIG feature after traversing bar, disparity interpolation matrix D of not successfully matched feature point and a coordinate extracting feature points is not successfully estimate; the offset matrix D contains parallax information for calculating .Il depth point Pw (XpypZ1), by matching the point to Ik Pkt (x2, y2, z2), calculates the depth Zci spatial point Pw (Xd, yci, Zci); for an arbitrary point on the source image Ilj ( X, y), the center of the optical axis in the imaging module

基线长b和镜头焦距f情况下,其对应的空间点的深度为: The baseline length b and focal length f of the lens, the depth of the spatial points corresponding to:

Figure CN103824303AD00085

取D是包含偏 D taken partial comprising

移量信息的所述视差矩阵。 Disparity shift amount information of the matrix.

[0014] 可选的,当焦平面与所述被摄物为一平面,并且与所述的成像模块的镜头平行时,通过:公式:1/L' -1/L=1/F'计算出所述被摄物的深度信息;其中:L为物距,L'为像距,F为镜头焦距,所述的深度信息即为物距。 [0014] Alternatively, when the focal plane is subject to a plane, and parallel to the lens of the imaging module by: Equation: 1 / L '-1 / L = 1 / F' is calculated the depth information of the subject thereof; wherein: L is the object distance, L 'is the image distance, F is the lens focal length, the object distance is the depth information.

[0015] 可选的,所述的步骤C中,对于所述Pu=(XpyDZ1),其空间坐标在经传感 [0015] Alternatively, said step C, for the Pu = (XpyDZ1), its spatial coordinates in the sensor by

器为原点的坐标系中为Pw= (x0, 10, Zt)),有: Is the origin of the coordinate system is Pw = (x0, 10, Zt)), are:

Figure CN103824303AD00086

若旋转Θ 角,投影点pw,=(χ0',y0,,ζ0,),有:《 When the rotation angle Θ, the projection point pw, = (χ0 ', y0,, ζ0,), are: "

Figure CN103824303AD00087
Figure CN103824303AD00088

;重新计算Pw' =(xQ',y0',zQ')在左侧传感器上的理论像点PLT,=(x/ , Y1',z/ ),有 ; Recalculate Pw '= (xQ', y0 ', zQ') PLT theoretical image point on the left side of the sensor, = (x /, Y1 ', z /), there

Figure CN103824303AD00091

;依次计算每一点,得到图像E.zH zO ; Successively calculating each point to obtain an image E.zH zO

对图像E作插值,剪裁,得到所述的处理后的第一原始图像图G。 E image for interpolation, trimming, to obtain the first processed original image G. FIG.

[0016] 可选的,所述的旋转角度Θ通过以下任意一种方式得到:加速度传感器、用户指定、图像建模。 [0016] Alternatively, the rotation angle Θ is obtained by any of the following ways: an acceleration sensor, designated by the user, image-based modeling.

[0017] 与现有技术相比,本发明实施例的技术方案具有以下优点: [0017] Compared with the prior art, the technical solutions of the embodiments of the present invention has the following advantages:

[0018] 使用成像模组取得同一场景不同摄像角度的多张照片,并通过对原始图像的成像信息进行运算,计算得到场景深度,并基于获取的场景深度对拍摄图像作畸变处理,从而克服透视变形。 [0018] The imaging module is made using a different camera angles of the same scene a plurality of pictures and information of the original image by the imaging operation is performed, the depth of a scene is calculated, and based on the acquired scene depth as distortion processing on the captured image, thereby overcoming a perspective deformation.

附图说明 BRIEF DESCRIPTION

[0019] 图1是现有技术中通过成像设备拍摄建筑物的拍摄示意图; [0019] FIG. 1 is a schematic view of a building prior art photographing by photographing the image forming apparatus;

[0020] 图2是现有技术中改变拍摄角度后的拍摄示意图; [0020] FIG. 2 is a prior art photographing after changing the shooting angle schematic;

[0021] 图3是本发明实施例的一种基于被摄物的位置、方向调整图像透视畸变的方法的流程示意图; [0021] FIG. 3 is an embodiment of the present invention is based on the position of the subject, the direction of flow of the method of adjusting the image distortion schematic perspective view;

[0022] 图4是本发明实施例中使用SUSAN检测模板的示意图; [0022] FIG. 4 is a schematic diagram used SUSAN detection template of the present invention;

[0023] 图5是本发明实施例的进行特征匹配的示意图; [0023] FIG. 5 is a schematic diagram of feature matching will be embodiments of the invention;

[0024] 图6是本发明实施例的以横向双摄像头方案为例,对深度信息进行计算的示意图; [0024] FIG. 6 is a transverse dual camera program, for example, the calculated depth information diagram of an embodiment of the present invention;

[0025] 图7是本发明实施例的横向双摄像头成像的水平面投影示意图; [0025] FIG. 7 is a schematic view of a projection plane transverse bis imaging camera according to an embodiment of the present invention;

[0026] 图8是本发明实施例的纵向双摄像头成像的水平面投影的示意图; [0026] FIG. 8 is a schematic horizontal longitudinal dual camera imaging an embodiment of the present invention is projected;

[0027] 图9是本发明实施例的计算对焦映射的示意图; [0027] FIG. 9 illustrates a calculation example of the present invention, the focus mapping of the embodiment;

[0028] 图10是本发明实施例的一种基于被摄物的位置、方向调整图像透视畸变的装置的结构不意图; [0028] FIG. 10 is an embodiment of the present invention is based on the position of the subject, the image perspective distortion reorientation device structure is not intended;

[0029] 图11表示本发明在照相设备中的一种应用形式,其中在照相设备的左右各放置一颗传感器用于测距。 [0029] FIG. 11 shows an application of the present invention is in the form of a photographic apparatus, wherein each camera device is placed around a sensor for ranging.

[0030] 图12表示本发明在照相设备中的另一种应用形式,其中照相设备的一侧为潜望式主传感器,另一侧为微型传感器用于测距; [0030] FIG. 12 shows an alternative form of the present invention is applied in a photographic apparatus, wherein a side of the periscope camera device is the master sensor, the other side is a miniature sensor for ranging;

[0031] 图13表示本发明在照相设备中的另一种应用形式,其中照相设备包括左右对称的潜望式传感器; [0031] FIG. 13 shows another application of the present invention in a photographic apparatus, wherein the periscope camera device comprises a symmetrical sensor;

[0032] 图14表示本发明在摄像设备中的一种应用形式,其中在摄像设备的一侧为主传感器,另一侧为微型传感器用于测距。 [0032] FIG. 14 shows an application of the present invention is in the form of image pickup apparatus, wherein the imaging apparatus based on the side of the sensor, the other side for a micro distance measuring sensors.

具体实施方式 Detailed ways

[0033] 在现有的技术方案中,由于在摄像的过程中,需要调整成像设备的拍摄角度,导致被拍摄物的部分较近而部分较远,从而使其各部分相对相机镜头的深度发生变化,造成透视变形。 [0033] In the prior art embodiment, since the imaging process, it is necessary to adjust the shooting angle of the image forming apparatus, resulting in the portion closer and farther portion of the subject, so that it all part of the camera lens the depth of occurrence changes, resulting in optical distortion.

[0034] 本发明实施例,使用成像模组取得同一场景不同摄像角度的多张照片,并通过对原始图像的成像信息进行运算,计算得到场景深度,从而对图像的透视效果进行改变,防止透视变形。 [0034] Example embodiments of the present invention, the imaging module is made using different camera angles of the same scene a plurality of pictures and information of the original image by the imaging operation is performed, the calculated scene depth, so that the effect on the perspective image is changed, preventing a perspective deformation.

[0035] 为使本发明的上述目的、特征和优点能够更为明显易懂,下面结合附图对本发明的具体实施例做详细的说明。 [0035] For the above-described objects, features and advantages of the present invention can be more fully understood by reading the following detailed description of the drawings Specific embodiments of the present invention binds.

[0036] 本发明实施例提供了一种基于被摄物的位置、方向调整图像透视畸变的方法。 Example embodiments provide a location based on the subject of the [0036] present invention, a method to adjust the direction of image perspective distortion. 参照图3,以下通过具体步骤进行详细说明。 Referring to FIG. 3, described in detail below through specific steps.

[0037] 步骤A,通过从无穷远至最近的焦距调节,用成像模组对被摄物拍摄,捕捉到多幅原始图像。 [0037] Step A, by adjusting the focal length of infinity to the closest of the subject captured with an imaging module to capture the plurality of original images.

[0038] 在具体实施中,上述的成像模组可以是单个的摄像头,也可以是多个摄像头。 [0038] In a particular embodiment, the imaging module described above may be a single camera, multiple cameras may be. 基于单摄像头的多次成像,可与操作者约定多次成像时拍照设备位置的水平或者垂直移动距离,也可与操作者约定多次成像时拍照设备位置的前后移动距离。 Based on the multi-pass single camera, photographing the operator may agree with the horizontal position or the vertical movement device distance imaging times, can also be an agreement with the mobile operator photographing distance in front of the imaging device location when a plurality of times. 基于双摄像头,可以通过特定设备设置预定的水平或者垂直移动距离,应用横向双摄像头方案计算视差矩阵,或通过特定设备设置预定的前后移动距离,应用纵向双摄像头计算视差矩阵。 Based on two cameras, a particular device may be provided by predetermined horizontal or vertical movement distance, a lateral double application program calculates the parallax matrix camera, or set by a predetermined distance of a particular device before and after the movement, the longitudinal dual camera application calculates the parallax matrix. 基于更多摄像头的多次成像,与双摄像头类似,可以是应用多个摄像头计算视差矩阵。 More times based on the imaging camera, the dual cameras Similarly, a plurality of cameras may be applied calculation parallax matrix.

[0039] 步骤B,通过对所述些原始图像的成像信息进行运算,得到被摄物的深度信息。 [0039] Procedure B, by the imaging information calculation some original images to obtain depth information of the subject.

[0040] 在一种具体实施例中,当所述的被摄物为一平面,当所述的被摄物为一平面,并且与所述的成像模块的镜头平行时,通过公式:1/L' _1/L=1/F',计算出所述被摄物的深度信息,所述的深度信息即为焦距。 [0040] In one particular embodiment, when the subject is a flat, when the subject is a flat, and parallel to the lens when the imaging module by the formula: 1 / L '_1 / L = 1 / F', the calculated depth information of the subject matter, the depth information is the focal length. 其中:L为物距,L'为像距,F为镜头焦距。 Wherein: L is the object distance, L 'is the image distance, F is the lens focal length.

[0041] 在另一种具体实施例中,当所述的被摄物不完全为一平面时,即镜头和传感器与被摄物之间不平行时,可以通过区域匹配算法,在获得相应的中间参数信息后,进一步计算得到所述的深度信息。 [0041] In another specific embodiment, when the subject is not a completely flat, i.e. non-parallel between the lens and the sensor and the subject, the algorithm can be matched by area, to obtain the corresponding after the intermediate parameter information, further calculates the depth information is obtained. 具体来说,测距所使用的区域匹配的方法可以包括三类:基于所述成像信息的特征的匹配,基于所述成像信息的区域的匹配和基于所述成像信息的相位的匹配。 In particular, the method of matching the ranging region may be used include three categories: feature-based matching the imaging information, based on matching the imaging region information and the phase matching based on the imaging information.

[0042] 其中,基于特征的匹配使用的匹配基元包含了丰富的统计特性以及算法编程上的灵活性,易于通过硬件实现。 [0042] wherein, based on the matching feature matching primitives used include statistical properties of rich flexibility and programming algorithms, easily realized by hardware. 基于区域的匹配较为适用于室内等具有显著特征的环境,有较大的局限性,需要有其他人工智能方法来辅助。 Region-based matching is more suitable for indoor environment and other significant features, is greatly limited, the need for other methods of artificial intelligence to assist. 基于相位的匹配,由于周期性模式、光滑区域的存在以及遮挡效应等原因会导致视差图产生误差,还需要有其他的方法来进行误差检测和校正,较为复杂。 Based on the phase matching, due to the periodical pattern, the presence of a smooth area shielding effect and the like cause an error disparity map, but also to other methods for error detection and correction, is more complex.

[0043] 在上述的具体实施例中,以基于特征的匹配方法来阐述本发明的一种具体实现方法。 [0043] In the above embodiments, feature-based matching method to be described a specific implementation of the method of the present invention. 其中视差计算包含特征提取和特征匹配。 Wherein the parallax calculation comprises feature extraction and feature matching. 然而应该理解的是,本发明并不限于基于特征的匹配。 However, it should be appreciated that the present invention is not limited to the feature-based matching.

[0044] 步骤BI,首先需要获取源图像I。 [0044] Step BI, one needs to obtain a source image I. 以横向双摄像头为例,通过左右两个传感器获得的源图像分别为k和Ικ。 Laterally dual cameras, for example, the left and right source images obtained by the two sensors, respectively and k Ικ. 在对源图像经过图像增强、滤波、缩放等预处理,进入特征提取。 After extraction of the source image in image enhancement, filtering, scaling preprocessing proceeds characteristics. 特征提取可以通过以下步骤实现: Feature extraction may be achieved by the following steps:

[0045] 步骤Β2,通过特征点检测得到模板内单像素的检测值C。 [0045] Step Β2, the detection value obtained by the one-pixel template feature point detecting C. 所述特征点是图像的极值点,基本上具备平移、旋转、缩放、仿射不变性,比如像素灰度值、角点、边缘、拐点等等。 The feature points are the extreme points of the image, substantially comprising translation, rotation, scaling, affine invariant, such as pixel grayscale values, corners, edges, like an inflection point. 常用的特征点检测方法包括:SUSAN (Smallest Univalue Segment Assimilating Nucleus,最小核同值区)角点提取,Harris 角点提取,SIFT (Scale-1nvariant feature transform,尺度不变特征)提取等等。 The common feature point detection method comprising: SUSAN (Smallest Univalue Segment Assimilating Nucleus, the minimum value of the same core region) corner detection, Harris corner point extraction, SIFT (Scale-1nvariant feature transform, scale invariant feature) extraction and the like. 作为一种示例,这里以SUSAN角点提取为例进行说明: As an example, an example where extracted SUSAN corner to be described:

[0046] 在SUSAN角点提取中,核同值区是相对于模板的核,模板中有一定的区域与它具有相同的灰度。 [0046] In the SUSAN corner detection, the core region is the same value with respect to the core template, the template in a certain region and it has the same gray scale. 如图4所示,使用所述原始图中的37个像素作为检测模板,通过公式 4, using the original image pixels as the detection of a template 37, by the equation

Figure CN103824303AD00111

可以得到检测模板内每个单像素的检测值C。 Each single pixel can be detected within the detection value of the template C. 其中, among them,

Kx0, y0)是模板中心点灰度值,Ι(χ, y)是检测模板上其他点灰度值,t是确定相似程度的阈值,χ,y为以源图像I左下角为原点的坐标系内的坐标。 Kx0, y0) is the center point of the template gradation value, Ι (χ, y) is the gray values ​​of other detection template, t is the determined degree of similarity threshold, χ, y is the lower left corner of the source image I coordinate origin in the coordinate system.

[0047] 计算得到检测模板内各点的检测值后,对检测值C求和,得到输出的游程和S为: [0047] After calculating the detected value of each point obtained in the detection template, the detection value C is summed to obtain the output of the run and S:

Figure CN103824303AD00112

进一步的,根据公式 Further, according to the formula

Figure CN103824303AD00113

算得到原始图像I的相应点(Xtl, y0)的特征值R(Xo,Yo)。 Wherein the original image I obtained count corresponding point (Xtl, y0) value R (Xo, Yo). 其中,h为几何阈值且h=3Smax/4,Sfflax是游程和S所能取的最大值。 Wherein, h is a threshold value and the geometric h = 3Smax / 4, Sfflax and S is the maximum value a run can take.

[0048] 通过上述步骤,分别对两副原始图像k和Ik进行上述的计算处理,得到对应的特征图Hl和Ηκ。 [0048], respectively, two of the original image and the k Ik perform the calculation processing by the above procedure to give the corresponding features of FIG Hl and Ηκ.

[0049] 步骤B3,在此基础之上,需要对获得的特征图做进一步的特征匹配,以获取视差矩阵。 [0049] Step B3, on this basis, characterized in need of FIG obtained further matching is used to obtain the parallax matrix. 特征匹配可以通过如下步骤实现: Feature matching may be achieved by the following steps:

[0050] 如图5所示,以第一特征图扎中待匹配点(X(l,y0)为中心点,创建一个宽为m、高为η的矩形窗Q。在第二特征图Hk中,沿水平方向偏移量dx,在视差范围内取出与待匹配点(X(l,y。)(基准点)相邻,同样大小为mXn的另一矩形窗Q'。将第一特征图凡的矩形窗Q与第二特征图Hk的矩形窗Q'进行比较。如果能够在两个矩形窗中匹配到具有最大相似性的对应点,就可判定为最佳匹配。 [0050] As shown in FIG. 5, a first feature point to be matched in FIG rolling (X (l, y0) as the center point, creating a width m, a height of a rectangular window in the second characteristic η of Q. FIG Hk in the horizontal direction offset amount dx, taken to be adjacent to the matching point (X (l, y.) (reference point) within the parallax range, the same size as the rectangular window mXn another Q '. the first feature FIG rectangular window where Q characteristic and a second rectangular window of FIG Hk Q 'are compared. If the pattern matches the corresponding points having the largest similarity in two rectangular windows, can be determined as the best match.

[0051] 可通过多种方法对两张特征图对应的矩形窗进行匹配。 [0051] may be two matching features corresponding to FIG rectangular window by various methods. 以采用灰度差的平方和算法为例,扎中以待匹配点(¾, y(l)为中心点的mXn矩形窗与仏中对应尺寸水平偏移量dx In grayscale sum of squared differences algorithm as an example, be tied to the matching point (¾, y (l) of the window and the center point of the mXn rectangle corresponding to the size of the horizontal Fo offset dx

的矩形窗Q,的匹配系数为: Rectangular window Q, the matching coefficient is:

Figure CN103824303AD00114

其中,i,j是以矩形窗为坐标系的一点的坐标。 Wherein, i, j is the coordinate system of the rectangular window coordinates of a point.

[0052] 所述坐标系的原点为矩形窗Q和矩形窗Q'的对应处。 [0052] The origin of the coordinate system is a rectangular window and a rectangular window Q Q 'corresponding place. 例如,可以是以矩形窗Q和矩形窗Q'左下角为原点的坐标系内的坐标,也可以是以矩形窗Q和矩形窗Q'右下角为原点的坐标系内的坐标等等。 For example, Q may be a rectangular window and a rectangular window Q 'lower left corner as the origin of coordinates in the coordinate system may be a rectangular window is a rectangular window Q and Q' is the origin of the coordinates within a coordinate system like the lower right corner.

[0053] 将gamma dx (x0, y0)与预先设定一个几何阈值k比较。 [0053] The gamma dx (x0, y0) with a predetermined geometric comparison threshold value k. 当gamma dx (x0, y0)≤k时,即可判定为匹配成功。 When gamma dx (x0, y0) ≤k, can be determined that the matching is successful. 如果rdx(X(l,ytl)能够取得最小值,即为模板完全匹配。通过视差矩阵D来记录匹配成功的点的偏移值dx, D(x0, y0) =dxo If rdx (X (l, ytl) can be made minimum, i.e. exact match template. Successfully matched to record point offset values ​​dx, D (x0, y0) by the matrix D = dxo parallax

[0054] 重复上述的方法,遍历整张特征图扎后与特征图Hk进行匹配比较,并通过对视差矩阵D插值,即对未匹配成功的特征点以及未成功提取特征点的坐标估值,以形成完整的视差矩阵D。 [0054] Repeat the above process, wherein the entire traversal view of the tie match comparison feature FIG Hk, and by disparity interpolation matrix D, that is not successfully matched feature points and a feature point coordinate is not successfully extracted valuations to form a complete matrix parallax D.

[0055] 可以理解的是,上述获取视差矩阵的方法也可以是在Hk中选取矩形框,与扎中对应的矩形框作匹配,并通过进一步遍历Ηκ,计算得到完整的视差矩阵D。 [0055] It will be appreciated that the above method of obtaining a disparity matrix may be selected in a rectangular frame Hk, the corresponding bar rectangular frame for matching, and by further traversing Ηκ, full parallax calculated matrix D. 为了方便起见,下文基于本步骤中对特征图扎的遍历进行说明。 For convenience, hereinafter this step based on the features will be described in FIG traversal bar.

[0056] 步骤B4,基于视差矩阵D中所包含的偏移量信息,可以进一步地计算深度,即景深。 [0056] Step B4, based on the parallax offset information contained in the matrix D can be further calculate a depth, i.e. the depth of field. 在一种可选的具体实施例中,以采用横向双摄像头方案为例,可以通过三角公式计算深度。 In an alternative embodiment, the dual cameras transverse to employ an example embodiment, the depth may be calculated through trigonometric formulas.

[0057] 结合图6-8所示,对于两个平行放置的传感器,左侧传感器获取的源图像L上一点Pu^x1, yl,Z1),通过匹配到空间点点Pw(Xd,y(i,z0)在右侧传感器上的一点Pkt(x2,y2» z2),可以计算出空间点点Pw(Xd,lQ,z0)的深度Zm对于源图像Ilj上的任意一点(χ, y),在已知两传感器光轴中心基线长b和镜头焦距f的情况下,其对应的空间点的深度为: [0057] in conjunction with FIG. 6-8, the sensor placed two parallel, L the left side of the image sensor acquires a source point Pu ^ x1, yl, Z1), by matching the space bit Pw (Xd, y (i , z0) on the right side of the sensor point Pkt (x2, y2 »z2), little space can be calculated Pw (Xd, lQ, z0) for any depth Zm Ilj one o'clock on the source image (χ, y), in two sensor case of the known baseline length b and the center axis focal length f of the lens, the depth of spatial points corresponding to:

Figure CN103824303AD00121

[0059] 其中,D是特征匹配中计算出的包含偏移量信息的视差矩阵。 [0059] where, D is the calculated feature matching matrix including parallax offset information. 经过遍历源图像込,即可得景深的深度矩阵Z。 After traversing the image source includes the postage, you can get the depth of field depth matrix Z.

[0060] 步骤C,通过所述的深度信息对所述些原始图像中的第一原始图像进行透视畸变处理,提高处理的第一原始图像的效果。 [0060] Procedure C, the process for some perspective distortion in the original image of the first original image through the depth information, the effect of improving the processing of the first original image.

[0061] 以测距传感器设置在成像设备左侧为例,对于込上一点Pu=(Xl,Y1, Z1),其空间坐 [0061] In a distance measuring sensor provided at the left side of the image forming apparatus, for example, for the point Pu = (Xl, Y1, Z1) includes the postage on which sit the space

标在经传感器,在原点的坐标系中为Pw= (x0, y。,Z0)。 Marked by the sensor, the coordinate origin for Pw = (x0, y., Z0). 其中 among them

Figure CN103824303AD00122
Figure CN103824303AD00123

.[0063] 以三维空间内一根直线为轴心,对三维空间每一点重映射,从而形成最终的目标图像。 . [0063] In a three-dimensional space as a straight line axis, three-dimensional space remapping each point, to form the final object image. 可以基于成像设备左侧传感器的水平轴线或垂直轴线的旋转。 Horizontal axis or a vertical axis of rotation can be left sensor based on the image forming apparatus. 本发明实施例以基于水平轴线的旋转作为示例,假设旋转角度为9,那么投影点为? In embodiments of the present invention based on a horizontal axis of rotation by way of example, it assumed that the rotation angle is 9, then the projection point? „'=0^',3^',2(1')。由图9可 '' = 0 ^ '^ 3', 2 (1 '). FIG. 9 may be made

Figure CN103824303AD00124

[0064] 重新计算Pw' =(x0',yQ',z0' )在左侧传感器上的理论像点ΡίΤ' =(x/,Y1' , Z1'),得 [0064] recalculated Pw '= (x0', yQ ', z0') ΡίΤ theoretical image point on the left sensor '= (x /, Y1', Z1 '), to give

Figure CN103824303AD00125

[0065] 依次对拍照后的图像内每一点计算得到新图像E,并对新图像E作插值,剪裁,从而得到最终的目标图G。 [0065] sequentially calculated for each point within the image obtained by photographing a new image E, and E for the new image interpolation, trimming, to give the final target G. FIG.

[0066] 上述的旋转角度Θ是如背景技术中所述的,用户为了能够完全拍摄到被摄物,而调整成像设备的拍摄角度。 [0066] The rotation angle Θ is as described in the background art, the user can be completely captured in order to subject, the image forming apparatus to adjust the shooting angle. 由此可见,尽管调整了拍摄角度,但是通过如本发明实施例的空间重定位处理,可以克服由于拍摄角度变化所造成的透视变形问题。 Thus, although the adjustment of the shooting angle, but by the relocation process of the present invention as described in Example space, we can overcome the problems due to the perspective distortion caused by changes in camera angle.

[0067] 在上述实施例中,可以通过多种方法获得旋转角度Θ。 [0067] In the above embodiment, the rotation angle Θ can be obtained by various methods. 例如,可以利用加速度传感器(G-Sensor)获取。 For example, the acceleration sensor (G-Sensor) acquired. 加速度传感器是一种能够测量加速力的电子设备,包括,如能够检测角速度变化的陀螺仪等。 The acceleration sensor is capable of measuring acceleration forces electronic device comprising, as capable of detecting the angular velocity change of a gyroscope. 通过所述加速度传感器,可以自动获取成像设备的转动角度,进而根据本发明的实施例,计算得到相应的理论像点位置。 By the acceleration sensor, it can automatically acquire the rotational angle of the imaging device, according to further embodiments of the present invention, the corresponding calculated theoretical image point position.

[0068] 上述的通过加速度传感器获取旋转角度的实施方式仅仅只是一种实例。 [0068] The acceleration sensor is acquired by the rotation angle of merely one example embodiment. 其他能够获取旋转角度的方法均属于本发明的技术思想。 Other methods can obtain the rotational angle belong to the technical idea of ​​the present invention. 例如,由用户指定旋转角度大小,或者可以通过图像建模计算得到。 For example, the rotation angle of the size specified by the user, or may be calculated by modeling the image.

[0069] 本发明上述实施例中,传感器设置在成像设备左侧仅仅只是一种示例,传感器的设置位置也可以是设置在成像设备的右侧或其他位置,或者成像设备上可以设有多个传感器。 The above embodiments [0069] In the present invention, the sensor is provided only at the left side of the image forming apparatus is only an example, the position sensor is provided may be disposed on the right side of the image forming apparatus or other location, or may be provided with a plurality of the imaging apparatus sensor. 可以理解的是,其他的实施方式均可以依照本发明实施例的技术思想,即基于测距步骤中获得的深度参数,通过空间的重映射,在相应的传感器上匹配得到理论的像点,从而完成对图像透视畸变的调整,因此此处不再一一赘述。 It will be appreciated that other embodiments are technical idea may be implemented in accordance with embodiments of the present invention, i.e., the depth parameter based on the distance obtained in the step, by re-mapping space, matching the theoretical image point obtained on the corresponding sensor, thereby to complete the adjustment of image distortion perspective, and therefore not further described herein.

[0070] 综上,本发明实施例通过景深测距,并基于获取的景深参数完成图像修正,可以补偿因镜头和传感器与被摄物之间不平行而造成的透视变形。 [0070] In summary, the embodiments of the depth of field distance, depth of field and the image is corrected based on the acquired parameter, optical distortion may be compensated by the parallel between the lens and the sensor is not subject to the present invention caused. 其中,景深测距可以是按照拍摄、匹配、深度计算等几个步骤获取场景内各特征点的深度数据(即景深)。 Wherein the depth of field ranging feature points may be acquired in accordance with the scene shot several steps, matching the depth calculation depth data (i.e., depth).

[0071] 本发明实施例还提供了对应的一种基于被摄物的位置、方向调整图像透视畸变的装置,如图10所示,包括:拍摄模块101,对被摄物进行多个角度的拍摄,捕捉到多幅原始图像。 [0071] The present invention further provides a corresponding position based on the subject, image perspective distortion reorientation means 10, comprising: capturing module 101, to subject a plurality of angles shoot to capture pieces of the original image. 深度计算模块102,通过对所述些原始图像的成像信息进行运算,得到被摄物的深度信息。 Depth calculation module 102, imaging information by performing the operation on some of the original image to obtain depth information of the subject. 图像处理模块103,通过所述的深度信息对所述原始图像中的第一原始图像进行透视畸变处理,致使处理的第一原始图像与所述被摄物形状一致,提高第一原始图像的效果。 The image processing module 103, by the depth information of the original image in a first perspective distortion of the original image processing, a first original image so that the process is consistent with the shape of the subject, the effect of improving the first original image .

[0072] 在具体实例中,当焦平面与所述被摄物不完全为一平面时,所述的深度计算模块102可基于所述成像信息的特征的匹配、基于所述成像信息的区域的匹配、基于成像信息的相位的匹配在获得相应的中间参数信息后,进一步计算得到所述的深度信息。 [0072] In a particular example, the focal plane when the subject was not a completely flat, the depth calculation module 102 may match based on a characteristic of the imaging information, the imaging information based on the area of matching, the phase matching of the imaging information after obtaining parameter information corresponding intermediate, is further calculated based on the depth information.

[0073] 在具体实例中,所述深度计算模炔基于所述成像信息的特征的匹配包括:采用横向双摄像头,捕捉获得的原始图像分别为k和Ικ;得到模板内单像素的检测值C: [0073] In a specific example, the depth calculation based on the characteristic of the imaging mode alkynyl matching information comprising: using a lateral dual cameras capturing the original image are obtained and k Ικ; to give the template single-pixel detection value C :

Figure CN103824303AD00131

检测对模板中的每个像素进行,I (xQ,yQ)是模板中心 Detection of the template each pixel, I (xQ, yQ) is the center of the template

点灰度值,I (x,y)是模板上其他点灰度值,t是确定相似程度的阈值,x,y为以源图像I左下角为原点的坐标系内的坐标。 Gray values, I (x, y) is the gradation value of other points on the template, t is the determined degree of similarity threshold, x, y coordinates of the lower left corner of the source image I is the origin of the coordinate system.

[0074] 对属于模板A的点的检测值C求和,得到输出的游程和 [0074] summing the detection value C of the point A belonging to the template, and the resulting output of the run

Figure CN103824303AD00132

[0075] 源图像I的相应点(XQ,y。) 的特征值R为: [0075] The corresponding point source image I (. XQ, y) of the feature value R is:

Figure CN103824303AD00133

其中h为几何阈值且h=3Smax/4,其中Smax是游程和 Where h is the geometric threshold and h = 3Smax / 4, where Smax is run and

S所能取的最大值,对所述两幅原始图像k和Ik作处理,得到特征图分别为扎和仏。 S can take a maximum of two of the original image and the k Ik as to give the bar respectively and wherein Fo FIG.

[0076] 视差矩阵计算步骤:¾中待匹配点(X(l,y0)为中心点来创建一个大小为宽m、高η的矩形窗Q迅中,沿水平方向偏移量dx在视差范围内取出与待匹配点(Xt^ytl)相邻同样大小为mXn的另一矩形窗Q' ;将第一特征图扎的矩形窗Q与第二特征图Hk的矩形窗Q'进行比较。 [0076] The parallax matrix calculation step: ¾ to be the matching point (X (l, y0) to create a size of width m, a high η rectangular window fast as the central point Q, the shift amount dx in the horizontal direction a parallax range taken to be the matching point (Xt ^ ytl) adjacent to the same size as the rectangular window mXn another Q '; FIG pierced first feature and the second feature rectangular window Q of the rectangular window of FIG Hk Q' are compared.

[0077] Hl中以待匹配点(¾, y0)为中心点的mXn矩形窗与Hk中对应尺寸水平偏移量dx的矩形窗的匹配系数为: [0077] Hl matching points to be (¾, y0) matching coefficient of the size of the horizontal shift amount dx mXn rectangular window and a rectangular window corresponding to the center point Hk is:

Figure CN103824303AD00134

是以矩形窗为坐标系的坐标。 It is a rectangular window to the coordinate system. 预选设定一个几何阈值k,如果rdx(X(l,yci) <k,即为匹配成功。通过视差矩阵D来记录匹配成功的点的偏移值dx, D(x0, y^zdx。[0078] 在遍历特征图扎后,对视差矩阵D插值,对未匹配成功的特征点以及未成功提取特征点的坐标估值;将视差矩阵D包含的偏移量信息用于计算深度。 Setting a threshold value preselected geometric k, if rdx (X (l, yci) <k, that is, the match succeeds. Successfully matched to record point offset values ​​dx, D (x0, y ^ zdx parallax D matrix. [ 0078] after traversing FIG bar feature, disparity interpolation matrix D of not successfully matched feature point and a coordinate extracting feature points is not successfully estimate; the offset matrix D contains parallax information used to calculate the depth.

[0079] Il上一点PltU1, y1? Z1),通过匹配到Ik上一点Pkt (x2,Y2» z2),计算出空间点Pw(xQ,y0»z0)的深度Ztl;对于源图像Ilj上的任意一点(x,y),在成像模块光轴中心基线长b和镜头 ? [0079] point PltU1, y1 Z1 on Il), by matching the point Pkt the Ik (x2, Y2 »z2), calculate the spatial point Pw (xQ, y0» z0) depth ZTL; for the source image Ilj of any point (x, y), in the imaging module of the optical axis and a lens center of the baseline length b

焦距f情况下,其对应的空间点的深度为 The focal length f, the spatial points corresponding depth

Figure CN103824303AD00141

其中D是包含偏移量信息的 Where D is the information including the offset

所述视差矩阵。 The disparity matrix.

[0080] 在具体实例中,当焦平面与所述被摄物为一平面,并且与所述的成像模块的镜头平行时,通过:公式:1/L' -1/L=1/F'计算出所述被摄物的深度信息,所述的深度信息即为物距。 [0080] In a specific example, when the subject with the focal plane as a plane, and parallel to the lens of the imaging module by: Equation: 1 / L '-1 / L = 1 / F' calculating the depth information of the subject, the object distance is the depth information. 其中:L为物距,L'为像距,F为镜头焦距。 Wherein: L is the object distance, L 'is the image distance, F is the lens focal length.

[0081] 在具体实例中,所述的步骤C中,对于Ilj上一点Pw=U1, Y1, Z1),,其空间坐标在经 [0081] In a particular embodiment, said step C, for a point Pw = U1, Y1, Z1) on the spatial coordinates Ilj ,, by

左传感器为原点的坐标系中为Pw= (X0, y0, Z0),有: Left sensor coordinate system whose origin is Pw = (X0, y0, Z0), are:

Figure CN103824303AD00142

[0082]若旋转 Θ 角,投影点Pw,=(x0,,y0,,z0,),有, [0082] when the rotation angle Θ, the projection point Pw, = (x0,, y0,, z0,), there is,

Figure CN103824303AD00143

.[0083] 重新计算 . [0083] recalculation

Figure CN103824303AD00144

在左侧传感器上的理论像点ΡίΤ' =(x1',y/,Z1'),有, Theory on the left sensor image point ΡίΤ '= (x1', y /, Z1 '), has,

Figure CN103824303AD00145

[0084] 根据上述的方法,依次对拍照后的图像上每一点进行计算,得到图像E ;对图像E作插值,剪裁,得到所述的处理后的第一原始图像图G。 [0084] According to the above method, for each point sequentially on the photographed image is calculated to obtain an image E; E for the interpolation image, clipping, the original image to obtain a first view of the process of G.

[0085] 在具体实例中,所述的旋转角度Θ通过:加速度传感器、用户指定、图像建模计算获得。 [0085] In a particular example, the rotation angle Θ by: obtaining an acceleration sensor, the user designates an image modeling calculation.

[0086] 可以理解的是,本发明的上述实施例均可以运用于各种成像设备,例如智能手机,照相机或摄像机等。 [0086] It will be appreciated that the above-described embodiments of the present invention can be applied to various imaging devices, such as smart phones and the like, a camera or a video camera. 由于本发明实施例无需使用大尺寸传感器、长焦距大光圈镜头或移轴镜头等装置,因此可以在小型化成像设备中运用,从而大幅减少了成像设备的硬件成本和空间要求。 Since the embodiment of the present invention without using a large sensor size, a large aperture lens or long focal length lens shift and other devices, can be used in the miniaturized image forming apparatus, thus significantly reducing hardware costs and space requirements of the image forming apparatus.

[0087] 具体来说,本发明的具体应用可以包括但不限于以下形式: [0087] In particular, the specific application of the present invention may include but are not limited to, the following forms:

[0088] 1、照相设备(例如:卡片式照相机,手机等) [0088] 1, camera device (e.g.: card type cameras, mobile phones, etc.)

[0089] 图11表示本发明在照相设备中的一种应用形式,其中在照相设备的左右各放置一颗传感器用于测距。 [0089] FIG. 11 shows an application of the present invention is in the form of a photographic apparatus, wherein each camera device is placed around a sensor for ranging.

[0090] 图12表示本发明在照相设备中的另一种应用形式,其中照相设备的一侧为潜望式主传感器,另一侧为微型传感器用于测距。 [0090] FIG. 12 shows an alternative form of the present invention is applied in a photographic apparatus, wherein a side of the periscope camera device is the master sensor, the other side for a micro distance measuring sensors.

[0091] 图13表示本发明在照相设备中的另一种应用形式,其中照相设备包括左右对称的潜望式传感器。 [0091] FIG. 13 shows another application of the present invention in a photographic apparatus, wherein the periscope camera device comprises a sensor symmetrical.

[0092] 2、摄像设备(例如:摄像机等) [0092] 2, the imaging device (example: a video camera)

[0093] 图14表示本发明在摄像设备中的一种应用形式,其中在摄像设备的一侧为主传感器,另一侧为微型传感器用于测距。 [0093] FIG. 14 shows an application of the present invention is in the form of image pickup apparatus, wherein the imaging apparatus based on the side of the sensor, the other side for a micro distance measuring sensors.

[0094] 本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:ROM、RAM、磁盘或光盘等。 [0094] Those of ordinary skill in the art can appreciate that various embodiments of the method of the above-described embodiments all or part of the steps may be relevant hardware instructed by a program, the program may be stored in a computer-readable storage medium, the storage medium It may include: ROM, RAM, magnetic disk, or optical disk.

[0095] 虽然本发明披露如上,但本发明并非限定于此。 [0095] Although the present invention is disclosed as above, but the present invention is not limited thereto. 任何本领域技术人员,在不脱离本发明的精神和范围内,均可作各种更动与修改,因此本发明的保护范围应当以权利要求所限定的范围为准。 Anyone skilled in the art, without departing from the spirit and scope of the present invention, various changes or modifications may be made, and therefore the scope of the present invention reference should be made to the scope defined by the claims.

Claims (12)

1.一种基于被摄物的位置、方向调整图像透视畸变的方法,其特征在于,包含如下所述步骤: 步骤A:通过从无穷远至最近的焦距调节,用成像模组对被摄物拍摄,捕捉到多幅原始图像; 步骤B:通过对所述原始图像的成像信息进行运算,得到被摄物的深度信息; 步骤C:通过所述的深度信息对所述原始图像中的第一原始图像进行透视畸变处理。 1. A method for adjusting an image based on the perspective distortion of the subject, directions, characterized in that it comprises the following steps: Step A: By adjusting the focal length to the nearest infinity, with an imaging module of the subject shooting, the plurality of original images captured; step B: imaging information by performing the operation on the original image to obtain depth information of the subject; step C: through said first depth information for the original image processing original image perspective distortion.
2.根据权利要求1所述的基于被摄物的位置、方向调整图像透视畸变的方法,其特征在于,当焦平面与所述被摄物不完全为一平面时,所述的步骤B中,还包括:基于所述成像信息的特征的匹配、基于所述成像信息的区域的匹配、或基于成像信息的相位的匹配,在获得相应的中间参数信息后,进一步计算得到所述的深度信息。 2. Based on the position of the subject of claim 1, the method of adjusting the image distortion perspective direction, wherein, when the focal plane of the subject is not completely a plane, in said step B further comprising: a feature-based matching the imaging information, based on matching the imaging region information or imaging information phase matching based on the corresponding intermediate is obtained after the parameter information, further calculates the depth information obtained .
3.根据权利要求2所述的基于被摄物的位置、方向调整图像透视畸变的方法,其特征在于,所述的基于所述成像信息的特征的匹配包括: 步骤B1:采用横向双摄像头,捕捉获得的原始图像分别为k和Ik ; 步骤B2:特征采集步骤:得到模板内单像素的检测值C: 3. The position of the subject based on claim 2, the direction adjustment method of the image perspective distortion, wherein, based on the feature information matching said imaging comprising: Step B1: using horizontal dual cameras, capturing the original image are obtained and k Ik; step B2: characteristic acquisition step: the template to give a single detection pixel value C:
Figure CN103824303AC00021
检测对模板中的每个像素进行,I(XQ,Yo)是模板中心点灰度值,I (X,y)是模板上其他点灰度值,t是确定相似程度的阈值,X, y为以源图像I左下角为原点的坐标系内的坐标,对属于模板A的点的检测值C求和,得到输出的游程和S: Detection of the template each pixel, I (XQ, Yo) is the center point of the template gradation values, I (X, y) is the gradation value of other points on the template, t is a threshold value to determine the degree of similarity, X, y in the lower left corner of the source image I is the origin of coordinates in a coordinate system, the detection value C summing point a belonging to the template, and the resulting output of the run S:
Figure CN103824303AC00022
源图像I的相应点(Xo,y0)的特征值R为: Corresponding source image point I (Xo, y0) of the feature value R is:
Figure CN103824303AC00023
其中h为几何阈值且h=3Smax/4,其中Smax是游程和S所能取的最大值,对所述两幅原始图像Il和Ie作处理,得到特征图分别为Hl和He ; 步骤B3:视差矩阵计算步骤:以扎中待匹配点(¾, y0)为中心点来创建一个大小为宽m、高η的矩形窗Q ;在Hk中,沿水平方向偏移量dx在视差范围内取出与待匹配点(X(l,%)相邻同样大小为mXn的另一矩形窗Q';将第一特征图扎的矩形窗Q与第二特征图Hk的矩形窗Q'进行比较; 贝U,扎中以待匹配点(¾, y0)为中心点的mXn矩形窗与Hk中对应尺寸水平偏移量dx的矩形窗的匹配系数为: Where h is the geometric threshold and h = 3Smax / 4, where Smax is the maximum runlength and S can assume, the original two images Il and Ie as to give Hl and wherein FIG respectively of He; Step B3: parallax matrix calculation step: to be matched to the tie point (¾, y0) to create a size of width m, η rectangular window high Q as the center; in Hk, the shift amount dx in the horizontal direction and removed within the parallax range be matched point (X (l,%) adjacent to another of the same size as the rectangular window mXn Q; comparing 'the first feature rectangular window of FIG bar Q characteristic and the second rectangular window of FIG Hk Q'; Pui U , matching coefficient to be matched to tie point (¾, y0) is the center point of the rectangular window and Hk mXn rectangle window size corresponding to the horizontal offset of dx is:
Figure CN103824303AC00024
i,j是以矩形窗为坐标系的坐标; 预选设定一个几何阈值k,如果dx (x0, y0) ( k,即为匹配成功; dx(x0, y0)取得最小值时dx的取值,这里用视差矩阵D来记录匹配成功的点的偏移值dx,D (x0, y0) =dx ; 在遍历特征图扎后,对视差矩阵D插值,对未匹配成功的特征点以及未成功提取特征点的坐标估值; 将视差矩阵D包含的偏移量信息用于计算深度; 步骤B4 A上一点PuU1, y1; Z1),通过匹配到Ik上一点Pkt (x2, j2, z2),计算出空间点PwUtl, J0, z0)的深度Zci ;对于源图像Ilj上的任意一点(X,y),在成像模块光轴中心基线长b和镜头焦距f情况下,其对应的空间点的深度为: i, j is the coordinate system of the rectangular window coordinates; a pre-set geometric threshold k, if dx (x0, y0) (k, that is, the matching is successful; DX when dx (x0, y0) to obtain the minimum value here the parallax matrix D successfully matched record point offset values ​​dx, D (x0, y0) = dx; FIG feature after traversing bar, disparity interpolation matrix D of not successfully matched feature points and unsuccessful extracting a feature point coordinate estimates; offset information contains parallax matrix D for calculating depth; one o'clock PuU1 step B4 a, y1; Z1), by matching the point on Ik Pkt (x2, j2, z2), calculated spatial point PwUtl, J0, z0) depth Zci; for any point on the source image Ilj (X, y), the optical axis of the imaging module in the baseline length b and the center of lens focal length f, the corresponding spatial points depth:
Figure CN103824303AC00031
D是包含偏移量信息的所述视差矩阵。 D is a matrix comprising said parallax offset information.
4.根据权利要求1所述的基于被摄物的位置、方向调整图像透视畸变的方法,其特征在于,当焦平面与所述被摄物为一平面,并且与所述的成像模块的镜头平行时,通过:公式:1/L' -1/L=1/F'计算出所述被摄物的深度信息,其中:L为物距,L'为像距,F为镜头焦距,所述的深度信息即为物距。 The position based on the subject according to claim 1, the method of adjusting the image distortion perspective direction, wherein, when the focal plane is a flat subject, and the imaging lens module when parallel, by: equation: 1 / L '-1 / L = 1 / F' calculated depth information of the subject, wherein: L is the object distance, L 'is the image distance, F is the lens focal length, the is the depth of the object distance information described later.
5.根据权利要求3所述的基于被摄物的位置、方向调整图像透视畸变的方法,其特征在于,所述的步骤C中,对于所述Pu=(Xl,Y1, Z1),其空间坐标在经传感器为原点的坐标系中为PW- (X。,y。,Z(i),: 5. Based on a position of the subject according to claim 3, the direction of the image adjustment method of perspective distortion, wherein, in said step C, for the Pu = (Xl, Y1, Z1), which space coordinates PW- (X., y by the sensor in the origin of coordinates., Z (i) ,:
Figure CN103824303AC00032
若旋转θ角,投影点Pw' = (χ0',yci',Zci'),有: When the rotation angle θ, the projected point Pw '= (χ0', yci ', Zci'), are:
Figure CN103824303AC00033
重新计算Pw' = (X。',yQ',Z0')在左侧传感器上的理论像点ΡίΤ' = (χ/,y/,z/ ),有: Recalculating Pw '= (. X', yQ ', Z0') ΡίΤ theoretical image point on the left sensor '= (χ /, y /, z /), are:
Figure CN103824303AC00034
依次计算每一点,得到图像E ; 对图像E作插值,剪裁,得到所述的处理后的第一原始图像图G。 Successively calculating each point to obtain an image E; E for the interpolation image, clipping, the original image to obtain a first view of the process of G.
6.根据权利要求5所述的基于被摄物的位置、方向调整图像透视畸变的方法,其特征在于,所述的旋转角度Θ通过以下任意一种方式得到:加速度传感器、用户指定、图像建模。 The subject based on a position, a direction image adjustment method according to the perspective distortion 5, wherein said rotation angle Θ is obtained by way of one of the following claims: an acceleration sensor, the user designates an image built mold.
7.一种基于被摄物的位置、方向调整图像透视畸变的装置,其特征在于,包括: 拍摄模块,对被摄物进行多个角度的拍摄,捕捉到多幅原始图像; 深度计算模块,通过对所述些原始图像的成像信息进行运算,得到被摄物的深度信息;图像处理模块,通过所述的深度信息对所述原始图像中的第一原始图像进行透视畸变处理。 A position based on the subject, the image reorientation perspective distortion device, comprising: capturing module, for photographing the subject a plurality of angles, the plurality of the captured original image; depth calculation module, by performing some operation on the image information of the original image to obtain depth information of the subject; an image processing module for processing the perspective distortion in the original image of the first original image through the depth information.
8.根据权利要求7所述的基于被摄物的位置、方向调整图像透视畸变的装置,其特征在于,当焦平面与所述被摄物不完全为一平面时,所述的深度计算模块可基于所述成像信息的特征的匹配、基于所述成像信息的区域的匹配、或基于成像信息的相位的匹配在获得相应的中间参数信息后,进一步计算得到所述的深度信息。 8. The position of the subject based on the claim 7, the adjustment direction of the image perspective distortion means, wherein, when the focal plane of the subject is not a completely flat, the depth calculation module may be based on characteristics of the imaging information matching, matching the depth information based on the imaging region information or imaging information based on the phase matching is obtained in the respective intermediate parameter information, according to a further calculated.
9.根据权利要求8所述的基于被摄物的位置、方向调整图像透视畸变的装置,其特征在于,所述深度计算模炔基于所述成像信息的特征的匹配包括: 采用横向双摄像头,捕捉获得的原始图像分别为k和Ik ; 得到模板内单像素的检测值C: Based on the position of the subject according to claim 8, image perspective distortion reorientation means, wherein said mold depth calculation based on the characteristic information of the alkynyl matching imaging comprising: using a lateral dual cameras, capturing the original image are obtained and k Ik; to give the template single-pixel detection value C:
Figure CN103824303AC00041
检测对模板中的每个像素进行,I (Xtl, y。)是模板中心点灰度值,I (X,y)是模板上其他点灰度值,t是确定相似程度的阈值,X, y为以源图像I左下角为原点的坐标系内的坐标,对属于模板A的点的检测值C求和,得到输出的游程和S: Detection of the template each pixel, I (Xtl, y.) Gray scale value central point is the template, I (X, y) is the gradation value of other points on the template, t is a threshold value to determine the degree of similarity, X, y is the lower left corner of the source image I as the origin of coordinates in the coordinate system, the detection value C summing point a belonging to the template, and the resulting output of the run S:
Figure CN103824303AC00042
源图像I的相应点(Xo,Yo)的特征值R为: Corresponding source image point I (Xo, Yo) is the feature value R:
Figure CN103824303AC00043
其中h为几何阈值且h=3Smax/4,其中Smax是游程和S所能取的最大值, 对所述两幅原始图像k和Ik作处理,得到特征图分别为扎和Hk ; 视差矩阵计算步骤:¾中待匹配点(X(l,y0)为中心点来创建一个大小为宽m、高η的矩形窗Q;HK中,沿水平方向偏移量dx在视差范围内取出与待匹配点(Χ(ι,%)相邻同样大小为mXn的另一矩形窗Q' ;将第一特征图扎的矩形窗Q与第二特征图Hk的矩形窗Q'进行比较; 贝U,扎中以待匹配点(¾, y0)为中心点的mXn矩形窗与Hk中对应尺寸水平偏移量dx的矩形窗的匹配系数为: Where h is the geometric threshold and h = 3Smax / 4, where Smax is the maximum value a run can take and S, the two original image k and Ik as to give the bar respectively and wherein FIG Hk; matrix calculating step parallax : ¾ to be the matching point (X (l, y0) as the center point to create a size of width m, η rectangular window high Q; HK, the offset dx taken to be in the range of parallax in the horizontal direction matching point (Χ (ι,%) adjacent to the same size as the rectangular window mXn another Q '; FIG pierced first feature and the second feature rectangular window Q of the rectangular window of FIG Hk Q' are compared; Pui U, in bar matching coefficients to be matched points (¾, y0) is the center point of the rectangular window and Hk mXn rectangle window size corresponding to the horizontal offset of dx is:
Figure CN103824303AC00044
i,j是以矩形窗为坐标系的坐标; 预选设定一个几何阈值k,如果r dx (x0, y0) ( k,即为匹配成功; 通过视差矩阵D来记录匹配成功的点的偏移值dx, D (x0, y0) =dx ;在遍历特征图扎后,对视差矩阵D插值,对未匹配成功的特征点以及未成功提取特征点的坐标估值; 将视差矩阵D包含的偏移量信息用于计算深度; Il上一点Pu(Xi,Yi» Z1),通过匹配到Ik上一点Pkt (χ2, y2,Z2),计算出空间点Pw(xQ, y0,z0)的深度Ztl ;对于源图像L上的任意一点(X,y),在成像模块光轴中心基线长b和镜头焦距f情况下,其对应的空间点的深度为: i, j is the coordinate system of the rectangular window coordinates; a pre-set geometric threshold k, if r dx (x0, y0) (k, that is, the matching is successful; successfully matched dot is recorded by the parallax shift matrix D value dx, D (x0, y0) = dx; FIG feature after traversing bar, disparity interpolation matrix D of not successfully matched feature points and feature point extraction coordinate is not successfully estimate; partial matrix D contains parallax Ztl depth point Pu (Xi, Yi »Z1) on the Il, by matching the point Pkt (χ2, y2, Z2) on Ik, calculated spatial point Pw (xQ, y0, z0); the shift amount used to calculate the depth information ; for any point on the source image L (X, y), the optical axis of the imaging module in the baseline length b and the center of lens focal length f, the depth of the spatial points corresponding to:
Figure CN103824303AC00051
.D是包含偏移量信息的所述视差矩阵。 .D matrix comprising said parallax offset information.
10.根据权利要求7所述的基于被摄物的位置、方向调整图像透视畸变的装置,其特征在于,当焦平面与所述被摄物为一平面,并且与所述的成像模块的镜头平行时,通过:公式:1/L' -1/L=1/F'计算出所述被摄物的深度信息;其中:L为物距,L'为像距,F为镜头焦距,所述的深度信息即为物距。 Based on the position of the subject according to claim 7, a direction adjusting means perspective distortion of the image, wherein, when the focal plane is a flat subject, and the imaging lens module when parallel, by: equation: 1 / L '-1 / L = 1 / F' calculated depth information of the subject; wherein: L is the object distance, L 'is the image distance, F is the lens focal length, the is the depth of the object distance information described later.
11.根据权利要求9所述的基于被摄物的位置、方向调整图像透视畸变的装置,其特征在于,所述的步骤C中,对于所述Pu=(Xl,yi,Zl),其空间坐标在经传感器为原点的坐标系中为Pw_ (X。,y。,Z。),有*: According to claim 9 based on the subject, directions image perspective distortion adjustment means, wherein, in said step C, for the Pu = (Xl, yi, Zl), which space the sensor coordinates by the coordinate system whose origin is Pw_ (X., y, Z..), with *:
Figure CN103824303AC00052
若旋转θ角,投影点Pw' = (V,yj,z。'),有: When the rotation angle θ, the projected point Pw '= (. V, yj, z'), are:
Figure CN103824303AC00053
重新计算Pw' = (X。',yQ',Z0')在左侧传感器上的理论像点ΡίΤ' = (χ/,y/,z/ ),有: Recalculating Pw '= (. X', yQ ', Z0') ΡίΤ theoretical image point on the left sensor '= (χ /, y /, z /), are:
Figure CN103824303AC00054
依次计算每一点,得到图像E ; 对图像E作插值,剪裁,得到所述的处理后的第一原始图像图G。 Successively calculating each point to obtain an image E; E for the interpolation image, clipping, the original image to obtain a first view of the process of G.
12.根据权利要求11所述的基于被摄物的位置、方向调整图像透视畸变的装置,其特征在于,所述的旋转角度Θ通过以下任意一种方式得到:加速度传感器、用户指定、图像建模。 According to claim based on the subject, directions 11 to adjust the image distortion perspective apparatus, wherein said rotation angle Θ is obtained by any of the following ways: an acceleration sensor, the user designates an image built mold.
CN201410096007.8A 2014-03-14 2014-03-14 Image perspective distortion adjusting method and device based on position and direction of photographed object CN103824303A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410096007.8A CN103824303A (en) 2014-03-14 2014-03-14 Image perspective distortion adjusting method and device based on position and direction of photographed object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410096007.8A CN103824303A (en) 2014-03-14 2014-03-14 Image perspective distortion adjusting method and device based on position and direction of photographed object

Publications (1)

Publication Number Publication Date
CN103824303A true CN103824303A (en) 2014-05-28

Family

ID=50759344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410096007.8A CN103824303A (en) 2014-03-14 2014-03-14 Image perspective distortion adjusting method and device based on position and direction of photographed object

Country Status (1)

Country Link
CN (1) CN103824303A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104867113A (en) * 2015-03-31 2015-08-26 酷派软件技术(深圳)有限公司 Method and system for perspective distortion correction of image
CN105227948A (en) * 2015-09-18 2016-01-06 广东欧珀移动通信有限公司 Method and device for finding distorted region in image
CN105335959A (en) * 2014-08-15 2016-02-17 格科微电子(上海)有限公司 Quick focusing method and device for imaging apparatus
CN105335958A (en) * 2014-08-15 2016-02-17 格科微电子(上海)有限公司 Processing method and device for light supplement of flash lamp
WO2016150291A1 (en) * 2015-03-24 2016-09-29 Beijing Zhigu Rui Tuo Tech Co., Ltd. Imaging control methods and apparatuses, and imaging devices
CN107222737A (en) * 2017-07-26 2017-09-29 维沃移动通信有限公司 Depth image data processing method and mobile terminal

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7161614B1 (en) * 1999-11-26 2007-01-09 Sanyo Electric Co., Ltd. Device and method for converting two-dimensional video to three-dimensional video
CN101813467A (en) * 2010-04-23 2010-08-25 哈尔滨工程大学 Blade running elevation measurement device and method based on binocular stereovision technology
CN102867304A (en) * 2012-09-04 2013-01-09 南京航空航天大学 Method for establishing relation between scene stereoscopic depth and vision difference in binocular stereoscopic vision system
CN103353663A (en) * 2013-06-28 2013-10-16 北京智谷睿拓技术服务有限公司 Imaging adjustment apparatus and method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7161614B1 (en) * 1999-11-26 2007-01-09 Sanyo Electric Co., Ltd. Device and method for converting two-dimensional video to three-dimensional video
CN101813467A (en) * 2010-04-23 2010-08-25 哈尔滨工程大学 Blade running elevation measurement device and method based on binocular stereovision technology
CN102867304A (en) * 2012-09-04 2013-01-09 南京航空航天大学 Method for establishing relation between scene stereoscopic depth and vision difference in binocular stereoscopic vision system
CN103353663A (en) * 2013-06-28 2013-10-16 北京智谷睿拓技术服务有限公司 Imaging adjustment apparatus and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
詹曙等: "融合SUSAN特征的医学图像Graph Cuts算法", 《电子测量与仪器学报》, vol. 27, no. 6, 30 June 2013 (2013-06-30), pages 5 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105335958B (en) * 2014-08-15 2018-12-28 格科微电子(上海)有限公司 The processing method and equipment of flash lighting
CN105335959A (en) * 2014-08-15 2016-02-17 格科微电子(上海)有限公司 Quick focusing method and device for imaging apparatus
CN105335958A (en) * 2014-08-15 2016-02-17 格科微电子(上海)有限公司 Processing method and device for light supplement of flash lamp
CN105335959B (en) * 2014-08-15 2019-03-22 格科微电子(上海)有限公司 Imaging device quick focusing method and its equipment
WO2016150291A1 (en) * 2015-03-24 2016-09-29 Beijing Zhigu Rui Tuo Tech Co., Ltd. Imaging control methods and apparatuses, and imaging devices
US10298835B2 (en) 2015-03-24 2019-05-21 Beijing Zhigu Rui Tuo Tech Co., Ltd. Image control methods and apparatuses, and imaging devices with control of deformation of image sensor
CN104867113A (en) * 2015-03-31 2015-08-26 酷派软件技术(深圳)有限公司 Method and system for perspective distortion correction of image
CN104867113B (en) * 2015-03-31 2017-11-17 酷派软件技术(深圳)有限公司 Perspective distortion correction method of the image system and
CN105227948A (en) * 2015-09-18 2016-01-06 广东欧珀移动通信有限公司 Method and device for finding distorted region in image
CN107222737A (en) * 2017-07-26 2017-09-29 维沃移动通信有限公司 Depth image data processing method and mobile terminal

Similar Documents

Publication Publication Date Title
Eigen et al. Depth map prediction from a single image using a multi-scale deep network
Hirschmuller Stereo processing by semiglobal matching and mutual information
CN101785025B (en) System and method for three-dimensional object reconstruction from two-dimensional images
US9361680B2 (en) Image processing apparatus, image processing method, and imaging apparatus
US20130222556A1 (en) Stereo picture generating device, and stereo picture generating method
EP2064675B1 (en) Method for determining a depth map from images, device for determining a depth map
CN103052968B (en) Object detection apparatus and method for detecting an object
US20100194851A1 (en) Panorama image stitching
KR101862889B1 (en) Autofocus for stereoscopic camera
WO2014024579A1 (en) Optical data processing device, optical data processing system, optical data processing method, and optical data processing-use program
US8452081B2 (en) Forming 3D models using multiple images
JP2011504262A (en) System and method for depth map extraction using region-based filtering
KR101333871B1 (en) Method and arrangement for multi-camera calibration
US8447099B2 (en) Forming 3D models using two images
CN101377814A (en) Face image processing apparatus, face image processing method, and computer program
US9898856B2 (en) Systems and methods for depth-assisted perspective distortion correction
CN101984463A (en) Method and device for synthesizing panoramic image
WO2012113732A1 (en) Determining model parameters based on transforming a model of an object
WO2010028559A1 (en) Image splicing method and device
EP2386998B1 (en) A Two-Stage Correlation Method for Correspondence Search
WO2014044126A1 (en) Coordinate acquisition device, system and method for real-time 3d reconstruction, and stereoscopic interactive device
CN101630406B (en) And a camera calibration method of camera calibration apparatus
US9888235B2 (en) Image processing method, particularly used in a vision-based localization of a device
US20110148868A1 (en) Apparatus and method for reconstructing three-dimensional face avatar through stereo vision and face detection
EP2859528A1 (en) A multi-frame image calibrator

Legal Events

Date Code Title Description
C06 Publication
C10 Entry into substantive examination
RJ01