WO2021082264A1 - Method and system for automatically correcting a projected image based on binocular vision - Google Patents

Method and system for automatically correcting a projected image based on binocular vision

Info

Publication number
WO2021082264A1
Authority
WO
WIPO (PCT)
Prior art keywords: image, projection, camera, relationship, projected
Application number
PCT/CN2019/129586
Other languages
English (en)
French (fr)
Inventor
张绍谦
毕禛妮
Original Assignee
歌尔股份有限公司
Application filed by 歌尔股份有限公司
Publication of WO2021082264A1
Priority to US17/395,649 (US11606542B2)


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor
    • H04N9/3185Geometric adjustment, e.g. keystone or convergence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/128Adjusting depth or disparity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/246Calibration of cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/327Calibration thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/363Image reproducers using image projection screens
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3191Testing thereof
    • H04N9/3194Testing thereof including sensor feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20228Disparity calculation for image-based rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals

Definitions

  • the invention belongs to the technical field of projectors, and in particular relates to a method and system for automatically correcting projected images based on binocular vision.
  • the most commonly used mode of the projector is vertical projection, that is, the beam emitted by the projector is projected vertically onto a plane to form an image.
  • the projector should be kept as close to perpendicular to the projection screen as possible to ensure the projection effect; if the two cannot be kept perpendicular, the projected image on the projection screen will be distorted.
  • a keystone correction technology is needed to correct the distorted projected image into a normal rectangular image through a keystone correction (or rectangle correction) algorithm, so as to obtain the desired projection effect.
  • existing automatic keystone correction in projectors generally uses an acceleration sensor to measure the offset of the projector and corrects the projected image by reverse compensation.
  • an acceleration sensor can measure the deflection angle in the vertical direction, but when the projector deflects only in the horizontal direction, the deflection angle relative to the normal vector of the projection plane cannot be measured, resulting in poor picture correction. Measuring the horizontal deflection angle would require several additional acceleration sensors, which complicates the structural design of the projector, takes up space, and increases cost. Therefore, a method is needed to effectively adjust the projection device to resolve distortion of its projected image.
  • the present invention provides a method and system for automatically correcting a projected image based on binocular vision: the projected image is three-dimensionally reconstructed on the principle of binocular parallax and inversely transformed, achieving efficient automatic correction of the projected image without being limited by the projection area and with high correction flexibility.
  • to solve the above technical problems, the present invention proposes the following technical solutions:
  • a method for automatically correcting a projected image based on binocular vision, characterized by comprising: acquiring a depth map of the projected image on a projection plane; calculating a first transformation relationship between the source image and the projected image; obtaining a distorted image according to the depth map; obtaining a corrected image by correcting the distorted image; calculating a second transformation relationship between the distorted image and the corrected image; obtaining a correction relationship between the source image and the corrected image according to the first and second transformation relationships; and correcting the projected image according to the correction relationship.
  • calculating the first transformation relationship between the source image and the projected image specifically comprises: calculating a transformation relationship A between the source image and the camera image according to the matching relationship between the source image and the camera image; calculating a transformation relationship B between the camera image and the projected image according to the depth values of the depth map and the matching relationship between the camera image and the projected image; and calculating the first transformation relationship from the transformation relationships A and B.
  • the camera images are taken by the same camera.
  • calculating the first transformation relationship between the source image and the projected image specifically comprises: calculating the first transformation relationship according to the depth values of the depth map and the matching relationship between the source image and the projected image.
  • in the method described above, acquiring the depth map of the projected image on the projection plane further comprises: performing camera calibration on the left camera and the right camera; performing stereo rectification on the left camera image captured by the left camera and the right camera image captured by the right camera; stereo-matching the left and right camera images; obtaining a disparity map of all corresponding points on the left and right camera images; and calculating the depth map from the disparity map.
  • in the method described above, obtaining the corrected image after correcting the distorted image specifically comprises: obtaining an inscribed rectangle of the distorted image.
  • acquiring the inscribed rectangle of the distorted image specifically comprises: identifying the corner coordinates of the four corners formed by the four outer contour edges of the distorted image; calculating the midpoint coordinates of each outer contour edge; connecting the midpoints of opposite outer contour edges to form an intersection point, which serves as the center point of the inscribed rectangle; calculating the shortest distance from the intersection point to each outer contour edge; and decomposing the shortest distance along the horizontal and vertical axes of the image coordinate system, taking twice the length of the horizontal-axis component as one side length of the inscribed rectangle and twice the length of the vertical-axis component as the other side length.
  • the above-mentioned binocular vision-based projection image automatic correction method further includes the step of adjusting the size of the solved inscribed rectangle.
  • the present invention also relates to a system for automatically correcting a projected image based on binocular vision, comprising a projector, two cameras, a main control unit, and the following parts communicatively connected with the main control unit: a projection image reconstruction module for obtaining the depth map of the projected image and obtaining a distorted image according to the depth map; a first calculation module for calculating the first transformation relationship between the source image and the projected image; an image acquisition module for obtaining the corrected image after the distorted image is corrected; a second calculation module for calculating the second transformation relationship between the distorted image and the corrected image; a third calculation module for calculating the correction relationship between the source image and the corrected image according to the first and second transformation relationships; and a correction module for correcting the projected image projected on the projection plane according to the correction relationship.
  • the two cameras are symmetrically arranged on both sides of the projection surface of the projector, and the fields of view of the two cameras are larger than the projection range of the projector.
  • the advantages and beneficial effects of the present invention are: by visually reconstructing the projected image, the depth map and the transformation relationship between the source image and the projected image are obtained; the distorted image of the projected image is obtained from the depth map and corrected to yield the corrected image; and with the transformation relationship between the source image and the projected image and the relationship between the distorted and corrected images, distortion correction of the projection is straightforward.
  • the correction method acquires, reconstructs and restores the projected image with a binocular camera pair, is not limited by the projection area, offers high correction flexibility, and is simple, fast and efficient.
  • FIG. 1 is a flowchart of an embodiment of a method for automatically correcting a projection image based on binocular vision proposed by the present invention
  • FIG. 2 is a flowchart of acquiring a depth image in an embodiment of the method for automatically correcting a projection image based on binocular vision proposed by the present invention
  • FIG. 3 is a flowchart of obtaining a corrected image in an embodiment of the method for automatically correcting a projection image based on binocular vision proposed by the present invention
  • FIG. 4 is a schematic diagram of the relationship between the source image, the projected image, and the camera image in the embodiment of the automatic correction method for projected images based on binocular vision proposed by the present invention
  • FIG. 5 is a schematic diagram of the relationship between the source image and the projected image in an embodiment of the method for automatically correcting a projected image based on binocular vision proposed by the present invention.
  • although Fig. 1 shows the flow relationship of the stages, it does not represent the order in which the automatic correction method proposed by the present invention must be realized; the stages need not be executed in the order shown in Fig. 1, which is given only to clearly illustrate the automatic correction process of the projected image.
  • two cameras are symmetrically arranged on both sides of the projector, specifically on both sides of its projection surface.
  • the cameras can detect depth; the fields of view of the two cameras overlap on the projection plane, and each field of view is larger than the projection range of the projector, so that both cameras can capture the projected image P projected on the projection plane.
  • FIG. 2 shows the flow chart of obtaining the depth map of the projection image P, including (1) camera calibration; (2) stereo correction; (3) stereo matching; (4) obtaining the depth map.
  • an off-line calibration method is used to calibrate the cameras to obtain the camera intrinsic parameter matrix Kc, the extrinsic rotation matrix R, the translation vector t, and the distortion coefficients.
  • camera calibration is generally implemented with the OpenCV calibration functions or the MATLAB calibration toolbox; this calibration method is mature, publicly known content in the field of digital image processing and will not be repeated here.
  • OpenCV is a cross-platform open source computer vision library. It also provides interfaces to languages such as Python, Ruby, and MATLAB, and implements many general algorithms in image processing and computer vision.
  • the specific realization process of stereo rectification is: a. turn on the two cameras and the projector; the projector projects a white-noise image onto the projection plane, and the left camera image and right camera image captured simultaneously by the two cameras are obtained; b. read the camera calibration parameter file produced by the camera calibration above, obtaining the intrinsic matrix, extrinsic rotation matrix, translation vector and distortion coefficients of the left and right cameras respectively; c. use the stereoRectify function in OpenCV to calculate the rotation matrices of the left and right camera images; d. use the initUndistortRectifyMap function in OpenCV, with the intrinsic matrix and extrinsic rotation matrix, to calculate the X-direction and Y-direction mapping matrices for the left and right camera images; e. use the remap function in OpenCV with the resulting mapping matrices to obtain the rectified left and right camera images; f. crop the valid regions of the rectified left and right camera images to obtain the final result, completing the stereo rectification process.
  • because the camera's internal circuit components suffer from their own defects, an unstable working environment, and complex responses and interference within the electronic circuitry, noise arises during image acquisition; the acquired images can therefore be preprocessed with methods such as grayscale conversion, geometric transformation and Gaussian filtering to denoise and enhance them.
  • the stereo matching process finds the corresponding pixels between the stereo-rectified left and right camera images, i.e. pixels in the left camera image and the pixels in the right camera image that correspond to them; specifically, the images can first be preprocessed (e.g. grayscale conversion or Gaussian filtering), then the matching cost between the left and right camera images is computed and aggregated (see also the application numbered "201711321133.9", titled "A dust detection device and method based on stereo vision"), and finally disparity computation and disparity optimization are performed. This stereo matching process is mature, publicly known content in the field of digital image processing and will not be repeated here.
  • from the matched point pair m1 and m2 obtained by stereo-matching the left and right camera images, and the focal length f obtained by calibrating the left and right cameras, the depth value z of each pixel in the image is calculated as z = f·b/D, where b is the distance between the optical centers of the two cameras and D is the disparity between points m1 and m2. The depth map is obtained from the depth value z of each pixel.
  • any point o(u,v) in the source image is projected to a point p = (s,t,z) at a three-dimensional coordinate position on the projection plane; point p is then captured by one of the left and right cameras (for ease of explanation, the left camera is used throughout S2), and the coordinate fed back onto the camera image C is c(x,y).
  • in the point p(s,t,z) on the projection plane, (s,t) is obtained by transformation in the camera coordinate system and the depth z is obtained from the depth map above; the point o(u,v) on the source image and the point c(x,y) on the camera image are both in the image coordinate system.
  • in a first embodiment, the first transformation relationship Q between the source image O and the projected image P is calculated from the relationships among the source image O, the projected image P and the camera image C: if the spatial transformation from the source image O to the camera image C is the matrix T and the spatial transformation from the camera image C to the projected image P is the matrix R, then Q = TR, and solving Q dynamically removes the restriction to a fixed projection area.
  • the matrix T can be obtained by the following process.
  • the projector can project a known image (such as a standard checkerboard image), and the Harris corner detection function of OpenCV is used to obtain the coordinates of the corner points in the source image O and the camera image C.
  • combined with a feature point matching method (for example, by reading the positions of specified corner points, or by using feature matching methods such as SIFT, SURF, FAST or ORB in OpenCV), the coordinates of, for example, four points in the source image O are placed in the source array src, and the coordinates of the four matching points in the camera image C are stored in the target array dst; the homography matrix T from the source image O to the camera image C is then solved by calling the findHomography function of OpenCV. T is, for example, a 3×3 matrix.
  • in an alternative embodiment, the matrix T need not be obtained; the first transformation relationship Q can be found directly from the relationship between the source image O and the projected image P.
  • from the reconstructed depth map, the points on the projected image P in the camera coordinate system can be obtained; the first transformation relationship Q between the source image O and the projected image P is then solved from the point x_p(s,t,z) on the projected image P and the point x_o(u,v) on the source image O.
  • from the transformation matrix R from the camera image C to the projected image P, it is known that R(s,t,1)^T = (x,y,1)^T, where the point c(x,y) is obtained in advance by corner detection (for example Harris corner detection, FAST corner detection, or SIFT feature point extraction); therefore (s,t) in the corresponding point p(s,t,z) can be obtained, and the z coordinate of point p is obtained from the depth map.
  • in addition, through corner detection and matching between the source image O and the camera image C, the point x_o(u,v) in the source image O corresponding to the point p: x_p(s,t,z) can be obtained.
  • the transformation matrix Q between the source image O and the projected image P is a 3×4 matrix; since the transformation is homogeneous, Q has 11 unknown coefficients, so at least 6 corresponding points are needed to solve for Q.
  • this Q calculation method does not need to calculate the matrix T, which simplifies the algorithm and avoids the introduction of errors caused by calculating the matrix T between the source image O and the camera image C.
  • the above-mentioned camera images C are taken by the same camera: all may be taken by the left camera, or all by the right camera.
  • the inscribed rectangle of the distorted image can be selected as the corrected image after correction. Specifically, see Figure 3.
  • image edge detection is also mature, publicly known content in the field of digital image processing.
  • the contour detection function findContours in OpenCV can be used to identify the coordinates of the outer contour points of the distorted image.
  • the corner coordinates of the four corners formed by the outer contour edges are recorded as x1, x2, x3 and x4, ordered in the clockwise direction.
  • O1 as the midpoint between x1 and x2
  • O2 as the midpoint between x2 and x3
  • O3 as the midpoint between x3 and x4
  • O4 as the midpoint between x4 and x1
  • O1 and O3, and O2 and O4, are connected to form the intersection point O, which serves as the center point of the inscribed rectangle; the coordinates of point O are passed into the pointPolygonTest function of OpenCV, which returns the shortest distance d from point O to the outer contour edges;
  • the shortest distance d is decomposed in the x direction (horizontal axis) and the y direction (vertical axis) of the image coordinate system xy in which the camera image C lies; twice the length of the component decomposed in the x direction is taken as one side length of the inscribed rectangle, and twice the length of the component decomposed in the y direction is taken as the other side length.
  • the inscribed rectangle is then the corrected version of the distorted image.
  • to make the inscribed rectangle fit a standard (16:9 or 4:3) output picture size, the image scaling function resize in OpenCV is used to adjust its size.
  • since both the distorted image and the corrected image are in the image coordinate system, the feature points of the distorted image can be chosen as the corner points of the four corners formed by the outer contour edges, and the feature points of the corrected image correspond to the four vertices of the inscribed rectangle; from the four pairs of corresponding point coordinates, the transformation relationship matrix S from the distorted image to the corrected image can be obtained.
  • the automatic projection image correction method of the present invention requires little computation, improving correction efficiency; it acquires, reconstructs and restores the projected image with a binocular camera pair and dynamically calculates the first transformation relationship Q and the second transformation relationship matrix S, so it is not limited by the projection area and can realize automatic correction on any projection plane, improving the flexibility of the projector.
  • the present invention also provides an automatic correction system for projection images based on binocular vision, which is used to implement the automatic correction method for projection images as described above.
  • the system includes a projector (not shown), two cameras (not shown) and a main control unit (not shown); the two cameras are symmetrically located on both sides of the projection surface of the projector, their fields of view overlap, and each field of view is larger than the projection range of the projector; the main control unit controls the operation of the projector and the two cameras, image recognition and processing, projection video signal processing, and power management for the projector and cameras, and may be a core processor such as an MCU (Microcontroller Unit) or AP (Application Processor).
  • the automatic correction system of this embodiment also includes: a projection image reconstruction module (not shown) for obtaining a depth map of the projected image P and obtaining a distorted image according to the depth map; a first calculation module (not shown) for calculating the first transformation relationship Q between the source image O and the projected image P; an image acquisition module (not shown) for acquiring the corrected image after the distorted image is corrected; a second calculation module (not shown) for calculating the second transformation relationship S between the distorted image and the corrected image; a third calculation module (not shown) for calculating the correction relationship W between the source image O and the corrected image according to the first transformation relationship Q and the second transformation relationship S; and a correction module (not shown) that corrects the projected image P according to the correction relationship W.
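As a structural illustration only, the module wiring described above might be organized as below; the class and callable names are hypothetical, and every module body is stubbed out with placeholders rather than the real computations:

```python
# A minimal, hypothetical sketch of how the main control unit might chain the
# modules of the correction system; module internals are stubbed out.
class AutoCorrectionSystem:
    def __init__(self, reconstruct, calc_q, acquire_corrected, calc_s):
        self.reconstruct = reconstruct              # projection image reconstruction module
        self.calc_q = calc_q                        # first calculation module
        self.acquire_corrected = acquire_corrected  # image acquisition module
        self.calc_s = calc_s                        # second calculation module

    def correct(self, source, projected):
        depth, distorted = self.reconstruct(projected)   # S1, S3
        Q = self.calc_q(source, projected, depth)        # S2
        corrected = self.acquire_corrected(distorted)    # S4
        S = self.calc_s(distorted, corrected)            # S5
        W = ("W", Q, S)                                  # S6: combine Q and S
        return W                                         # S7 would apply W

system = AutoCorrectionSystem(
    reconstruct=lambda p: ("depth", "distorted"),
    calc_q=lambda o, p, d: "Q",
    acquire_corrected=lambda d: "corrected",
    calc_s=lambda d, c: "S",
)
print(system.correct("source", "projected"))  # -> ('W', 'Q', 'S')
```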

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A method and system for automatically correcting a projected image (P) based on binocular vision. The method includes: (S1) acquiring a depth map of the projected image (P) on the projection plane; (S2) calculating a first transformation relationship (Q) between the source image (O) and the projected image (P); (S3) obtaining a distorted image according to the depth map; (S4) obtaining a corrected image after correcting the distorted image; (S5) calculating a second transformation relationship (S) between the distorted image and the corrected image; (S6) obtaining a correction relationship (W) between the source image (O) and the corrected image according to the first transformation relationship (Q) and the second transformation relationship (S); (S7) correcting the projected image (P) according to the correction relationship (W). The method and system achieve efficient automatic correction of the projected image (P), are not limited by the projection area, and offer high correction flexibility.

Description

Method and system for automatically correcting a projected image based on binocular vision
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on October 30, 2019, with application number 201911045424.9 and invention title "Method and system for automatically correcting a projected image based on binocular vision", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention belongs to the technical field of projectors, and in particular relates to a method and system for automatically correcting a projected image based on binocular vision.
Background
The most commonly used mode of a projector is vertical projection, in which the beam emitted by the projector is projected perpendicularly onto a plane to form an image. In vertical projection, the projector should be kept as close to perpendicular to the projection screen as possible to ensure the projection effect; if the two cannot be kept perpendicular, the projected image on the projection screen will be distorted. In this process, a keystone correction technique is needed: a keystone correction (also called rectangle correction) algorithm corrects the distorted projected image into a normal rectangular picture, so as to obtain the desired projection effect.
Existing automatic keystone correction in projectors generally uses an acceleration sensor to measure the offset of the projector and corrects the projected image by reverse compensation. An acceleration sensor can measure the deflection angle in the vertical direction, but when the projector deflects only in the horizontal direction, the deflection angle relative to the normal vector of the projection plane cannot be measured, resulting in poor picture correction. Measuring the horizontal deflection angle would require several additional acceleration sensors, which complicates the structural design of the projector, takes up space, and increases cost. Therefore, a method is needed to effectively adjust the projection device to resolve distortion of its projected image.
Summary of the Invention
The present invention provides a method and system for automatically correcting a projected image based on binocular vision: the projected image is three-dimensionally reconstructed on the principle of binocular parallax and inversely transformed, achieving efficient automatic correction of the projected image without being limited by the projection area and with high correction flexibility.
To solve the above technical problems, the present invention proposes the following technical solutions:
A method for automatically correcting a projected image based on binocular vision, characterized by comprising: acquiring a depth map of the projected image on a projection plane; calculating a first transformation relationship between the source image and the projected image; obtaining a distorted image according to the depth map; obtaining a corrected image by correcting the distorted image; calculating a second transformation relationship between the distorted image and the corrected image; obtaining a correction relationship between the source image and the corrected image according to the first and second transformation relationships; and correcting the projected image according to the correction relationship.
In the method described above, calculating the first transformation relationship between the source image and the projected image specifically comprises: calculating a transformation relationship A between the source image and the camera image according to the matching relationship between the source image and the camera image; calculating a transformation relationship B between the camera image and the projected image according to the depth values of the depth map and the matching relationship between the camera image and the projected image; and calculating the first transformation relationship from the transformation relationships A and B.
In the method described above, the camera images are captured by the same camera.
In the method described above, calculating the first transformation relationship between the source image and the projected image specifically comprises: calculating the first transformation relationship according to the depth values of the depth map and the matching relationship between the source image and the projected image.
In the method described above, acquiring the depth map of the projected image on the projection plane further comprises: performing camera calibration on the left camera and the right camera; performing stereo rectification on the left camera image captured by the left camera and the right camera image captured by the right camera; stereo-matching the left and right camera images; obtaining a disparity map of all corresponding points on the left and right camera images; and calculating the depth map from the disparity map.
In the method described above, obtaining the corrected image after correcting the distorted image specifically comprises: obtaining an inscribed rectangle of the distorted image.
In the method described above, acquiring the inscribed rectangle of the distorted image specifically comprises: identifying the corner coordinates of the four corners formed by the four outer contour edges of the distorted image; calculating the midpoint coordinates of each outer contour edge; connecting the midpoints of opposite outer contour edges to form an intersection point, which serves as the center point of the inscribed rectangle; calculating the shortest distance from the intersection point to each outer contour edge; and decomposing the shortest distance along the horizontal and vertical axes of the image coordinate system, taking twice the length of the horizontal-axis component as one side length of the inscribed rectangle and twice the length of the vertical-axis component as the other side length.
The method described above further comprises the step of adjusting the size of the solved inscribed rectangle.
The present invention also relates to a system for automatically correcting a projected image based on binocular vision, comprising a projector, two cameras, a main control unit, and the following parts communicatively connected with the main control unit: a projection image reconstruction module for obtaining the depth map of the projected image and obtaining a distorted image according to the depth map; a first calculation module for calculating the first transformation relationship between the source image and the projected image; an image acquisition module for obtaining the corrected image after the distorted image is corrected; a second calculation module for calculating the second transformation relationship between the distorted image and the corrected image; a third calculation module for calculating the correction relationship between the source image and the corrected image according to the first and second transformation relationships; and a correction module for correcting the projected image projected on the projection plane according to the correction relationship.
In the system described above, the two cameras are symmetrically arranged on both sides of the projection surface of the projector, and the fields of view of the two cameras are larger than the projection range of the projector.
Compared with the prior art, the advantages and beneficial effects of the present invention are: by visually reconstructing the projected image, the depth map and the transformation relationship between the source image and the projected image are obtained; the distorted image of the projected image is obtained from the depth map and corrected to yield the corrected image; and with the transformation relationship between the source image and the projected image and the relationship between the distorted and corrected images, distortion correction of the projection is straightforward. The correction method acquires, reconstructs and restores the projected image with a binocular camera pair, is not limited by the projection area, offers high correction flexibility, and is simple, fast and efficient.
Brief Description of the Drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a flowchart of an embodiment of the method for automatically correcting a projected image based on binocular vision proposed by the present invention;
Fig. 2 is a flowchart of acquiring the depth map in an embodiment of the method for automatically correcting a projected image based on binocular vision proposed by the present invention;
Fig. 3 is a flowchart of obtaining the corrected image in an embodiment of the method for automatically correcting a projected image based on binocular vision proposed by the present invention;
Fig. 4 is a schematic diagram of the relationship among the source image, the projected image and the camera image in an embodiment of the method for automatically correcting a projected image based on binocular vision proposed by the present invention;
Fig. 5 is a schematic diagram of the relationship between the source image and the projected image in an embodiment of the method for automatically correcting a projected image based on binocular vision proposed by the present invention.
Detailed Description of the Embodiments
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are some, not all, of the embodiments of the present invention; all other embodiments obtained by those of ordinary skill in the art without creative effort based on the embodiments of the present invention fall within the protection scope of the present invention.
Although Fig. 1 shows the flow relationship of the stages, it does not represent the order in which the automatic correction method proposed by the present invention must be realized; the stages need not be executed in the order shown in Fig. 1, which is given only to clearly illustrate the automatic correction process of the projected image.
S1: Acquire the depth map of the projected image P on the projection plane.
In this embodiment, two cameras are symmetrically arranged on both sides of the projector, specifically on both sides of its projection surface. The cameras can detect depth; the fields of view of the two cameras overlap on the projection plane and are larger than the projection range of the projector, so that both cameras can capture the projected image P projected onto the projection plane.
As shown in Fig. 2, the flow of obtaining the depth map of the projected image P includes (1) camera calibration; (2) stereo rectification; (3) stereo matching; (4) obtaining the depth map.
Specifically, in (1) camera calibration, an off-line calibration method is used to calibrate the cameras to obtain the camera intrinsic parameter matrix Kc, the extrinsic rotation matrix R, the translation vector t and the distortion coefficients. Calibration is generally implemented with the OpenCV calibration functions or the MATLAB calibration toolbox; this method is mature, publicly known content in the field of digital image processing and will not be repeated here. OpenCV is a cross-platform open-source computer vision library that also provides interfaces to languages such as Python, Ruby and MATLAB and implements many general algorithms in image processing and computer vision.
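The quantities that calibration recovers (Kc, R, t) define a pinhole projection from 3-D coordinates to pixels. The following minimal sketch, with invented parameter values rather than real calibration output, shows how those parameters act on a 3-D point:

```python
import numpy as np

# Illustrative pinhole-camera parameters of the kind calibration recovers
# (the numbers are made up for this sketch, not taken from the patent).
Kc = np.array([[700.0,   0.0, 320.0],   # fx, skew, cx
               [  0.0, 700.0, 240.0],   # fy, cy
               [  0.0,   0.0,   1.0]])
R = np.eye(3)                           # extrinsic rotation (identity here)
t = np.array([0.0, 0.0, 0.0])           # extrinsic translation

def project(point_3d):
    """Project a 3-D point in world coordinates to pixel coordinates."""
    cam = R @ point_3d + t              # world -> camera coordinates
    uvw = Kc @ cam                      # camera -> homogeneous pixel coords
    return uvw[:2] / uvw[2]             # perspective divide

# A point 2 m in front of the camera, 0.1 m to the right of the optical axis:
px = project(np.array([0.1, 0.0, 2.0]))
print(px)  # -> [355. 240.]: 0.1 * 700 / 2 = 35 px right of the principal point
```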
(2) The specific realization of stereo rectification is: a. turn on the two cameras and the projector; the projector projects a white-noise image onto the projection plane, and the left and right camera images captured simultaneously by the two cameras are obtained; b. read the camera calibration parameter file produced by the calibration above, obtaining the intrinsic matrix, extrinsic rotation matrix, translation vector and distortion coefficients of the left and right cameras respectively; c. use the stereoRectify function in OpenCV to calculate the rotation matrices of the left and right camera images; d. use the initUndistortRectifyMap function in OpenCV with the intrinsic and extrinsic rotation matrices to compute the X-direction and Y-direction mapping matrices for the left and right camera images; e. use the remap function in OpenCV with the resulting mapping matrices to obtain the rectified left and right camera images; f. crop the valid regions of the rectified left and right camera images to obtain the final result, completing the stereo rectification process.
Because the camera's internal circuit components suffer from their own defects, an unstable working environment, and complex responses and interference inside the electronic circuitry, noise arises during image acquisition; the acquired images can therefore be preprocessed with methods such as grayscale conversion, geometric transformation and Gaussian filtering to denoise and enhance them.
(3) The stereo matching process is carried out after finding the corresponding pixels between the stereo-rectified left and right camera images, i.e. certain pixels in the left camera image and the pixels in the right camera image corresponding to them. Specifically, the images can first be preprocessed (e.g. grayscale conversion or Gaussian filtering); the matching cost between the left and right camera images is then computed and aggregated (see also the invention patent with application number "201711321133.9", titled "A dust detection device and method based on stereo vision"); finally disparity computation and disparity optimization are performed. This stereo matching process is mature, publicly known content in the field of digital image processing and will not be repeated here.
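As a toy illustration of the matching-cost step only, the following pure-NumPy sketch evaluates a sum-of-absolute-differences (SAD) cost over candidate disparities for one pixel of a synthetic rectified pair (a real pipeline adds cost aggregation and disparity optimization, as the text notes):

```python
import numpy as np

rng = np.random.default_rng(0)
true_disp = 4                              # the synthetic pair differs by 4 px
left = rng.random((40, 60))
right = np.roll(left, -true_disp, axis=1)  # right view: same rows, shifted

def sad_disparity(left, right, y, x, max_d=8, win=3):
    """Brute-force SAD matching cost for one rectified pixel,
    with winner-takes-all disparity selection."""
    patch = left[y - win:y + win + 1, x - win:x + win + 1]
    costs = []
    for d in range(max_d + 1):             # candidate disparities
        cand = right[y - win:y + win + 1, x - d - win:x - d + win + 1]
        costs.append(np.abs(patch - cand).sum())
    return int(np.argmin(costs))

d = sad_disparity(left, right, y=20, x=30)
print(d)  # -> 4
```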
(4) From the matched point pair m1 and m2 obtained from the result of stereo-matching the left and right camera images, and the focal length f obtained by calibrating the left and right cameras, the depth value z of each pixel in the image is calculated as

z = f·b/D,

where b is the distance between the optical centers of the two cameras and D is the disparity between points m1 and m2. The depth map is obtained from the depth value z of each pixel.
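A minimal numeric sketch of the formula; the focal length, baseline and disparity values below are invented for illustration:

```python
import numpy as np

# Depth from rectified stereo: z = f * b / D.
f = 700.0   # focal length in pixels, from calibration
b = 0.06    # baseline b: distance between the two optical centres, in metres
D = 21.0    # disparity D between matched points m1 and m2, in pixels

z = f * b / D
print(z)  # -> 2.0 (metres)

# Applied per pixel, the same formula turns a disparity map into a depth map:
disparity_map = np.array([[21.0, 42.0],
                          [14.0, 28.0]])
depth_map = f * b / disparity_map
print(depth_map)  # -> [[2.  1. ] [3.  1.5]]
```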
S2: Calculate the first transformation relationship Q between the source image O and the projected image P.
Any point o(u,v) in the source image is projected to a point p = (s,t,z) at a three-dimensional spatial coordinate on the projection plane; point p is then captured by one of the left and right cameras (for ease of explanation, the left camera is used throughout S2), and the coordinate fed back onto the camera image C is c(x,y). In the point p(s,t,z) on the projection plane, (s,t) is obtained by transformation in the camera coordinate system and the depth z is obtained from the depth map above; the point o(u,v) on the source image and the point c(x,y) on the camera image are both in the image coordinate system.
In a first embodiment, the first transformation relationship Q between the source image O and the projected image P is calculated from the relationships among the source image O, the projected image P and the camera image C.
As shown in Fig. 4, suppose the spatial transformation from the source image O to the camera image C is the matrix T and the spatial transformation from the camera image C to the projected image P is the matrix R; the first transformation relationship between the source image O and the projected image P is then the matrix Q = TR. By solving the matrix Q dynamically, the restriction to a fixed projection area is removed, so correction can be performed when projecting onto any area.
Specifically, the matrix T can be obtained as follows. The projector can project a known image (such as a standard checkerboard image), and the Harris corner detection function of OpenCV is used to obtain the coordinates of the corner points in the source image O and the camera image C. Combined with a feature point matching method (for example, by reading the positions of specified corner points, or by using feature matching methods such as SIFT, SURF, FAST or ORB in OpenCV), the coordinates of, for example, four points in the source image O are placed in the source array src, and the coordinates of the four matching points in the camera image C are stored in the target array dst; the homography matrix T from the source image O to the camera image C is then solved by calling the findHomography function of OpenCV. T is, for example, a 3×3 matrix.
The matrix R can be obtained as follows. Since the point p(s, t, z) of the projected image P is a three-dimensional representation in the camera coordinate system, while the point c(x, y) of the camera image C is an imaged point in the image pixel coordinate system, the transformation between the camera image C and the projected image P is essentially the transformation between the image pixel coordinate system and the camera coordinate system, i.e., the camera intrinsic matrix Kc, so R = Kc, where Kc has already been computed in the camera calibration part.
In an alternative embodiment, the matrix T need not be computed: the first transformation relationship Q can be obtained directly from the relationship between the source image O and the projected image P.
From the reconstructed depth map, the points of the projected image P in the camera coordinate system can be obtained, and the first transformation relationship Q between the source image O and the projected image P is solved from the points x_p(s, t, z) on the projected image P and the corresponding points x_o(u, v) on the source image O.
From the transformation matrix R between the camera image C and the projected image P we have R(s, t, 1)^T = (x, y, 1)^T, where the point c(x, y) is obtained beforehand by corner detection (e.g., Harris corner detection, FAST corner detection or SIFT feature extraction); hence the (s, t) of the corresponding point p(s, t, z) can be solved, and the z coordinate of point p is taken from the depth map.
In addition, referring to FIG. 4, corner detection and matching between the source image O and the camera image C yield, for each point p: x_p(s, t, z), the corresponding point x_o(u, v) in the source image O; the feature-point matches therefore allow the following equation to be set up:
λ·x̃_o = Q·x̃_p

where x̃_p = (s, t, z, 1)^T and x̃_o = (u, v, 1)^T are the homogeneous-coordinate forms of x_p and x_o, and λ is a scale factor (written λ here to avoid confusion with the coordinate s). The transformation matrix Q between the source image O and the projected image P is therefore a 3×4 matrix; since the transformation is homogeneous, Q has 11 unknown coefficients, so at least 6 corresponding point pairs are needed to solve for Q.
Compared with the computation of Q in the first embodiment, this method does not require computing the matrix T, which simplifies the algorithm and avoids the error introduced by estimating the matrix T between the source image O and the camera image C.
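The solve for the 3×4 matrix Q from n ≥ 6 correspondences can be sketched as a standard DLT-style least-squares fit; the 3D–2D correspondences below are synthetic, generated from a known matrix Q_true (an arbitrary stand-in, not a value from the patent), so the recovered Q can be checked against it up to the homogeneous scale:

```python
import numpy as np

def solve_Q(points_p, points_o):
    """Solve the 3x4 homogeneous matrix Q in  lambda*(u, v, 1)^T = Q (s, t, z, 1)^T
    from n >= 6 correspondences via SVD (direct linear transform).
    points_p: (n, 3) 3D points on the projected image P;
    points_o: (n, 2) matching source-image points."""
    A = []
    for (s, t, z), (u, v) in zip(points_p, points_o):
        X = np.array([s, t, z, 1.0])
        A.append([*X, 0, 0, 0, 0, *(-u * X)])   # equation for u
        A.append([0, 0, 0, 0, *X, *(-v * X)])   # equation for v
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)   # null vector -> Q, defined up to scale

# Synthetic check: project 8 non-coplanar points with a known Q_true,
# then recover it from the correspondences.
Q_true = np.array([[800.0, 0, 320, 10], [0, 800.0, 240, 5], [0, 0, 1, 2]])
pts_p = np.array([[0, 0, 2], [1, 0, 2], [0, 1, 2], [0, 0, 3],
                  [1, 1, 3], [1, 0, 4], [2, 1, 5], [1, 2, 6]], float)
ph = np.hstack([pts_p, np.ones((8, 1))])
proj = ph @ Q_true.T
pts_o = proj[:, :2] / proj[:, 2:3]
Q_est = solve_Q(pts_p, pts_o)
Q_est = Q_est * (Q_true[0, 0] / Q_est[0, 0])   # fix the arbitrary scale/sign
```

Each correspondence contributes two linear equations, so 6 points cover the 11 unknowns and additional points simply overdetermine the system.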
It should be noted that the camera images C above are captured by the same camera: they may all be captured by the left camera, or all captured by the right camera.
S3: Obtain the distorted image according to the depth map.
The three-dimensional point coordinates corresponding to the depth map are points in the camera coordinate system, whereas the distorted image consists of points in the image coordinate system; therefore, referring to FIG. 4, the intrinsic matrix Kc from camera calibration is used to transform the depth map into the image coordinate system, i.e., (x, y, 1)^T = Kc·(s, t, 1)^T.
S4: Obtain the corrected image resulting from correction of the distorted image.
In this embodiment, an inscribed rectangle of the distorted image may be taken as the corrected image. The details are shown in FIG. 3.
S41: Identify the corner coordinates of the four corners formed by the four outer contour edges of the distorted image.
Image edge detection is likewise mature and well documented in the field of digital image processing; for example, the OpenCV contour detection function findContours can be used to identify the coordinates of the outer contour points of the distorted image, preferably the corner coordinates of the four corners formed by the outer contour edges, denoted x1, x2, x3 and x4, where the points x1, x2, x3 and x4 are ordered clockwise.
S42: Compute the midpoint coordinates of the midpoint of each outer contour edge.
Compute O1 as the midpoint of x1 and x2, O2 as the midpoint of x2 and x3, O3 as the midpoint of x3 and x4, and O4 as the midpoint of x4 and x1.
S43: Connect the midpoints of opposite outer contour edges to form an intersection point serving as the centre point of the inscribed rectangle.
Connect O1 with O3 and O2 with O4 to form the intersection point O, which serves as the centre point of the inscribed rectangle.
S44: Compute the shortest distance d from the intersection point to each outer contour edge.
Specifically, the coordinates of point O are passed into the OpenCV pointPolygonTest function, which returns the shortest distance d from point O to the outer contour edges.
S45: Decompose the shortest distance d along the horizontal axis and the vertical axis of the image coordinate system.
The shortest distance d is decomposed in the image coordinate system xy of the camera image C into the x direction (the horizontal axis) and the y direction (the vertical axis); twice the length of the component in the x direction is taken as one side length of the inscribed rectangle, and twice the length of the component in the y direction as the other side length.
This inscribed rectangle is then the corrected version of the distorted image.
In addition, to make the inscribed rectangle fit a standard (16:9 or 4:3) output picture size, the OpenCV image scaling function resize is used to adjust the size of the inscribed rectangle.
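Steps S42–S45 can be sketched in plain Python; the quadrilateral corners below are synthetic (a diamond-shaped outline), and the shortest point-to-edge distance is computed directly rather than via OpenCV's pointPolygonTest:

```python
import math

def nearest_on_segment(p, a, b):
    """Closest point to p on the segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    t = ((px - ax) * (bx - ax) + (py - ay) * (by - ay)) / \
        ((bx - ax) ** 2 + (by - ay) ** 2)
    t = max(0.0, min(1.0, t))
    return (ax + t * (bx - ax), ay + t * (by - ay))

def inscribed_rect(corners):
    """corners: the four contour corners x1..x4 of the distorted image,
    in clockwise order.  Returns (centre, width, height) of the inscribed
    rectangle, following steps S42-S45."""
    # S42: midpoints O1..O4 of the four outer contour edges.
    mids = [((corners[i][0] + corners[(i + 1) % 4][0]) / 2.0,
             (corners[i][1] + corners[(i + 1) % 4][1]) / 2.0)
            for i in range(4)]
    # S43: intersection O of lines O1-O3 and O2-O4 (2x2 linear solve).
    (x1, y1), (x3, y3) = mids[0], mids[2]
    (x2, y2), (x4, y4) = mids[1], mids[3]
    dx1, dy1 = x3 - x1, y3 - y1
    dx2, dy2 = x4 - x2, y4 - y2
    det = -dx1 * dy2 + dx2 * dy1
    a = (-(x2 - x1) * dy2 + dx2 * (y2 - y1)) / det
    centre = (x1 + a * dx1, y1 + a * dy1)
    # S44: nearest contour point to O (what pointPolygonTest measures).
    nearest = min(
        (nearest_on_segment(centre, corners[i], corners[(i + 1) % 4])
         for i in range(4)),
        key=lambda q: math.hypot(q[0] - centre[0], q[1] - centre[1]))
    # S45: decompose the shortest-distance vector along x and y; twice each
    # component gives the rectangle's side lengths.
    width = 2.0 * abs(nearest[0] - centre[0])
    height = 2.0 * abs(nearest[1] - centre[1])
    return centre, width, height

# Example: a diamond-shaped (rotated square) distorted outline.
centre, w, h = inscribed_rect([(5.0, 0.0), (10.0, 5.0), (5.0, 10.0), (0.0, 5.0)])
```

For this diamond the centre lands at (5, 5) and the inscribed rectangle is the 5×5 axis-aligned square, matching the geometric construction in the text.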
S5: Compute the second transformation relationship S between the distorted image and the corrected image.
Since the distorted image and the corrected image are both in the image coordinate system, the corner points of the four corners formed by the outer contour edges can be chosen as the feature points of the distorted image, with the four vertices of the inscribed rectangle chosen as the corresponding feature points of the corrected image; from the transformation relationship and these four pairs of point coordinates, the transformation matrix S from the distorted image to the corrected image can be solved.
S6: Referring to FIG. 5, from the first transformation relationship matrix Q from the source image O to the projected image P obtained above, and the second transformation relationship matrix S from the distorted image to the corrected image on the projection plane, the correction relationship W = QS from the source image O to the corrected image can be obtained.
S7: By solving the transformation matrix W, the projected image P is corrected, and the corrected image is finally output.
The projection image automatic correction method of the present invention involves little computation and improves correction efficiency. It acquires, reconstructs and recovers the projected image with a binocular camera and computes the first transformation relationship Q and the second transformation relationship matrix S dynamically, so it is not restricted by the projection region: automatic correction can be achieved on any projection plane, improving the flexibility of the projector.
The present invention further provides a projection image automatic correction system based on binocular vision for implementing the projection image automatic correction method described above. Specifically, the system comprises a projector (not shown), two cameras (not shown) and a main control unit (not shown). The two cameras are located symmetrically on the two sides of the projector's light-exit surface, their fields of view overlap, and their field-of-view range is larger than the projector's projection range. The main control unit controls the operation of the whole control system of the projector and the two cameras, image recognition and processing, processing of the projected video signal, and power management of the projector and cameras; it may be a core processor such as an MCU (Microcontroller Unit) or an AP (Application Processor).
The automatic correction system of this embodiment further comprises: a projected-image reconstruction module (not shown) for obtaining the depth map of the projected image P and obtaining the distorted image according to the depth map; a first computation module (not shown) for computing the first transformation relationship Q between the source image O and the projected image P; an image acquisition module (not shown) for obtaining the corrected image resulting from correction of the distorted image; a second computation module (not shown) for computing the second transformation relationship S between the distorted image and the corrected image; a third computation module (not shown) for computing the correction relationship W between the source image O and the corrected image according to the first transformation relationship Q and the second transformation relationship S; and a correction module (not shown) for correcting the projected image P according to the correction relationship W.
Finally, it should be noted that the above embodiments merely illustrate the technical solutions of the present invention and do not limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features replaced by equivalents, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A projection image automatic correction method based on binocular vision, characterized by comprising:
    obtaining a depth map of a projected image on a projection plane;
    computing a first transformation relationship between a source image and the projected image;
    obtaining a distorted image according to the depth map;
    obtaining a corrected image resulting from correction of the distorted image;
    computing a second transformation relationship between the distorted image and the corrected image;
    obtaining a correction relationship between the source image and the corrected image according to the first transformation relationship and the second transformation relationship;
    correcting the projected image according to the correction relationship.
  2. The projection image automatic correction method based on binocular vision according to claim 1, characterized in that computing the first transformation relationship between the source image and the projected image specifically comprises:
    computing a transformation relationship A between the source image and a camera image according to a matching relationship between the source image and the camera image;
    computing a transformation relationship B between the camera image and the projected image according to the depth values of the depth map and a matching relationship between the camera image and the projected image;
    computing the first transformation relationship according to the transformation relationship A and the transformation relationship B.
  3. The projection image automatic correction method based on binocular vision according to claim 2, characterized in that the camera images are captured by the same camera.
  4. The projection image automatic correction method based on binocular vision according to claim 1, characterized in that computing the first transformation relationship between the source image and the projected image specifically comprises:
    computing the first transformation relationship according to the depth values of the depth map and a matching relationship between the source image and the projected image.
  5. The projection image automatic correction method based on binocular vision according to claim 1, characterized in that obtaining the depth map of the projected image on the projection plane further comprises:
    performing camera calibration on a left camera and a right camera;
    performing stereo rectification on a left camera image captured by the left camera and a right camera image captured by the right camera;
    stereo-matching the left camera image and the right camera image;
    obtaining a disparity map of all corresponding points in the left camera image and the right camera image;
    computing the depth map according to the disparity map.
  6. The projection image automatic correction method based on binocular vision according to claim 1, characterized in that obtaining the corrected image resulting from correction of the distorted image is specifically:
    obtaining an inscribed rectangle of the distorted image.
  7. The projection image automatic correction method based on binocular vision according to claim 6, characterized in that obtaining the inscribed rectangle of the distorted image specifically comprises:
    identifying corner coordinates of four corners formed by four outer contour edges of the distorted image;
    computing midpoint coordinates of the midpoint of each outer contour edge;
    connecting the midpoints of opposite outer contour edges to form an intersection point, the intersection point serving as the centre point of the inscribed rectangle;
    computing the shortest distance from the intersection point to each outer contour edge;
    decomposing the shortest distance along the horizontal axis and the vertical axis of the image coordinate system, twice the length of the horizontal-axis component serving as one side length of the inscribed rectangle and twice the length of the vertical-axis component serving as the other side length of the inscribed rectangle.
  8. The projection image automatic correction method based on binocular vision according to claim 7, characterized by further comprising:
    a step of adjusting the size of the solved inscribed rectangle.
  9. A projection image automatic correction system based on binocular vision, comprising a projector, two cameras and a main control unit, characterized in that the projection image automatic correction system further comprises the following parts communicatively connected with the main control unit:
    a projected-image reconstruction module for obtaining a depth map of the projected image and obtaining a distorted image according to the depth map;
    a first computation module for computing a first transformation relationship between a source image and the projected image;
    an image acquisition module for obtaining a corrected image resulting from correction of the distorted image;
    a second computation module for computing a second transformation relationship between the distorted image and the corrected image;
    a third computation module for computing a correction relationship between the source image and the corrected image according to the first transformation relationship and the second transformation relationship;
    a correction module for correcting, according to the correction relationship, the projected image projected onto a projection plane.
  10. The projection image automatic correction system based on binocular vision according to claim 9, characterized in that the two cameras are arranged symmetrically on the two sides of a light-exit surface of the projector, and the field-of-view range of the two cameras is larger than the projection range of the projector.
PCT/CN2019/129586 2019-10-30 2019-12-28 Projection image automatic correction method and system based on binocular vision WO2021082264A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/395,649 US11606542B2 (en) 2019-10-30 2021-08-06 Projection image automatic correction method and system based on binocular vision

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911045424.9A CN110830781B (zh) Projection image automatic correction method and system based on binocular vision
CN201911045424.9 2019-10-30

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/395,649 Continuation US11606542B2 (en) 2019-10-30 2021-08-06 Projection image automatic correction method and system based on binocular vision

Publications (1)

Publication Number Publication Date
WO2021082264A1 true WO2021082264A1 (zh) 2021-05-06

Also Published As

Publication number Publication date
US20210368147A1 (en) 2021-11-25
US11606542B2 (en) 2023-03-14
CN110830781A (zh) 2020-02-21
CN110830781B (zh) 2021-03-23
