WO2021208933A1 - Image correction method and device for a camera - Google Patents

Image correction method and device for a camera

Info

Publication number
WO2021208933A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
speckle
camera
sub-pixel
Prior art date
Application number
PCT/CN2021/087040
Other languages
English (en)
French (fr)
Inventor
周凯
尹首一
唐士斌
欧阳鹏
李秀冬
王博
Original Assignee
北京清微智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京清微智能科技有限公司
Priority to US 17/488,502 (published as US20220036521A1)
Publication of WO2021208933A1

Classifications

    • G06T5/80 Geometric correction
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30244 Camera pose
    • Y02T10/40 Engine management systems

Definitions

  • the present invention relates to the field of image processing, in particular to an image correction method and device for a camera.
  • Depth map estimation is an important research direction in the field of stereo vision research, which is widely used in the fields of intelligent security, autonomous driving, human-computer interaction, and mobile payment.
  • the existing depth map estimation methods mainly include binocular camera method, structured light and binocular camera combination method, structured light and monocular camera combination method, and TOF (time of flight) method.
  • the method based on the combination of speckle structured light projection and monocular camera has been widely used due to its simple structure, lower cost and power consumption, and higher accuracy.
  • the method based on the combination of speckle structured light projection and monocular camera uses the speckle projector to project the fine speckle pattern onto the surface of the scene object.
  • at the moment of projection, a pre-calibrated camera is used to capture the image of the object carrying the speckle pattern; the image is then matched with a pre-stored reference image, and finally the depth map of the scene is calculated from the matched pixels together with the calibration parameters of the camera.
  • to ensure the simplicity of the algorithm and the accuracy of matching, equipment based on this method generally needs to meet the following structural requirements: the camera and the speckle projector are placed in parallel with the same orientation, and the straight line connecting the optical center of the camera and the center of the projector is parallel to the X-axis of the camera reference frame; however, in actual applications, installation errors in the relative positions of the camera and the speckle projector make it difficult to strictly meet these requirements, which leads to inaccurate depth map estimation results.
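For context (this is standard structured-light triangulation, not specific to this patent): once the ideal geometry above holds, depth follows from the disparity d between the captured speckle image and the reference image via Z = f·B/d, where f is the focal length in pixels and B the camera-projector baseline. A minimal sketch with illustrative numbers:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth from disparity: Z = f * B / d.

    disparity_px: per-pixel disparity (in pixels) between the captured
    speckle image and the stored reference image.
    """
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        depth = np.where(d > 0, focal_px * baseline_m / d, np.inf)
    return depth

# Illustrative numbers: 600 px focal length, 5 cm camera-projector baseline.
depth = depth_from_disparity([10.0, 20.0, 30.0], focal_px=600.0, baseline_m=0.05)
```

This inverse relationship between disparity and depth is why any residual misalignment of the baseline with the camera X-axis, as described above, directly corrupts the estimated depth.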
  • the embodiments of the present application provide an image correction method and device for a camera, so as to achieve high precision requirements with a simpler structure and lower cost and power consumption.
  • an embodiment of the present application provides an image correction method for a camera.
  • the method includes: a camera collects speckle patterns on two planes at different distances to obtain a first planar speckle image and a second planar speckle image, wherein the speckle pattern is projected by a speckle projector, and the camera and the speckle projector have the same orientation and a fixed relative position; the first planar speckle image and the second planar speckle image are matched through an image matching algorithm to obtain sub-pixel matching points; a mapping matrix between first physical coordinates and second physical coordinates is obtained according to the first physical coordinates of the sub-pixel matching points on the first planar speckle image and the corresponding second physical coordinates on the second planar speckle image; the direction vector of the center of the speckle projector in the camera reference frame is obtained according to the mapping matrix; the coordinate axis directions of the camera reference frame are adjusted so that the horizontal axis is aligned with the direction vector, and the imaging matrix of the camera is updated; the target scene image is mapped through the imaging matrix to obtain a corrected image.
  • in some embodiments, the method further includes: according to the pixel coordinates of the sub-pixel matching points, respectively calculating, through the intrinsic parameter conversion method, the first physical coordinates of the sub-pixel matching points on the first planar speckle image and the corresponding second physical coordinates on the second planar speckle image.
  • in some embodiments, obtaining the mapping matrix between the first physical coordinates and the second physical coordinates according to the first physical coordinates of the sub-pixel matching points on the first planar speckle image and the corresponding second physical coordinates on the second planar speckle image includes: calculating the mapping matrix according to the following formula:

    λ_i (u′_i, v′_i, 1)^T = H (u_i, v_i, 1)^T,  i = 1, …, N

  • (u_i, v_i)^T represents the first physical coordinates, (u′_i, v′_i)^T represents the second physical coordinates, and λ_i is a scale factor.
  • the mapping matrix is expressed as follows:

    H = I + μ v a^T

  • H is the mapping matrix, I is the 3×3 identity matrix, μ is a scalar, v is the homogeneous coordinate corresponding to the projection position of the center of the speckle projector on the camera and also the direction vector of the center of the speckle projector in the camera reference frame, and a is the homogeneous representation of another two-dimensional vector.
  • in some embodiments, mapping the target scene image through the imaging matrix to obtain the corrected image includes: calculating the gray level of each sub-pixel point on the target scene image by an interpolation method, and assigning the gray level to the corresponding pixel on the corrected image.
  • the embodiment of the present application also provides an image correction device for a camera.
  • the device includes an image collector, a matcher, an acquisition device, a calculation device, an analysis device, and a processor.
  • the image collector is configured to collect speckle patterns on two planes located at different positions to obtain a first plane speckle image and a second plane speckle image.
  • the speckle pattern is projected by the speckle projector, and the image collector and the speckle projector have the same orientation and a fixed relative position.
  • the matcher is configured to match the first planar speckle image and the second planar speckle image through an image matching algorithm to obtain sub-pixel matching points.
  • the acquiring device is configured to obtain the mapping matrix between the first physical coordinates and the second physical coordinates according to the first physical coordinates of the sub-pixel matching points on the first planar speckle image and the corresponding second physical coordinates on the second planar speckle image.
  • the computing device is configured to obtain the direction vector of the center of the speckle projector in the camera reference system according to the mapping matrix.
  • the analysis device is configured to adjust the coordinate axis direction of the camera reference system, align the horizontal axis direction with the direction vector, and update the camera's imaging matrix.
  • the processor is configured to map the target scene image through the imaging matrix to obtain a corrected image.
  • in some embodiments, the acquiring device is configured to: according to the pixel coordinates of the sub-pixel matching points, respectively calculate, through the intrinsic parameter conversion method, the first physical coordinates of the sub-pixel matching points on the first planar speckle image and the corresponding second physical coordinates on the second planar speckle image.
  • in some embodiments, the acquiring device is configured to calculate the mapping matrix according to the following formula:

    λ_i (u′_i, v′_i, 1)^T = H (u_i, v_i, 1)^T,  i = 1, …, N

  • (u_i, v_i)^T denotes the first physical coordinates, (u′_i, v′_i)^T denotes the second physical coordinates, and λ_i is a scale factor.
  • the mapping matrix is expressed as follows:

    H = I + μ v a^T

  • H is the mapping matrix, I is the 3×3 identity matrix, μ is a scalar, v is the homogeneous coordinate corresponding to the projection position of the center of the speckle projector on the camera and also the direction vector of the center of the speckle projector in the camera reference frame, and a is the homogeneous representation of another two-dimensional vector.
  • the processor is configured to calculate the grayscale of each sub-pixel on the target scene image by using an interpolation method, and assign the grayscale to the corresponding pixel on the corrected image.
  • An embodiment of the present invention also provides an electronic device including a memory, a processor, and a computer program stored on the memory and capable of running on the processor, and the processor implements the above method when the computer program is executed.
  • the embodiment of the present application also provides a computer-readable storage medium, and the computer-readable storage medium stores a computer program for executing the above method.
  • the beneficial technical effect of the embodiments of the present application is that they can improve image calibration accuracy with a relatively simple structure and at lower cost and power consumption, providing better data support for subsequent technologies such as image recognition.
  • Fig. 1 is a schematic flowchart of an image correction method for a camera according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of an application example of an image correction method for a camera according to an embodiment of the present application
  • Fig. 3 is a schematic diagram of the principle of an image correction method for a camera according to an embodiment of the present application
  • FIG. 4 is a schematic structural diagram of an image correction device for a camera according to an embodiment of the present application;
  • Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • this application provides an image correction method for a camera.
  • the method includes the following steps.
  • S101: The camera collects speckle patterns on two planes at different distances to obtain a first planar speckle image and a second planar speckle image; the speckle pattern is projected by a speckle projector, and the camera and the speckle projector have the same orientation and a fixed relative position.
  • S102: Match the first planar speckle image and the second planar speckle image through an image matching algorithm to obtain sub-pixel matching points.
  • S103: Obtain the mapping matrix between the first physical coordinates and the second physical coordinates according to the first physical coordinates of the sub-pixel matching points on the first planar speckle image and the corresponding second physical coordinates on the second planar speckle image.
  • S104: Obtain the direction vector of the center of the speckle projector in the camera reference frame according to the mapping matrix.
  • S105: Adjust the coordinate axis directions of the camera reference frame so that the horizontal axis is aligned with the direction vector, and update the imaging matrix of the camera.
  • S106: Map the target scene image through the imaging matrix to obtain a corrected image.
  • in some embodiments, the two flat panels cover the camera's entire field of view, so that the collected first planar speckle image and second planar speckle image contain more speckle elements and the subsequent calculation results are more accurate.
  • a flat panel smaller than the camera's field of view can also be used; it is only necessary to ensure that both the first planar speckle image and the second planar speckle image contain speckles (on the flat panel). Those skilled in the art can choose according to actual needs, and this application does not further limit this.
  • this application may include only one camera and only one speckle projector, where the camera and the speckle projector have the same orientation and their relative positions are fixed.
  • a white flat panel can be placed directly in front of the camera and the speckle projector; the speckle pattern is projected onto the white flat panel by the speckle projector, and the flat-panel image is captured by the camera. By placing the flat panel at two different distances, two flat-panel images with speckles can be obtained, namely the first planar speckle image and the second planar speckle image.
  • in some embodiments, the method further includes: according to the pixel coordinates of the sub-pixel matching points, respectively calculating, through the intrinsic parameter conversion method, the first physical coordinates of the sub-pixel matching points on the first planar speckle image and the corresponding second physical coordinates on the second planar speckle image.
  • according to the first physical coordinates and the second physical coordinates, the mapping matrix between the first physical coordinates on the first planar speckle image and the corresponding second physical coordinates on the second planar speckle image can be calculated by projective geometry theory, for example by the following formula:

    λ_i (u′_i, v′_i, 1)^T = H (u_i, v_i, 1)^T,  i = 1, …, N

  • (u_i, v_i)^T represents the first physical coordinates, (u′_i, v′_i)^T represents the second physical coordinates, and λ_i is a scale factor.
  • the mapping matrix can be expressed as follows:

    H = I + μ v a^T

  • H is the mapping matrix, I is the 3×3 identity matrix, μ is a scalar, v is the homogeneous coordinate corresponding to the projection position of the center of the speckle projector on the camera and also the direction vector of the center of the speckle projector in the camera reference frame, and a is the homogeneous representation of another two-dimensional vector.
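The patent does not spell out how the mapping matrix is solved from the matched coordinate pairs; one conventional choice is a direct linear transform (DLT) least-squares fit. A minimal sketch (the function name and the synthetic check are illustrative, not from the patent):

```python
import numpy as np

def estimate_homography(pts1, pts2):
    """Least-squares DLT estimate of H such that pts2 ~ H @ pts1 (up to scale).

    pts1, pts2: (N, 2) arrays of matched physical coordinates
    (u_i, v_i) and (u'_i, v'_i), with N >= 4.
    """
    rows = []
    for (u, v), (up, vp) in zip(pts1, pts2):
        rows.append([u, v, 1, 0, 0, 0, -up * u, -up * v, -up])
        rows.append([0, 0, 0, u, v, 1, -vp * u, -vp * v, -vp])
    M = np.asarray(rows, dtype=float)
    # The singular vector of the smallest singular value spans the null
    # space of M and gives the entries of H up to scale.
    _, _, vt = np.linalg.svd(M)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Check on synthetic points mapped by a known rank-one-update homography
# of the form H = I + mu * v * a^T, as in the patent.
H_true = np.eye(3) + 0.1 * np.outer([1.0, 0.2, 0.1], [0.3, -0.5, 1.0])
pts1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.3]])
ph = np.c_[pts1, np.ones(len(pts1))] @ H_true.T
pts2 = ph[:, :2] / ph[:, 2:]
H_est = estimate_homography(pts1, pts2)
```

In practice many more than four matches are available from the dense speckle pattern, and the least-squares solution averages out matching noise.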
  • in some embodiments, mapping the target scene image through the imaging matrix to obtain the corrected image includes: calculating the gray level of each sub-pixel point on the target scene image using an interpolation method, and assigning the gray level to the corresponding pixels on the corrected image. This method can therefore improve the accuracy of the image.
  • FIG. 2 is a schematic diagram of an application example of an image correction method for a camera according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of the principle of an image correction method for a camera according to an embodiment of the present application.
  • the same white flat panel is placed at two different distances directly in front of the camera and the speckle projector; at each distance, the flat panel covers as much of the camera's field of view as possible.
  • the speckle pattern is projected onto the flat panel by the speckle projector, and two planar speckle images are captured by the camera, namely the first planar speckle image 1 and the second planar speckle image 2.
  • sub-pixel matching is performed between the first planar speckle image 1 and the second planar speckle image 2: for each pixel point in the first planar speckle image 1, its corresponding sub-pixel matching point in the second planar speckle image 2 is obtained.
  • the matching method can be block matching or other existing matching methods.
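The patent leaves the matcher open ("block matching or other existing matching methods"); the following is an illustrative sketch of one such choice, zero-mean normalized cross-correlation (ZNCC) block matching with parabolic sub-pixel refinement over a horizontal search range. The function names and the 1-D search are a simplification for illustration, not taken from the patent:

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_subpixel(img1, img2, x, y, half=3, search=5):
    """Match the block around (x, y) in img1 along a horizontal search
    range in img2; refine the best integer shift with a parabola fit."""
    block = img1[y - half:y + half + 1, x - half:x + half + 1]
    scores = []
    for dx in range(-search, search + 1):
        cand = img2[y - half:y + half + 1, x + dx - half:x + dx + half + 1]
        scores.append(zncc(block, cand))
    scores = np.asarray(scores)
    k = int(scores.argmax())
    dx = float(k - search)
    if 0 < k < len(scores) - 1:  # parabolic sub-pixel refinement
        s0, s1, s2 = scores[k - 1], scores[k], scores[k + 1]
        denom = s0 - 2 * s1 + s2
        if denom != 0:
            dx += 0.5 * (s0 - s2) / denom
    return x + dx, float(y)

# Synthetic check: img2 is img1 shifted 2 px to the right.
rng = np.random.default_rng(0)
img1 = rng.random((40, 40))
img2 = np.roll(img1, 2, axis=1)
xm, ym = match_subpixel(img1, img2, x=20, y=20)
```

The dense, high-contrast speckle texture is precisely what makes such correlation-based matching reliable on otherwise featureless flat panels.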
  • (x_i, y_i)^T is the pixel coordinate of the pixel point p_i in the first planar speckle image 1.
  • (x′_i, y′_i)^T is the pixel coordinate of the sub-pixel matching point p′_i corresponding to p_i in the second planar speckle image 2.
  • (u_i, v_i)^T is the first physical coordinate corresponding to the pixel p_i (also called the first physical imaging coordinate), and (u′_i, v′_i)^T is the second physical coordinate corresponding to p′_i (also called the second physical imaging coordinate).
  • A is the intrinsic parameter matrix of the camera.
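As a sketch of the "intrinsic parameter conversion", assuming the usual pinhole model in which physical imaging coordinates are obtained by applying A⁻¹ to homogeneous pixel coordinates (the intrinsic values below are illustrative, not from the patent):

```python
import numpy as np

def pixel_to_physical(pixel_xy, A):
    """Convert pixel coordinates (x, y) to physical imaging coordinates
    (u, v) via the intrinsic matrix A: (u, v, 1)^T ~ A^{-1} (x, y, 1)^T."""
    x, y = pixel_xy
    u, v, w = np.linalg.inv(A) @ np.array([x, y, 1.0])
    return u / w, v / w

# Illustrative intrinsics: 600 px focal lengths, principal point (320, 240).
A = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
u, v = pixel_to_physical((920.0, 840.0), A)
```

Applying this conversion to both (x_i, y_i)^T and (x′_i, y′_i)^T yields the coordinate pairs (u_i, v_i)^T and (u′_i, v′_i)^T from which the mapping matrix is estimated.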
  • H may be a 3×3 homography matrix. According to the related theory of projective geometry, H can be expressed as follows:

    H = I + μ v a^T

  • I is a 3×3 identity matrix, μ is a scalar, and v and a are both homogeneous representations of two-dimensional vectors.
  • v corresponds to the homogeneous coordinates of the projection position of the center of the speckle projector on the camera, and can also be used to express the direction vector of the center of the speckle projector in the camera reference frame. Therefore, H can be expressed by five independent parameters.
  • the direction vector v of the center of the speckle projector in the camera reference frame is obtained from the mapping matrix H: after H is obtained through the foregoing calculation, the vector v in it can be extracted accordingly.
  • the imaging matrix of the camera before correction can be expressed as:

    P = A [R | t]

  • A and t are the camera's intrinsic parameter matrix and translation vector, respectively, and R is the rotation matrix of the camera before correction.
  • the X-axis direction r_1 of the corrected camera reference frame is aligned with the direction vector v: r_1 = v / ||v||.
  • the Y-axis direction r_2 of the corrected camera reference frame is computed as r_2 = (z × r_1) / ||z × r_1||, where z = (0, 0, 1)^T is the Z-axis of the camera reference frame before correction.
  • the Z-axis direction r_3 of the corrected camera reference frame is computed as r_3 = r_1 × r_2.
  • the intrinsic parameter matrix A and the translation vector t remain unchanged before and after the correction; therefore, with R′ = [r_1 r_2 r_3]^T, the imaging matrix of the corrected camera can be obtained as:

    P′ = A [R′ | t]
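The axis construction above can be sketched as follows. The pixel-level homography T = A·R′·A⁻¹ is an assumption about how the corrected imaging matrix translates into the pixel-coordinate mapping T discussed next, and the intrinsic values are illustrative:

```python
import numpy as np

def rectifying_rotation(v):
    """Rotation whose rows are the corrected camera axes: X aligned with
    v, Y orthogonal to both v and the old Z axis, Z completing the frame."""
    r1 = v / np.linalg.norm(v)
    z_old = np.array([0.0, 0.0, 1.0])
    r2 = np.cross(z_old, r1)
    r2 /= np.linalg.norm(r2)
    r3 = np.cross(r1, r2)
    return np.stack([r1, r2, r3])

def pixel_mapping(A, R_rect):
    """Homography taking original pixel coordinates to corrected ones."""
    return A @ R_rect @ np.linalg.inv(A)

# Direction vector close to, but not exactly on, the original X-axis.
v = np.array([1.0, 0.02, 0.01])
R = rectifying_rotation(v)
A = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
T = pixel_mapping(A, R)
```

Because v is nearly aligned with the X-axis when the installation error is small, R is close to the identity and T only slightly resamples the image.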
  • the original scene image is mapped to a new image, that is, the corrected image.
  • the matrix T maps the pixel coordinates on the original scene image to the corrected image.
  • the method of generating the corrected image is as follows: for each pixel position on the corrected image, the corresponding pixel position (x, y)^T on the original image is calculated through the inverse of T. Since this position is generally not an integer, the grayscale at (x, y)^T is computed by grayscale interpolation and then assigned to that pixel position on the corrected image. Performing this operation for each pixel yields the entire corrected image.
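The inverse-warping procedure can be sketched as follows, with bilinear interpolation chosen as the grayscale interpolation method (the patent does not fix a particular one):

```python
import numpy as np

def warp_image(img, T, out_shape):
    """Generate the corrected image: each corrected pixel is mapped back
    through T^{-1} to the original image and interpolated bilinearly."""
    T_inv = np.linalg.inv(T)
    h, w = out_shape
    out = np.zeros((h, w), dtype=float)
    for yy in range(h):
        for xx in range(w):
            x, y, s = T_inv @ np.array([xx, yy, 1.0])
            x, y = x / s, y / s
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            if 0 <= x0 < img.shape[1] - 1 and 0 <= y0 < img.shape[0] - 1:
                fx, fy = x - x0, y - y0  # bilinear weights
                out[yy, xx] = ((1 - fx) * (1 - fy) * img[y0, x0]
                               + fx * (1 - fy) * img[y0, x0 + 1]
                               + (1 - fx) * fy * img[y0 + 1, x0]
                               + fx * fy * img[y0 + 1, x0 + 1])
    return out

# Sanity check: the identity mapping reproduces the input (interior pixels).
img = np.arange(25, dtype=float).reshape(5, 5)
out = warp_image(img, np.eye(3), (5, 5))
```

Iterating over destination pixels and sampling the source (rather than the reverse) guarantees the corrected image has no holes.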
  • the image can be equivalent to the image taken under the condition that the camera and the speckle projector are in an ideal positional relationship, that is, the line connecting the optical center of the camera and the center of the speckle projector is parallel to the X axis of the camera reference frame.
  • the present application also provides an image correction device for a camera.
  • the device includes an image collector, a matcher, an acquisition device, a calculation device, an analysis device, and a processor.
  • the image collector is configured to collect speckle patterns on two planes at different distances to obtain a first planar speckle image and a second planar speckle image; the speckle pattern is projected by the speckle projector, and the camera and the speckle projector have the same orientation and a fixed relative position.
  • the matcher is configured to match the first planar speckle image and the second planar speckle image through an image matching algorithm to obtain sub-pixel matching points.
  • the obtaining device is configured to obtain the mapping matrix between the first physical coordinates and the second physical coordinates according to the first physical coordinates of the sub-pixel matching points on the first planar speckle image and the corresponding second physical coordinates on the second planar speckle image.
  • the computing device is configured to obtain the direction vector of the center of the speckle projector in the camera reference system according to the mapping matrix.
  • the analysis device is configured to adjust the coordinate axis direction of the camera reference system, align the horizontal axis direction with the direction vector, and update the camera's imaging matrix.
  • the processor is configured to map the target scene image through the imaging matrix to obtain a corrected image.
  • in some embodiments, the obtaining device is configured to: according to the pixel coordinates of the sub-pixel matching points, respectively calculate, through the intrinsic parameter conversion method, the first physical coordinates of the sub-pixel matching points on the first planar speckle image and the corresponding second physical coordinates on the second planar speckle image.
  • in some embodiments, the acquisition device is configured to calculate, through the theory of projective geometry, the mapping matrix between the first physical coordinates on the first planar speckle image and the corresponding second physical coordinates on the second planar speckle image, for example using the following formula:

    λ_i (u′_i, v′_i, 1)^T = H (u_i, v_i, 1)^T,  i = 1, …, N

  • (u_i, v_i)^T represents the first physical coordinates, (u′_i, v′_i)^T represents the second physical coordinates, and λ_i is a scale factor.
  • the mapping matrix can be expressed as follows:

    H = I + μ v a^T

  • H is the mapping matrix, I is the 3×3 identity matrix, μ is a scalar, v is the homogeneous coordinate corresponding to the projection position of the center of the speckle projector on the camera and also the direction vector of the center of the speckle projector in the camera reference frame, and a is the homogeneous representation of another two-dimensional vector.
  • the processor is configured to calculate the gray level of each sub-pixel point on the target scene image by using an interpolation method, and assign the gray level to the corresponding pixel point on the corrected image.
  • the present application also provides an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor; the processor implements the method according to the embodiments of the present application when executing the computer program.
  • the present invention also provides a computer-readable storage medium storing a computer program for executing the method according to the embodiment of the present application.
  • the beneficial technical effect of the present invention is that it can achieve higher calibration accuracy with a relatively simple structure and at lower cost and power consumption, providing better data support for subsequent technologies such as image recognition.
  • the electronic device 600 may further include: a communication module 110, an input unit 120, an audio processing unit 130, a display 160, and a power supply 170. It is worth noting that the electronic device 600 does not necessarily include all the components shown in FIG. 5; in addition, the electronic device 600 may also include components not shown in FIG. 5, and reference may be made to the prior art.
  • the central processing unit 100 is sometimes called a controller or operating control, and may include a microprocessor or other processor devices and/or logic devices.
  • the central processing unit 100 receives inputs and controls the operation of the various components of the electronic device 600.
  • the memory 140 may be, for example, one or more of a cache, a flash memory, a hard drive, a removable medium, a volatile memory, a non-volatile memory, or other suitable devices.
  • the memory 140 can store the above-mentioned related information, and can also store a program for processing the related information.
  • the central processing unit 100 can execute the program stored in the memory 140 to realize information storage or processing.
  • the input unit 120 provides input to the central processing unit 100.
  • the input unit 120 is, for example, a button or a touch input device.
  • the power supply 170 is used to provide power to the electronic device 600.
  • the display 160 is used for displaying display objects such as images and characters.
  • the display may be, for example, an LCD display, but it is not limited thereto.
  • the memory 140 may be a solid-state memory, for example, a read-only memory (ROM), a random access memory (RAM), a SIM card, etc. It may also be a memory that retains information even when powered off, and that can be selectively erased and provided with more data; an example of such memory is sometimes referred to as an EPROM or the like.
  • the memory 140 may also be some other type of device.
  • the memory 140 includes a buffer memory 141 (sometimes referred to as a buffer).
  • the memory 140 may include an application/function storage unit 142, which is used to store application programs and function programs or to execute the operation flow of the electronic device 600 through the central processing unit 100.
  • the memory 140 may further include a data storage unit 143 for storing data, such as contacts, digital data, pictures, sounds, and/or any other data used by electronic devices.
  • the driver storage part 144 of the memory 140 may include various drivers for the communication function of the electronic device and/or for executing other functions of the electronic device (such as a messaging application, an address book application, etc.).
  • the communication module 110 is a transmitter/receiver 110 that transmits and receives signals via the antenna 111.
  • the communication module (transmitter/receiver) 110 is coupled to the central processing unit 100 to provide input signals and receive output signals, which can be the same as that of a conventional mobile communication terminal.
  • multiple communication modules 110 may be provided in the same electronic device, such as a cellular network module, a Bluetooth module, and/or a wireless local area network module.
  • the communication module (transmitter/receiver) 110 is also coupled to the speaker 131 and the microphone 132 via the audio processor 130 to provide audio output via the speaker 131 and receive audio input from the microphone 132, thereby realizing general telecommunication functions.
  • the audio processor 130 may include any suitable buffers, decoders, amplifiers, etc.
  • the audio processor 130 is also coupled to the central processing unit 100, so that sound can be recorded on the device via the microphone 132, and sound stored on the device can be played via the speaker 131.
  • the embodiments of the present invention can be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
  • these computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, which implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.


Abstract

The present application provides a single-camera epipolar rectification method. The method includes: a camera collects speckle patterns on two planes at different distances to obtain a first planar speckle image and a second planar speckle image; the first and second planar speckle images are matched through an image matching algorithm to obtain sub-pixel matching points; a mapping matrix between first physical coordinates and second physical coordinates is obtained according to the first physical coordinates of the sub-pixel matching points on the first planar speckle image and the corresponding second physical coordinates on the second planar speckle image; the direction vector of the center of the speckle projector in the camera reference frame is obtained according to the mapping matrix; the coordinate axis directions of the camera reference frame are adjusted so that the horizontal axis is aligned with the direction vector, and the imaging matrix of the camera is updated; the target scene image is mapped through the imaging matrix to obtain a corrected image.

Description

Image correction method and device for a camera
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based on and claims priority to Chinese patent application No. 2020102980419, filed on April 16, 2020, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present invention relates to the field of image processing, and in particular to an image correction method and device for a camera.
BACKGROUND
Depth map estimation is an important research direction in the field of stereo vision and is widely used in fields such as intelligent security, autonomous driving, human-computer interaction, and mobile payment. Depending on the equipment used, existing depth map estimation methods mainly include the binocular camera method, the method combining structured light with a binocular camera, the method combining structured light with a monocular camera, and the TOF (time of flight) method. Among them, the method based on the combination of speckle structured light projection and a monocular camera has been widely applied due to its simple structure, low cost and power consumption, and high accuracy.
The method based on the combination of speckle structured light projection and a monocular camera uses a speckle projector to project a fine speckle pattern onto the surface of scene objects; at the moment of projection, a pre-calibrated camera captures an image of the objects carrying the speckle pattern; the image is then matched with a pre-stored reference image, and finally the depth map of the scene is computed from the matched pixels together with the camera's calibration parameters. To ensure the simplicity of the algorithm and the accuracy of matching, equipment based on this method generally needs to meet the following structural requirements: the camera and the speckle projector are placed in parallel with the same orientation, and the line connecting the camera's optical center and the projector's center is parallel to the X-axis of the camera reference frame. In practice, however, installation errors in the relative positions of the camera and the speckle projector make it difficult to strictly meet these requirements, so the depth map estimation results are not accurate enough.
发明内容
本申请实施例提供一种用于相机的图像校正方法和装置,以利用较简单的结构和较低的成本和功耗实现高精度的要求。
为达上述目的,本申请实施例提供了一种用于相机的图像校正方法。该方法包含:相机采集位于不同距离的两个平面上的散斑图案获得第一平面散斑图像和第二平面散斑图像,其中散斑图案由散斑投射器投射,相机和散斑投射器的朝向相同且相对位置固定;通过图像匹配算法匹配第一平面散斑图像和第二平面散斑图像,获得亚像素匹配点;根据亚像素匹配点在第一平面散斑图像上对应的第一物理坐标和在第二平面散斑图像上对应的第二物理坐标, 获得第一物理坐标和第二物理坐标之间的映射矩阵;根据映射矩阵获得散斑投射器的中心在相机参考系中的方向向量;调节相机参考系的坐标轴方向,使水平轴方向与方向向量对齐,并更新相机的成像矩阵;通过所述成像矩阵将目标场景图像映射获得校正后的图像。在一些实施例中,方法还包括:根据亚像素匹配点的像素坐标,通过内参数转换法分别计算获得亚像素匹配点在第一平面散斑图像上对应的第一物理坐标和在第二平面散斑图像上对应的第二物理坐标。
In some embodiments, obtaining the mapping matrix between the first physical coordinates and the second physical coordinates comprises computing the mapping matrix according to:

$$\begin{pmatrix} u'_i \\ v'_i \\ 1 \end{pmatrix} \cong H \begin{pmatrix} u_i \\ v_i \\ 1 \end{pmatrix}, \quad i = 1, 2, \ldots, n,$$

where $(u_i, v_i)^T$ denotes the first physical coordinates and $(u'_i, v'_i)^T$ denotes the second physical coordinates;
the mapping matrix takes the form:

$$H = I + \mu\, v a^T,$$

where H is the mapping matrix, I is the 3×3 identity matrix, μ is a scalar, v is the homogeneous coordinate of the projection position of the center of the speckle projector on the camera and likewise the direction vector of the center of the speckle projector in the camera reference frame, and a is the homogeneous representation of another two-dimensional vector.
In some embodiments, mapping the target scene image through the imaging matrix to obtain the corrected image comprises: computing the gray level of each sub-pixel point on the target scene image by interpolation, and assigning the gray level to the corresponding pixel of the corrected image.
An embodiment of the present application further provides an image correction apparatus for a camera. The apparatus comprises an image collector, a matcher, an acquisition device, a computing device, an analysis device, and a processor.
The image collector is configured to capture a speckle pattern located on two planes at different positions to obtain a first planar speckle image and a second planar speckle image. The speckle pattern is projected by a speckle projector, and the image collector and the speckle projector face the same direction with a fixed relative position.
The matcher is configured to match the first planar speckle image and the second planar speckle image by an image matching algorithm to obtain sub-pixel matching points.
The acquisition device is configured to obtain, from first physical coordinates of the sub-pixel matching points on the first planar speckle image and second physical coordinates of the sub-pixel matching points on the second planar speckle image, a mapping matrix between the first physical coordinates and the second physical coordinates.
The computing device is configured to obtain, from the mapping matrix, a direction vector of the center of the speckle projector in the camera reference frame.
The analysis device is configured to adjust the coordinate axis directions of the camera reference frame so that the horizontal axis direction is aligned with the direction vector, and to update the imaging matrix of the camera.
The processor is configured to map a target scene image through the imaging matrix to obtain a corrected image.
In some embodiments, the acquisition device is configured to: compute, from the pixel coordinates of the sub-pixel matching points and via an intrinsic-parameter conversion, the first physical coordinates of the sub-pixel matching points on the first planar speckle image and the second physical coordinates of the sub-pixel matching points on the second planar speckle image.
In some embodiments, the acquisition device is configured to compute the mapping matrix according to:

$$\begin{pmatrix} u'_i \\ v'_i \\ 1 \end{pmatrix} \cong H \begin{pmatrix} u_i \\ v_i \\ 1 \end{pmatrix}, \quad i = 1, 2, \ldots, n,$$

where $(u_i, v_i)^T$ denotes the first physical coordinates and $(u'_i, v'_i)^T$ denotes the second physical coordinates.
The mapping matrix takes the form:

$$H = I + \mu\, v a^T,$$

where H is the mapping matrix, I is the 3×3 identity matrix, μ is a scalar, v is the homogeneous coordinate of the projection position of the center of the speckle projector on the camera and likewise the direction vector of the center of the speckle projector in the camera reference frame, and a is the homogeneous representation of another two-dimensional vector.
In some embodiments, the processor is configured to: compute the gray level of each sub-pixel point on the target scene image by interpolation, and assign the gray level to the corresponding pixel of the corrected image.
An embodiment of the present invention further provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the above method when executing the computer program.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program for executing the above method.
A beneficial technical effect of the embodiments of the present application is that the calibration accuracy of images can be improved with a relatively simple structure at relatively low cost and power consumption, providing better data support for subsequent techniques such as image recognition.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the present invention and constitute a part of this application; they do not limit the present invention. In the drawings:
Fig. 1 is a schematic flowchart of an image correction method for a camera according to an embodiment of the present application;
Fig. 2 is a schematic diagram of an application example of the image correction method for a camera according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the principle of the image correction method for a camera according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an image correction apparatus according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present invention are described in detail below with reference to the drawings and examples, so that the process by which the invention applies technical means to solve technical problems and achieve technical effects can be fully understood and implemented accordingly. It should be noted that, as long as no conflict arises, the embodiments of this application and the features in each embodiment may be combined with one another, and all resulting technical solutions fall within the protection scope of this application.
In addition, the steps shown in the flowcharts of the drawings may be executed in a computer system such as a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from that given here.
Referring to Fig. 1, the present application provides an image correction method for a camera. The method includes the following steps.
S101: the camera captures a speckle pattern located on two planes at different distances to obtain a first planar speckle image and a second planar speckle image, the speckle pattern being projected by a speckle projector, the camera and the speckle projector facing the same direction with a fixed relative position.
S102: the first planar speckle image and the second planar speckle image are matched by an image matching algorithm to obtain sub-pixel matching points.
S103: from first physical coordinates of the sub-pixel matching points on the first planar speckle image and second physical coordinates of the sub-pixel matching points on the second planar speckle image, a mapping matrix between the first physical coordinates and the second physical coordinates is obtained.
S104: from the mapping matrix, a direction vector of the center of the speckle projector in the camera reference frame is obtained.
S105: the coordinate axis directions of the camera reference frame are adjusted so that the horizontal axis direction is aligned with the direction vector, and the imaging matrix of the camera is updated.
S106: a target scene image is mapped through the imaging matrix to obtain a corrected image.
Here, the two flat plates cover the entire field of view of the camera, so that the captured first and second planar speckle images contain more speckle elements, which further improves the accuracy of the subsequent computations. Of course, depending on actual requirements, plates smaller than the camera's field of view may also be used; it suffices that both the first and the second planar speckle image contain speckle (on the plates). Those skilled in the art may choose according to actual needs, and this application does not further limit this.
In practice, this application may involve only one camera and only one speckle projector, where the camera and the speckle projector face the same direction and their relative position is fixed.
In the method, a white flat plate may be placed directly in front of the camera and the speckle projector; the speckle projector projects the speckle pattern onto the white plate, and the camera photographs the plate. In some examples, by placing the plate at two different distances directly in front of the camera and the speckle projector and photographing it with the camera at each distance, two plate images carrying speckle are obtained, namely the first planar speckle image and the second planar speckle image.
In some embodiments of the present application, the method further comprises: computing, from the pixel coordinates of the sub-pixel matching points and via an intrinsic-parameter conversion, the first physical coordinates of the sub-pixel matching points on the first planar speckle image and the second physical coordinates of the sub-pixel matching points on the second planar speckle image. From the first and second physical coordinates, the mapping matrix between them can be computed using projective geometry. For example, the mapping matrix may be computed from:

$$\begin{pmatrix} u'_i \\ v'_i \\ 1 \end{pmatrix} \cong H \begin{pmatrix} u_i \\ v_i \\ 1 \end{pmatrix}, \quad i = 1, 2, \ldots, n,$$

where $(u_i, v_i)^T$ denotes the first physical coordinates and $(u'_i, v'_i)^T$ denotes the second physical coordinates;
and the mapping matrix may be expressed as:

$$H = I + \mu\, v a^T,$$

where H is the mapping matrix, I is the 3×3 identity matrix, μ is a scalar, v is the homogeneous coordinate of the projection position of the center of the speckle projector on the camera and likewise the direction vector of the center of the speckle projector in the camera reference frame, and a is the homogeneous representation of another two-dimensional vector.
In some embodiments of the present application, mapping the target scene image through the imaging matrix to obtain the corrected image comprises: computing the gray level of each sub-pixel point on the target scene image by interpolation, and assigning the gray level to the corresponding pixel of the corrected image. In this way, the method can improve the accuracy of the image.
To explain the image correction method provided by this application more clearly, the method is described below as a whole with a concrete example. Those skilled in the art will understand that the embodiments of this application are merely intended to facilitate understanding of combined implementations of the above method and do not further limit it.
Referring to Figs. 2 and 3, Fig. 2 is a schematic diagram of an application example of the image correction method for a camera according to an embodiment of the present application, and Fig. 3 is a schematic diagram of the principle of that method.
As shown in Fig. 2, the same white flat plate is placed at two different distances directly in front of the camera and the speckle projector. At each distance, the plate covers as much of the camera's field of view as possible. The speckle projector projects the speckle pattern onto the plate, and the camera captures two plate images carrying speckle, namely the first planar speckle image 1 and the second planar speckle image 2.
In some embodiments, two white flat plates may instead be placed directly in front of the camera at different distances from it, in order to obtain the first planar speckle image 1 and the second planar speckle image 2.
Sub-pixel matching is performed between the first planar speckle image 1 and the second planar speckle image 2. For each pixel in the first planar speckle image 1, its corresponding sub-pixel matching point in the second planar speckle image 2 is obtained. Block matching may be used for this, as may other existing matching methods.
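As one concrete possibility for the block matching mentioned above (a sketch only: the method does not fix a particular matcher, and the window size, SAD cost, and parabolic sub-pixel refinement below are illustrative assumptions, not the patented implementation), a one-dimensional search along a row could look like this:

```python
import numpy as np

def match_subpixel(row_a, row_b, x, win=5):
    """Find the position in row_b best matching the window of row_a
    centred at x (sum-of-absolute-differences cost), refined to
    sub-pixel accuracy by fitting a parabola through the cost at the
    best integer shift and its two neighbours."""
    half = win // 2
    patch = row_a[x - half:x + half + 1]
    lo, hi = half, len(row_b) - half
    costs = np.array([np.abs(patch - row_b[c - half:c + half + 1]).sum()
                      for c in range(lo, hi)])
    k = int(np.argmin(costs))
    best = float(lo + k)
    if 0 < k < len(costs) - 1:              # parabolic sub-pixel refinement
        c0, c1, c2 = costs[k - 1], costs[k], costs[k + 1]
        denom = c0 - 2.0 * c1 + c2
        if denom > 0:
            best += 0.5 * (c0 - c2) / denom
    return best

# Demo: row_b is row_a shifted right by 3 samples, so the window at
# x = 10 in row_a should match near position 13 in row_b.
rng = np.random.default_rng(0)
row_a = rng.random(40)
row_b = np.roll(row_a, 3)
pos = match_subpixel(row_a, row_b, 10)
```

In the actual method the search would run over two-dimensional speckle windows; the 1-D version above only illustrates the cost-minimisation and refinement idea.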
From the pixel coordinates of each pair of matching points, their physical imaging coordinates are computed; this can be realized through an intrinsic-parameter conversion:

$$\begin{pmatrix} u_i \\ v_i \\ 1 \end{pmatrix} = A^{-1} \begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} u'_i \\ v'_i \\ 1 \end{pmatrix} = A^{-1} \begin{pmatrix} x'_i \\ y'_i \\ 1 \end{pmatrix},$$

where $(x_i, y_i)^T$ are the pixel coordinates of a pixel point $p_i$ in the first planar speckle image 1, and $(x'_i, y'_i)^T$ are the pixel coordinates of the matching point $p'_i$ in the second planar speckle image 2 corresponding to $p_i$; $(u_i, v_i)^T$ are the first physical coordinates (also called the first physical imaging coordinates) corresponding to $p_i$, $(u'_i, v'_i)^T$ are the second physical coordinates (also called the second physical imaging coordinates) corresponding to $p'_i$, and A is the intrinsic parameter matrix of the camera.
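Assuming the usual pinhole convention in which pixel coordinates are the intrinsic matrix A applied to the physical imaging coordinates, the conversion above amounts to multiplying homogeneous pixel coordinates by the inverse of A. A minimal sketch (the intrinsic values below are illustrative, not from the patent):

```python
import numpy as np

def pixel_to_physical(pixels, A):
    """Map pixel coordinates (x, y) to physical imaging coordinates
    (u, v) by applying the inverse of the intrinsic matrix A to the
    homogeneous pixel coordinates and dehomogenising."""
    pts = np.column_stack([pixels, np.ones(len(pixels))])   # (x, y, 1)
    phys = (np.linalg.inv(A) @ pts.T).T                     # (u, v, w)
    return phys[:, :2] / phys[:, 2:3]

# Illustrative intrinsics: focal lengths fx = fy = 500 and principal
# point (320, 240).
A = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
uv = pixel_to_physical(np.array([[320.0, 240.0], [820.0, 240.0]]), A)
```

The principal point maps to the origin of the physical imaging plane, and a pixel one focal length to its right maps to (1, 0), as expected under this convention.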
The mapping matrix H from $(u_i, v_i)^T$, $(i = 1, 2, \ldots, n)$ to $(u'_i, v'_i)^T$, $(i = 1, 2, \ldots, n)$ is then computed.
In some embodiments of the present application, H may be a 3×3 homography matrix. According to projective geometry, H can be expressed as:

$$H = I + \mu\, v a^T,$$

where I is the 3×3 identity matrix, μ is a scalar, and v and a are both homogeneous representations of two-dimensional vectors; v corresponds to the homogeneous coordinate of the projection position of the center of the speckle projector on the camera, and can likewise be used to express the direction vector of the center of the speckle projector in the camera reference frame. H can therefore be expressed by 5 mutually independent parameters.
H may be computed by iterative optimization, the objective function of which can be expressed as:

$$\min_{H} \; \sum_{i=1}^{n} \left[ (u'_i - \hat{u}_i)^2 + (v'_i - \hat{v}_i)^2 \right],$$

where $(\hat{u}_i, \hat{v}_i)^T$ is the point obtained by mapping $(u_i, v_i)^T$ through H:

$$\begin{pmatrix} \hat{u}_i \\ \hat{v}_i \\ 1 \end{pmatrix} \cong H \begin{pmatrix} u_i \\ v_i \\ 1 \end{pmatrix}.$$

The direction vector v of the center of the speckle projector in the camera reference frame is obtained from the mapping matrix H. Once H has been computed as described above, the vector v within it is obtained accordingly; v is precisely the direction vector of the projector center in the camera reference frame.
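Given the parameterization described above (H equal to the 3×3 identity plus a scalar μ times the outer product of v and a), the matrix H minus the identity has rank one, so v can be read off, up to scale, as its dominant left singular vector. A small self-check of this idea on synthetic values (not data from the patent):

```python
import numpy as np

def projector_direction(H):
    """Recover v (up to scale) from a mapping matrix of the form
    H = I + mu * outer(v, a): H - I has rank one and v spans its
    column space, i.e. v is the dominant left singular vector."""
    U, S, Vt = np.linalg.svd(H - np.eye(3))
    return U[:, 0]

# Build a synthetic H with a known v and check that it is recovered.
v_true = np.array([1.0, 2.0, 1.0])
v_true /= np.linalg.norm(v_true)
a = np.array([0.3, -0.2, 1.0])
H = np.eye(3) + 0.5 * np.outer(v_true, a)
v_est = projector_direction(H)
```

Because v is only defined up to scale (and sign), the recovered vector is compared to the true one through the absolute value of their dot product.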
The coordinate axis directions of the camera reference frame are adjusted so that the X axis of the camera reference frame is aligned with v, and the imaging matrix of the camera is then updated.
In some embodiments of the present application, the imaging matrix of the camera before correction can be expressed as:

$$P = A[R \mid t],$$

where A and t are the intrinsic parameter matrix and the translation vector of the camera, respectively, and R is the rotation matrix of the camera before correction:

$$R = \begin{pmatrix} r_1^T \\ r_2^T \\ r_3^T \end{pmatrix}.$$

Let the rotation matrix of the camera after correction be:

$$\tilde{R} = \begin{pmatrix} \tilde{r}_1^T \\ \tilde{r}_2^T \\ \tilde{r}_3^T \end{pmatrix},$$

where $\tilde{r}_1$ denotes the direction vector of the X axis of the corrected camera reference frame, computed as:

$$\tilde{r}_1 = \frac{v}{\lVert v \rVert};$$

$\tilde{r}_2$ denotes the direction vector of the Y axis of the corrected camera reference frame, computed as:

$$\tilde{r}_2 = \frac{r_3 \times \tilde{r}_1}{\lVert r_3 \times \tilde{r}_1 \rVert};$$

and $\tilde{r}_3$ denotes the direction vector of the Z axis of the corrected camera reference frame, computed as:

$$\tilde{r}_3 = \tilde{r}_1 \times \tilde{r}_2.$$

The intrinsic parameter matrix A and the translation vector t are unchanged by the correction. The imaging matrix of the corrected camera is therefore:

$$\tilde{P} = A[\tilde{R} \mid t].$$
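The axis construction just described (new X axis along v, new Y axis orthogonal to the old Z axis and the new X axis, new Z axis completing a right-handed frame) can be sketched as follows; treating the rows of R as the axis direction vectors is an assumption consistent with the notation above, and the demo values are illustrative:

```python
import numpy as np

def rectified_rotation(R, v):
    """Build the corrected rotation matrix: X axis aligned with v,
    Y axis orthogonal to both the old Z axis and the new X axis,
    Z axis completing a right-handed orthonormal frame.  Rows of R
    are taken as the axis direction vectors of the reference frame."""
    r1 = v / np.linalg.norm(v)
    r3_old = R[2]                       # old Z axis
    r2 = np.cross(r3_old, r1)
    r2 /= np.linalg.norm(r2)
    r3 = np.cross(r1, r2)
    return np.vstack([r1, r2, r3])

# Demo: projector direction slightly off the X axis of an identity
# rotation.
R = np.eye(3)
v = np.array([1.0, 0.05, 0.02])
R_new = rectified_rotation(R, v)
```

The result is orthonormal with determinant one, and its first row is parallel to v, which is exactly the alignment condition the correction imposes.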
Using the corrected imaging matrix, the original scene image is mapped to a new image, which is the corrected image. The transformation matrix T is obtained from:

$$T = A \tilde{R} R^{-1} A^{-1}.$$

The matrix T maps pixel coordinates on the original scene image to the corrected image. The corrected image is generated as follows: for each pixel position $(\tilde{x}, \tilde{y})^T$ on the corrected image, the corresponding pixel position $(x, y)^T$ on the original image is computed through T; since this position is generally not integral, the gray level at position $(x, y)^T$ is computed by gray-level interpolation and then assigned to the pixel position $(\tilde{x}, \tilde{y})^T$ on the corrected image. Performing this operation for every pixel yields the entire corrected image. This image is equivalent to one captured with the camera and the speckle projector in the ideal positional relationship, i.e., with the line connecting the optical center of the camera and the center of the speckle projector parallel to the X axis of the camera reference frame.
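The per-pixel generation of the corrected image (map each corrected pixel position back to the original image, then sample the gray level by interpolation) can be sketched with bilinear interpolation as follows. The mapping matrix used in the demo is the identity, purely for illustration; in the method it would come from the corrected imaging geometry:

```python
import numpy as np

def generate_corrected(img, T_inv, out_shape):
    """For every pixel of the corrected image, compute the matching
    (generally non-integer) position in the original image via T_inv
    and sample the gray level there by bilinear interpolation.
    Out-of-range pixels are left at zero."""
    h, w = out_shape
    out = np.zeros((h, w))
    for yy in range(h):
        for xx in range(w):
            p = T_inv @ np.array([xx, yy, 1.0])
            x, y = p[0] / p[2], p[1] / p[2]      # dehomogenise
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            if 0 <= x0 < img.shape[1] - 1 and 0 <= y0 < img.shape[0] - 1:
                dx, dy = x - x0, y - y0          # bilinear weights
                out[yy, xx] = ((1 - dx) * (1 - dy) * img[y0, x0]
                               + dx * (1 - dy) * img[y0, x0 + 1]
                               + (1 - dx) * dy * img[y0 + 1, x0]
                               + dx * dy * img[y0 + 1, x0 + 1])
    return out

# Demo: with the identity mapping, interior pixels of a small ramp
# image are reproduced exactly.
img = np.arange(16, dtype=float).reshape(4, 4)
out = generate_corrected(img, np.eye(3), (4, 4))
```

A production implementation would vectorise the loop or use a library remapping routine; the explicit loop here only makes the per-pixel interpolation step visible.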
Referring to Fig. 4, the present application further provides an image correction apparatus for a camera. The apparatus comprises an image collector, a matcher, an acquisition device, a computing device, an analysis device, and a processor.
The image collector is configured to capture a speckle pattern located on two planes at different distances to obtain a first planar speckle image and a second planar speckle image, the speckle pattern being projected by a speckle projector, the camera and the speckle projector facing the same direction with a fixed relative position.
The matcher is configured to match the first planar speckle image and the second planar speckle image by an image matching algorithm to obtain sub-pixel matching points.
The acquisition device is configured to obtain, from first physical coordinates of the sub-pixel matching points on the first planar speckle image and second physical coordinates of the sub-pixel matching points on the second planar speckle image, a mapping matrix between the first physical coordinates and the second physical coordinates.
The computing device is configured to obtain, from the mapping matrix, a direction vector of the center of the speckle projector in the camera reference frame.
The analysis device is configured to adjust the coordinate axis directions of the camera reference frame so that the horizontal axis direction is aligned with the direction vector, and to update the imaging matrix of the camera.
The processor is configured to map a target scene image through the imaging matrix to obtain a corrected image.
In some embodiments of the present application, the acquisition device is configured to: compute, from the pixel coordinates of the sub-pixel matching points and via an intrinsic-parameter conversion, the first physical coordinates of the sub-pixel matching points on the first planar speckle image and the second physical coordinates of the sub-pixel matching points on the second planar speckle image.
In some embodiments of the present application, the acquisition device is configured to: compute, using projective geometry, the mapping matrix between the first physical coordinates on the first planar speckle image and the corresponding second physical coordinates on the second planar speckle image. For example, the mapping matrix may be computed from:

$$\begin{pmatrix} u'_i \\ v'_i \\ 1 \end{pmatrix} \cong H \begin{pmatrix} u_i \\ v_i \\ 1 \end{pmatrix}, \quad i = 1, 2, \ldots, n,$$

where $(u_i, v_i)^T$ denotes the first physical coordinates and $(u'_i, v'_i)^T$ denotes the second physical coordinates;
moreover, the mapping matrix may be expressed as:

$$H = I + \mu\, v a^T,$$

where H is the mapping matrix, μ is a scalar, v is the homogeneous coordinate of the projection position of the center of the speckle projector on the camera and likewise the direction vector of the center of the speckle projector in the camera reference frame, and a is the homogeneous representation of another two-dimensional vector.
In some embodiments of the present application, the processor is configured to: compute the gray level of each sub-pixel point on the target scene image by interpolation, and assign the gray level to the corresponding pixel of the corrected image.
The present application further provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the method according to the embodiments of the present application when executing the computer program.
The present invention further provides a computer-readable storage medium storing a computer program for executing the method according to the embodiments of the present application.
A beneficial technical effect of the present invention is that relatively high calibration accuracy can be achieved with a relatively simple structure at relatively low cost and power consumption, providing better data support for subsequent techniques such as image recognition.
As shown in Fig. 5, the electronic device 600 may further include: a communication module 110, an input unit 120, an audio processing unit 130, a display 160, and a power supply 170. It is worth noting that the electronic device 600 need not include all of the components shown in Fig. 5; in addition, the electronic device 600 may also include components not shown in Fig. 5, for which reference may be made to the prior art.
As shown in Fig. 5, the central processing unit 100, sometimes also called a controller or operation control, may include a microprocessor or other processor device and/or logic device; the central processing unit 100 receives input and controls the operation of each component of the electronic device 600.
The memory 140 may be, for example, one or more of a buffer, a flash memory, a hard drive, a removable medium, a volatile memory, a non-volatile memory, or another suitable device. It may store the above-mentioned failure-related information, and may additionally store programs for executing the related information. The central processing unit 100 may execute the programs stored in the memory 140 to realize information storage, processing, and so on.
The input unit 120 provides input to the central processing unit 100. The input unit 120 is, for example, a key or a touch input device. The power supply 170 supplies power to the electronic device 600. The display 160 displays objects such as images and text. The display may be, for example, an LCD display, but is not limited thereto.
The memory 140 may be a solid-state memory, for example a read-only memory (ROM), a random access memory (RAM), a SIM card, or the like. It may also be a memory that retains information even when powered off, that can be selectively erased and provided with more data, an example of which is sometimes called an EPROM or the like. The memory 140 may also be some other type of device. The memory 140 includes a buffer memory 141 (sometimes called a buffer). The memory 140 may include an application/function storage section 142 for storing application programs and function programs, or procedures for executing the operations of the electronic device 600 through the central processing unit 100.
The memory 140 may further include a data storage section 143 for storing data such as contacts, digital data, pictures, sounds, and/or any other data used by the electronic device. A driver storage section 144 of the memory 140 may include various drivers of the electronic device for the communication function and/or for executing other functions of the electronic device (such as a messaging application, an address book application, and the like).
The communication module 110 is a transmitter/receiver 110 that sends and receives signals via an antenna 111. The communication module (transmitter/receiver) 110 is coupled to the central processing unit 100 to provide input signals and receive output signals, which may be the same as in the case of a conventional mobile communication terminal.
Based on different communication technologies, multiple communication modules 110, such as a cellular network module, a Bluetooth module, and/or a wireless local area network module, may be provided in the same electronic device. The communication module (transmitter/receiver) 110 is also coupled, via an audio processor 130, to a speaker 131 and a microphone 132, so as to provide audio output via the speaker 131 and receive audio input from the microphone 132, thereby realizing ordinary telecommunication functions. The audio processor 130 may include any suitable buffer, decoder, amplifier, and the like. In addition, the audio processor 130 is also coupled to the central processing unit 100, so that recording on the local device is possible through the microphone 132, and sounds stored on the local device can be played through the speaker 131.
Those skilled in the art will understand that embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing equipment to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing equipment produce means for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
The specific embodiments described above further explain the objectives, technical solutions, and beneficial effects of the present invention in detail. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit its protection scope; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

  1. An image correction method for a camera, wherein the method comprises:
    capturing, by a camera, a speckle pattern located on two planes at different distances to obtain a first planar speckle image and a second planar speckle image, wherein the speckle pattern is projected by a speckle projector, and the camera and the speckle projector face the same direction with a fixed relative position;
    matching the first planar speckle image and the second planar speckle image by an image matching algorithm to obtain sub-pixel matching points;
    obtaining, from first physical coordinates of the sub-pixel matching points on the first planar speckle image and second physical coordinates of the sub-pixel matching points on the second planar speckle image, a mapping matrix between the first physical coordinates and the second physical coordinates;
    obtaining, from the mapping matrix, a direction vector of a center of the speckle projector in the camera reference frame;
    adjusting coordinate axis directions of the camera reference frame so that a horizontal axis direction is aligned with the direction vector, and updating an imaging matrix of the camera;
    mapping a target scene image through the imaging matrix to obtain a corrected image.
  2. The image correction method according to claim 1, further comprising: computing, from pixel coordinates of the sub-pixel matching points and via an intrinsic-parameter conversion, the first physical coordinates of the sub-pixel matching points on the first planar speckle image and the second physical coordinates of the sub-pixel matching points on the second planar speckle image.
  3. The image correction method according to claim 1, wherein obtaining the mapping matrix between the first physical coordinates and the second physical coordinates, from the first physical coordinates of the sub-pixel matching points on the first planar speckle image and the second physical coordinates of the sub-pixel matching points on the second planar speckle image, comprises:
    computing the mapping matrix according to:
    $$\begin{pmatrix} u'_i \\ v'_i \\ 1 \end{pmatrix} \cong H \begin{pmatrix} u_i \\ v_i \\ 1 \end{pmatrix}, \quad i = 1, 2, \ldots, n,$$
    wherein $(u_i, v_i)^T$ denotes the first physical coordinates and $(u'_i, v'_i)^T$ denotes the second physical coordinates;
    the mapping matrix being expressed in the form:
    $$H = I + \mu\, v a^T,$$
    wherein H is the mapping matrix, I is the 3×3 identity matrix, μ is a scalar, v is the homogeneous coordinate of the projection position of the center of the speckle projector on the camera and likewise the direction vector of the center of the speckle projector in the camera reference frame, and a is the homogeneous representation of another two-dimensional vector.
  4. The image correction method according to claim 1, wherein mapping the target scene image through the imaging matrix to obtain the corrected image comprises:
    computing a gray level of each sub-pixel point on the target scene image by interpolation, and assigning the gray level to the corresponding pixel of the corrected image.
  5. An image correction apparatus, comprising:
    an image collector, configured to capture a speckle pattern located on two planes at different distances to obtain a first planar speckle image and a second planar speckle image, wherein the speckle pattern is projected by a speckle projector, and the image collector and the speckle projector face the same direction with a fixed relative position;
    a matcher, configured to match the first planar speckle image and the second planar speckle image by an image matching algorithm to obtain sub-pixel matching points;
    an acquisition device, configured to obtain, from first physical coordinates of the sub-pixel matching points on the first planar speckle image and second physical coordinates of the sub-pixel matching points on the second planar speckle image, a mapping matrix between the first physical coordinates and the second physical coordinates;
    a computing device, configured to obtain, from the mapping matrix, a direction vector of a center of the speckle projector in the camera reference frame;
    an analysis device, configured to adjust coordinate axis directions of the camera reference frame so that a horizontal axis direction is aligned with the direction vector, and to update an imaging matrix of the camera;
    a processor, configured to map a target scene image through the imaging matrix to obtain a corrected image.
  6. The correction apparatus according to claim 5, wherein the acquisition device is configured to: compute, from pixel coordinates of the sub-pixel matching points and via an intrinsic-parameter conversion, the first physical coordinates of the sub-pixel matching points on the first planar speckle image and the second physical coordinates of the sub-pixel matching points on the second planar speckle image.
  7. The correction apparatus according to claim 5, wherein the acquisition device is configured to compute the mapping matrix according to:
    $$\begin{pmatrix} u'_i \\ v'_i \\ 1 \end{pmatrix} \cong H \begin{pmatrix} u_i \\ v_i \\ 1 \end{pmatrix}, \quad i = 1, 2, \ldots, n,$$
    wherein $(u_i, v_i)^T$ denotes the first physical coordinates and $(u'_i, v'_i)^T$ denotes the second physical coordinates;
    the mapping matrix being expressed in the form:
    $$H = I + \mu\, v a^T,$$
    wherein H is the mapping matrix, I is the 3×3 identity matrix, μ is a scalar, v is the homogeneous coordinate of the projection position of the center of the speckle projector on the camera and likewise the direction vector of the center of the speckle projector in the camera reference frame, and a is the homogeneous representation of another two-dimensional vector.
  8. The correction apparatus according to claim 5, wherein the processor is configured to:
    compute a gray level of each sub-pixel point on the target scene image by interpolation, and assign the gray level to the corresponding pixel of the corrected image.
  9. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method according to any one of claims 1 to 4 when executing the computer program.
  10. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for executing the method according to any one of claims 1 to 4.
PCT/CN2021/087040 2020-04-16 2021-04-13 用于相机的图像校正方法和装置 WO2021208933A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/488,502 US20220036521A1 (en) 2020-04-16 2021-09-29 Image correction method and apparatus for camera

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010298041.9 2020-04-16
CN202010298041.9A CN111540004B (zh) 2020-04-16 2020-04-16 单相机极线校正方法及装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/488,502 Continuation US20220036521A1 (en) 2020-04-16 2021-09-29 Image correction method and apparatus for camera

Publications (1)

Publication Number Publication Date
WO2021208933A1 true WO2021208933A1 (zh) 2021-10-21

Family

ID=71978575

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/087040 WO2021208933A1 (zh) 2020-04-16 2021-04-13 用于相机的图像校正方法和装置

Country Status (3)

Country Link
US (1) US20220036521A1 (zh)
CN (1) CN111540004B (zh)
WO (1) WO2021208933A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113902652A (zh) * 2021-12-10 2022-01-07 南昌虚拟现实研究院股份有限公司 散斑图像校正方法、深度计算方法、装置、介质及设备

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019022941A1 (en) 2017-07-28 2019-01-31 OPSYS Tech Ltd. VCSEL LIDAR TRANSMITTER WITH LOW ANGULAR DIVERGENCE
KR102364531B1 (ko) 2017-11-15 2022-02-23 옵시스 테크 엘티디 잡음 적응형 솔리드-스테이트 lidar 시스템
JP2022526998A (ja) 2019-04-09 2022-05-27 オプシス テック リミテッド レーザ制御を伴うソリッドステートlidar送光機
CN111540004B (zh) * 2020-04-16 2023-07-14 北京清微智能科技有限公司 单相机极线校正方法及装置
CN112184811B (zh) * 2020-09-22 2022-11-04 合肥的卢深视科技有限公司 单目空间结构光系统结构校准方法及装置
CN113034565B (zh) * 2021-03-25 2023-07-04 奥比中光科技集团股份有限公司 一种单目结构光的深度计算方法及系统
WO2023010565A1 (zh) * 2021-08-06 2023-02-09 中国科学院深圳先进技术研究院 单目散斑结构光系统的标定方法、装置及终端
CN113793387A (zh) * 2021-08-06 2021-12-14 中国科学院深圳先进技术研究院 单目散斑结构光系统的标定方法、装置及终端
CN114926371B (zh) * 2022-06-27 2023-04-07 北京五八信息技术有限公司 一种全景图的垂直校正、灭点检测方法、设备及存储介质
CN115546311B (zh) * 2022-09-28 2023-07-25 中国传媒大学 一种基于场景信息的投影仪标定方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103868524A (zh) * 2013-12-23 2014-06-18 西安新拓三维光测科技有限公司 一种基于散斑图案的单目测量系统标定方法及装置
CN106651794A (zh) * 2016-12-01 2017-05-10 北京航空航天大学 一种基于虚拟相机的投影散斑校正方法
CN109461181A (zh) * 2018-10-17 2019-03-12 北京华捷艾米科技有限公司 基于散斑结构光的深度图像获取方法及系统
US20190319036A1 (en) * 2005-10-11 2019-10-17 Apple Inc. Method and system for object reconstruction
CN111540004A (zh) * 2020-04-16 2020-08-14 北京清微智能科技有限公司 单相机极线校正方法及装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110049305B (zh) * 2017-12-18 2021-02-26 西安交通大学 一种智能手机的结构光深度相机自校正方法及装置
CN108629841A (zh) * 2018-05-08 2018-10-09 深圳大学 一种基于激光散斑多视点三维数据测量方法及系统
CN110580716A (zh) * 2018-06-07 2019-12-17 凌上科技(北京)有限公司 一种深度信息采集方法、装置以及介质
CN110853086A (zh) * 2019-10-21 2020-02-28 北京清微智能科技有限公司 基于散斑投影的深度图像生成方法及系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190319036A1 (en) * 2005-10-11 2019-10-17 Apple Inc. Method and system for object reconstruction
CN103868524A (zh) * 2013-12-23 2014-06-18 西安新拓三维光测科技有限公司 一种基于散斑图案的单目测量系统标定方法及装置
CN106651794A (zh) * 2016-12-01 2017-05-10 北京航空航天大学 一种基于虚拟相机的投影散斑校正方法
CN109461181A (zh) * 2018-10-17 2019-03-12 北京华捷艾米科技有限公司 基于散斑结构光的深度图像获取方法及系统
CN111540004A (zh) * 2020-04-16 2020-08-14 北京清微智能科技有限公司 单相机极线校正方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HU, HUAHU: "Research on 3D Visual Measurement System of Monocular Structure Light", CHINESE MASTER'S THESES FULL-TEXT DATABASE, 17 April 2017 (2017-04-17), XP055857985 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113902652A (zh) * 2021-12-10 2022-01-07 南昌虚拟现实研究院股份有限公司 散斑图像校正方法、深度计算方法、装置、介质及设备

Also Published As

Publication number Publication date
CN111540004B (zh) 2023-07-14
CN111540004A (zh) 2020-08-14
US20220036521A1 (en) 2022-02-03

Similar Documents

Publication Publication Date Title
WO2021208933A1 (zh) 用于相机的图像校正方法和装置
WO2021103347A1 (zh) 投影仪的梯形校正方法、装置、系统及可读存储介质
US10659768B2 (en) System and method for virtually-augmented visual simultaneous localization and mapping
TWI555378B (zh) 一種全景魚眼相機影像校正、合成與景深重建方法與其系統
WO2019179168A1 (zh) 投影畸变校正方法、装置、系统及存储介质
WO2019049331A1 (ja) キャリブレーション装置、キャリブレーションシステム、およびキャリブレーション方法
CN111750820A (zh) 影像定位方法及其系统
JPWO2018235163A1 (ja) キャリブレーション装置、キャリブレーション用チャート、チャートパターン生成装置、およびキャリブレーション方法
US10063792B1 (en) Formatting stitched panoramic frames for transmission
WO2020029373A1 (zh) 人眼空间位置的确定方法、装置、设备和存储介质
US10326894B1 (en) Self stabilizing projector
CN111028155A (zh) 一种基于多对双目相机的视差图像拼接方法
CN112541973B (zh) 虚实叠合方法与系统
CN112308925A (zh) 可穿戴设备的双目标定方法、设备及存储介质
US11989894B2 (en) Method for acquiring texture of 3D model and related apparatus
CN105513074B (zh) 一种羽毛球机器人相机以及车身到世界坐标系的标定方法
CN110853102B (zh) 一种新的机器人视觉标定及引导方法、装置及计算机设备
CN111696141B (zh) 一种三维全景扫描采集方法、设备及存储设备
JP2009301181A (ja) 画像処理装置、画像処理プログラム、画像処理方法、および電子機器
EP4071713A1 (en) Parameter calibration method and apapratus
CN115174878B (zh) 投影画面校正方法、装置和存储介质
JP2009302731A (ja) 画像処理装置、画像処理プログラム、画像処理方法、および電子機器
TWM594322U (zh) 全向立體視覺的相機配置系統
CN115965697A (zh) 基于沙姆定律的投影仪标定方法、标定系统及装置
CN115834860A (zh) 背景虚化方法、装置、设备、存储介质和程序产品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21789074

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21789074

Country of ref document: EP

Kind code of ref document: A1