WO2019104453A1 - Image processing method and apparatus - Google Patents

Image processing method and apparatus

Info

Publication number
WO2019104453A1
Authority
WO
WIPO (PCT)
Prior art keywords
processing
dimensional
processing result
image
rotation matrix
Prior art date
Application number
PCT/CN2017/113244
Other languages
English (en)
French (fr)
Inventor
卢庆博
李琛
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201780028205.2A (patent CN109155822B)
Priority to PCT/CN2017/113244 (WO2019104453A1)
Publication of WO2019104453A1
Priority to US16/865,786 (US20200267297A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/681 Motion detection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/682 Vibration or motion blur correction
    • H04N 23/684 Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the integration time
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/60 Rotation of whole images or parts thereof
    • G06T 3/604 Rotation of whole images or parts thereof using coordinate rotation digital computer [CORDIC] devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/682 Vibration or motion blur correction
    • H04N 23/683 Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory

Definitions

  • Embodiments of the present invention relate to image processing technologies, and in particular, to an image processing method and apparatus.
  • During imaging, the image sensor records the light incident on it; however, because the camera lens, the image sensor, and other components have certain distortion or alignment problems, the camera does not conform to the commonly used camera imaging model.
  • Generally, the larger the camera's field of view, the more severe the distortion.
  • A lens with a large field of view provides a wider viewing angle and is therefore often used for capturing virtual reality images. If such a lens is installed on sports equipment, a car, a drone, or similar, camera vibration causes the recorded footage to shake frequently, causing discomfort to the observer. In this case, at least two of electronic image stabilization, distortion correction, and virtual reality display need to be performed on the input image simultaneously.
  • However, each of these operations requires computing the geometric transformation between the input image and the output image, that is, the coordinate relationship between the output image and the input image; performed separately, the computational complexity is high and the calculation time is long.
  • Embodiments of the present invention provide an image processing method and apparatus, thereby implementing rapid processing of an input image to complete at least two processing operations of electronic image stabilization, distortion correction, and virtual reality.
  • an embodiment of the present invention provides an image processing method, including:
  • the second processing result is mapped to a two-dimensional image coordinate system.
  • Performing a two-dimensional-to-three-dimensional conversion operation on the two-dimensional coordinate points according to a camera imaging model or a distortion correction model, and acquiring a first processing result, includes:
  • performing the two-dimensional-to-three-dimensional conversion operation on the two-dimensional coordinate points according to a parameter of the camera and the distortion correction model, and acquiring the first processing result.
  • the first processing result is subjected to virtual reality processing according to the first rotation matrix.
  • the first processing result is subjected to electronic anti-shake processing according to the second rotation matrix.
  • The first rotation matrix is determined according to the observer's attitude angle parameter, and the first processing result is processed according to the first rotation matrix to obtain the second processing result.
  • the method further includes:
  • The second rotation matrix is determined according to a measurement parameter acquired by an inertial measurement unit connected to the camera, and the first processing result is processed according to the second rotation matrix to obtain the second processing result.
  • the method further includes:
  • the second rotation matrix is acquired from an inertial measurement unit connected to the camera, the second rotation matrix being determined by the inertial measurement unit according to the measurement parameter.
  • The camera imaging model includes any one of a pinhole imaging model, an equirectangular model, a stereoscopic imaging model, a fisheye lens model, and a wide-angle lens model.
  • an embodiment of the present invention provides an image processing apparatus, including: a lens, an image sensor, and a processor;
  • the image sensor acquires a two-dimensional image through a lens
  • the processor is configured to implement the image processing method according to any of the possible implementation manners of the first aspect.
  • An embodiment of the present invention provides a computer storage medium having stored thereon a computer program or instructions; when the computer program or instructions are executed by a processor or a computer, the image processing method according to any possible implementation manner of the first aspect is implemented.
  • The image processing method and apparatus obtain a first processing result by performing a two-dimensional-to-three-dimensional conversion operation on the two-dimensional coordinate points of the acquired input image, process the first processing result according to at least one of the first rotation matrix and the second rotation matrix to obtain a second processing result, and map the second processing result to a two-dimensional image coordinate system to obtain an output image. This enables fast processing of the input image to complete at least two of distortion correction, virtual reality, and electronic image stabilization, effectively reducing computational complexity, shortening calculation time, and improving image processing efficiency.
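The claimed pipeline (2D coordinates → incident rays → rotation → 2D coordinates) can be sketched as follows; the pinhole intrinsics matrix `K` and all numeric values are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of the pipeline: lift 2D pixel coordinates to 3D incident
# rays, rotate them by the virtual-reality (R_vr) and stabilization (R_is)
# matrices, then project back to 2D.
import numpy as np

def pipeline(points_2d, K, R_vr, R_is):
    """points_2d: (N, 2) array of pixel coordinates of the input image."""
    K_inv = np.linalg.inv(K)
    ones = np.ones((points_2d.shape[0], 1))
    homog = np.hstack([points_2d, ones])      # (N, 3) homogeneous pixels
    rays = (K_inv @ homog.T).T                # first processing result: incident rays
    rotated = (R_is @ R_vr @ rays.T).T        # second processing result
    proj = (K @ rotated.T).T                  # map back to the image plane
    return proj[:, :2] / proj[:, 2:3]         # output image coordinates

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
out = pipeline(np.array([[320.0, 240.0]]), K, np.eye(3), np.eye(3))
```

With both rotation matrices set to the identity, the output coordinates equal the input coordinates, a convenient sanity check.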
  • FIG. 1 is a schematic diagram of an application scenario of the present invention;
  • FIG. 2 is a flowchart of an image processing method according to the present invention;
  • FIG. 3 is a flowchart of another image processing method according to the present invention;
  • FIG. 4 is a schematic diagram of the flowchart shown in FIG. 3;
  • FIG. 5 is a flowchart of another image processing method according to the present invention;
  • FIG. 6 is a schematic diagram of the flowchart shown in FIG. 5;
  • FIG. 7 is a flowchart of another image processing method according to the present invention;
  • FIG. 8 is a schematic diagram of the flowchart shown in FIG. 7;
  • FIG. 9 is a flowchart of another image processing method according to the present invention;
  • FIG. 10 is a schematic diagram of the flowchart shown in FIG. 9;
  • FIG. 11 is a structural block diagram of an image processing apparatus of the present invention.
  • FIG. 1 is a schematic diagram of an application scenario of the present invention.
  • The application scenario includes an image processing device, which may be a camera, a video camera, an aerial photography device, a medical imaging device, etc., comprising a lens, an image sensor, and an image processor, where the lens is connected to the image sensor and the image sensor is connected to the image processor.
  • Light is incident on the image sensor through the lens, the image sensor performs imaging to obtain the input image, and the image processor performs at least two of distortion correction, electronic image stabilization, and virtual reality processing on the input image.
  • The image processing method of the present application can effectively reduce the computational complexity and shorten the calculation time of these operations, improving the image processing efficiency of the image processor.
  • the image processor of the present invention may be located on a different electronic device than the lens and the image sensor, or may be located on the same electronic device as the lens and the image sensor.
  • FIG. 2 is a flowchart of an image processing method according to the present invention. As shown in FIG. 2, the method in this embodiment may include:
  • Step 101 Acquire a two-dimensional coordinate point of the input image.
  • The input image is obtained when light incident through the lens strikes the image sensor and the image sensor performs imaging; the obtained image is a two-dimensional image, and the two-dimensional coordinate points of all pixels in the input image can be acquired.
  • Step 102: Perform a two-dimensional-to-three-dimensional conversion operation on the two-dimensional coordinate points according to a camera imaging model or a distortion correction model, and acquire a first processing result.
  • The two-dimensional-to-three-dimensional conversion operation specifically refers to establishing a one-to-one correspondence between the two-dimensional coordinate points and incident rays, that is, mapping the two-dimensional coordinate point of each pixel of the input image to an incident ray; the incident rays corresponding to the two-dimensional coordinate points of the pixels are the first processing result.
  • a specific implementation manner of step 102 may be: performing a two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate point according to a parameter of the camera and a camera imaging model, and acquiring a first processing result.
  • Another specific implementation manner of step 102 may be: performing a two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate point according to a parameter of the camera and a distortion correction model, and acquiring a first processing result.
  • the parameters of the camera may include the focal length and the position of the optical center of the camera, etc., which are not illustrated here.
  • The above camera imaging model may include any one of a pinhole imaging model, an equirectangular model, a stereoscopic imaging model, a fisheye lens model, and a wide-angle lens model, and can be flexibly set according to requirements.
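As an illustration of one of the listed models, the following is a minimal sketch of the 2D-to-3D conversion under an equidistant fisheye model; the intrinsics `f`, `cx`, `cy` are assumed example values, not parameters given in the patent.

```python
# Hedged sketch: map a pixel to its unit incident ray under an equidistant
# fisheye model, where the radial pixel distance satisfies r = f * theta.
import numpy as np

def fisheye_pixel_to_ray(u, v, f=300.0, cx=320.0, cy=240.0):
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)
    if r == 0.0:
        return np.array([0.0, 0.0, 1.0])      # principal ray
    theta = r / f                              # incidence angle from the axis
    s = np.sin(theta) / r
    return np.array([dx * s, dy * s, np.cos(theta)])  # unit incident ray

center_ray = fisheye_pixel_to_ray(320.0, 240.0)
edge_ray = fisheye_pixel_to_ray(620.0, 240.0)
```

Each returned ray has unit length, so it represents a direction of incident light rather than a point at a fixed depth.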
  • Step 103 Perform at least one processing of virtual reality and electronic image stabilization on the first processing result, and obtain a second processing result.
  • Specifically, virtual reality processing is performed on the first processing result according to the first rotation matrix, and electronic image stabilization processing is performed on the first processing result according to the second rotation matrix.
  • That is, the second processing result is obtained by processing the first processing result of step 102 according to at least one of the first rotation matrix and the second rotation matrix.
  • the first rotation matrix is determined according to an observer's attitude angle parameter
  • the second rotation matrix is determined according to a measurement parameter acquired by an inertial measurement unit connected to the camera.
  • The camera here may specifically refer to the lens and the image sensor shown in FIG. 1.
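Both rotation matrices are determined from angle parameters (the observer's attitude angles for the first, inertial measurement unit readings for the second). A common way to build such a matrix is sketched below under an assumed Z-Y-X (yaw-pitch-roll) convention; the patent does not specify a convention.

```python
# Sketch of building a rotation matrix from yaw/pitch/roll angles, as would
# be done for the observer's attitude (first matrix) or from IMU
# measurements (second matrix). The Z-Y-X composition order is an assumption.
import numpy as np

def rotation_from_angles(yaw, pitch, roll):
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

R = rotation_from_angles(0.3, 0.2, 0.1)
```

Any matrix produced this way is orthonormal, so applying it to an incident ray rotates the ray without changing its length.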
  • Step 104 Map the second processing result to a two-dimensional image coordinate system.
  • an output image is obtained, and the output image is an image after at least two processing operations of distortion correction, electronic image stabilization, and virtual reality.
  • In this embodiment, the first processing result is obtained by performing a two-dimensional-to-three-dimensional conversion operation on the two-dimensional coordinate points of the acquired input image, the first processing result is processed according to at least one of the first rotation matrix and the second rotation matrix to obtain the second processing result, and the second processing result is mapped to the two-dimensional image coordinate system to obtain the output image. Completing at least two of distortion correction, electronic image stabilization, and virtual reality in this way can effectively reduce computational complexity, shorten calculation time, and improve image processing efficiency.
  • For the camera imaging model, the distortion correction model, the first rotation matrix, the second rotation matrix, and the like referred to above, reference may be made to the prior art.
  • FIG. 3 is a flowchart of another image processing method according to the present invention
  • FIG. 4 is a schematic diagram of the flowchart shown in FIG. 3.
  • This embodiment is a specific implementation manner of performing distortion correction and virtual reality processing on an input image. As shown in FIG. 3, the method of this embodiment may include:
  • Step 201 Obtain a two-dimensional coordinate point of the input image.
  • For a specific explanation of step 201, reference may be made to step 101 of the embodiment shown in FIG. 2; details are not described herein again.
  • Step 202 Perform a two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate point according to a parameter of the camera and a distortion correction model, and obtain a first processing result.
  • This step 202 implements the 2D-to-3D conversion shown in FIG. 4.
  • The first processing result is denoted P_3D and the two-dimensional coordinate point is denoted P_2D.
  • Step 203 Perform virtual reality processing on the first processing result to obtain a second processing result.
  • the first rotation matrix is a rotation matrix used in the virtual reality processing process, and is determined according to an observer's attitude angle parameter.
  • This step 203 implements a 3D to 3D rotation process as shown in FIG. 4, and acquires a second processing result.
  • Step 204 Map the second processing result to a two-dimensional image coordinate system.
  • The incident rays rotated in step 203 are mapped to the two-dimensional image coordinate system to obtain an output image, which is the image after the distortion correction and virtual reality processing operations.
  • This step 204 implements the 3D-to-2D mapping shown in FIG. 4.
  • Specifically, step 204 may map the second processing result to the two-dimensional image coordinate system according to a mapping formula, where the mapping function used in the formula can be flexibly set according to requirements.
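As a hypothetical example of a mapping function that "can be flexibly set," the sketch below projects an incident ray to equirectangular (longitude/latitude) image coordinates, a mapping commonly used for virtual-reality display; the output resolution is an assumed value.

```python
# Map a unit incident ray to equirectangular image coordinates: one possible
# choice of the flexible 3D-to-2D mapping function.
import numpy as np

def ray_to_equirect(ray, width=1920, height=960):
    x, y, z = ray / np.linalg.norm(ray)
    lon = np.arctan2(x, z)            # longitude in [-pi, pi]
    lat = np.arcsin(y)                # latitude in [-pi/2, pi/2]
    u = (lon / np.pi + 1.0) * 0.5 * width
    v = (lat / (np.pi / 2.0) + 1.0) * 0.5 * height
    return u, v

u, v = ray_to_equirect(np.array([0.0, 0.0, 1.0]))
```

The forward-looking ray lands at the center of the output image; other mapping choices (e.g., a pinhole projection) would simply replace this function.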
  • In this embodiment, the first processing result is obtained and subjected to virtual reality processing to obtain the second processing result, which is mapped to the two-dimensional image coordinate system to obtain the output image, thereby enabling fast processing of the input image to complete the distortion correction and virtual reality processing operations, effectively reducing computational complexity, shortening calculation time, and improving image processing efficiency.
  • FIG. 5 is a flowchart of another image processing method according to the present invention
  • FIG. 6 is a schematic diagram of the flowchart shown in FIG. 5.
  • This embodiment is a specific implementation manner of performing distortion correction and electronic image stabilization processing on an input image. As shown in FIG. 5, the method in this embodiment may include:
  • Step 301 Acquire a two-dimensional coordinate point of the input image.
  • For a detailed explanation of step 301, reference may be made to step 101 of the embodiment shown in FIG. 2; details are not described herein again.
  • Step 302 Perform a two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate point according to a parameter of the camera and a distortion correction model, and obtain a first processing result.
  • the step 302 implements the 2D to 3D conversion as shown in FIG. 6. Specifically, the two-dimensional coordinate point is subjected to a two-dimensional to three-dimensional conversion operation according to the camera parameter and the distortion correction model, that is, the two-dimensional coordinate point is mapped to the incident ray.
  • Step 303: Perform electronic image stabilization processing on the first processing result to obtain a second processing result.
  • the second rotation matrix is a rotation matrix used in the electronic anti-shake processing process, and is determined according to measurement parameters acquired by an inertial measurement unit connected to the camera.
  • This step 303 implements a 3D to 3D rotation process as shown in FIG. 6, that is, the incident ray obtained in step 302 is rotated according to the second rotation matrix, and the second processing result is obtained.
  • Step 304 Map the second processing result to a two-dimensional image coordinate system.
  • the incident ray rotated by the step 303 is mapped to the two-dimensional image coordinate system, and an output image is obtained, and the output image is an image subjected to the distortion correction and the electronic anti-shake processing operation.
  • This step 304 implements a 3D to 2D mapping as shown in FIG. 6.
  • Specifically, step 304 may map the second processing result to the two-dimensional image coordinate system according to a mapping formula, where the mapping function used in the formula can be flexibly set according to requirements.
  • In this embodiment, the first processing result is obtained and subjected to electronic image stabilization processing to obtain the second processing result, which is mapped to the two-dimensional image coordinate system to obtain the output image, thereby enabling fast processing of the input image to complete the distortion correction and electronic image stabilization operations.
  • FIG. 7 is a flowchart of another image processing method according to the present invention
  • FIG. 8 is a schematic diagram of the flowchart shown in FIG. 7.
  • This embodiment is a specific implementation of performing virtual reality and electronic image stabilization processing on the input image. As shown in FIG. 7, the method in this embodiment may include:
  • Step 401 Acquire a two-dimensional coordinate point of the input image.
  • For a detailed explanation of step 401, reference may be made to step 101 of the embodiment shown in FIG. 2; details are not described herein again.
  • Step 402 Perform a two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to parameters of the camera and a camera imaging model, and obtain a first processing result.
  • This step 402 implements the 2D-to-3D conversion shown in FIG. 8. Specifically, the two-dimensional-to-three-dimensional conversion operation is performed on the two-dimensional coordinate points according to the parameters of the camera and the camera imaging model, that is, the two-dimensional coordinate points are mapped to incident rays.
  • Step 403 Perform virtual reality and electronic image stabilization processing on the first processing result to obtain a second processing result.
  • the first rotation matrix is a rotation matrix used in the virtual reality processing process, and is determined according to an observer's attitude angle parameter.
  • the second rotation matrix is a rotation matrix used in the electronic anti-shake processing process, and is determined according to measurement parameters acquired by an inertial measurement unit connected to the camera.
  • This step 403 implements the 3D-to-3D rotation processing shown in FIG. 8, that is, the incident rays obtained in step 402 are rotated according to the first rotation matrix and the second rotation matrix to obtain the second processing result.
  • If the second processing result is denoted P'_3D, R_VR denotes the first rotation matrix, and R_IS denotes the second rotation matrix, then P'_3D = R_IS · R_VR · P_3D; that is, the second processing result P'_3D is obtained by performing the virtual reality processing first and then the electronic image stabilization processing.
  • Combined with step 402, this gives P'_3D = R_IS · R_VR · f_cam(P_2D).
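The composed formula P'_3D = R_IS · R_VR · f_cam(P_2D) can be sketched numerically as follows, taking f_cam as a pinhole back-projection with assumed intrinsics, R_VR as the identity (no observer rotation), and R_IS as a small example shake rotation; none of these values come from the patent.

```python
# Numeric sketch of P'_3D = R_IS * R_VR * f_cam(P_2D).
import numpy as np

f, cx, cy = 800.0, 320.0, 240.0   # assumed pinhole intrinsics

def f_cam(p2d):
    """Back-project a pixel to a unit incident ray (assumed pinhole model)."""
    u, v = p2d
    ray = np.array([(u - cx) / f, (v - cy) / f, 1.0])
    return ray / np.linalg.norm(ray)

R_vr = np.eye(3)                  # no observer rotation in this demo
th = np.deg2rad(5.0)              # small example shake about the y axis
R_is = np.array([[np.cos(th), 0.0, np.sin(th)],
                 [0.0, 1.0, 0.0],
                 [-np.sin(th), 0.0, np.cos(th)]])
p3d = R_is @ R_vr @ f_cam((320.0, 240.0))
```

The principal-point pixel back-projects to the optical axis, and the stabilization matrix tilts that ray by the 5-degree shake before the 3D-to-2D mapping of step 404.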
  • Step 404 Map the second processing result to a two-dimensional image coordinate system.
  • The incident rays after the rotation processing in step 403 are mapped to the two-dimensional image coordinate system to obtain the output image, which is the image after the virtual reality and electronic image stabilization processing.
  • This step 404 implements the 3D-to-2D mapping shown in FIG. 8.
  • Specifically, step 404 may map the second processing result to the two-dimensional image coordinate system according to a mapping formula, where the mapping function used in the formula can be flexibly set according to requirements.
  • In this embodiment, the first processing result is obtained and subjected to virtual reality and electronic image stabilization processing to obtain the second processing result, which is mapped to the two-dimensional image coordinate system to obtain the output image, thereby enabling fast processing of the input image to complete the virtual reality and electronic image stabilization operations, effectively reducing computational complexity, shortening calculation time, and improving image processing efficiency.
  • FIG. 9 is a flowchart of another image processing method according to the present invention
  • FIG. 10 is a schematic diagram of the flowchart shown in FIG. 9.
  • This embodiment is a specific implementation of performing distortion correction, virtual reality, and electronic image stabilization processing on an input image. As shown in FIG. 9, the method in this embodiment may include:
  • Step 501 Acquire a two-dimensional coordinate point of the input image.
  • For a specific explanation of step 501, reference may be made to step 101 of the embodiment shown in FIG. 2; details are not described herein again.
  • Step 502 Perform a two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate point according to a parameter of the camera and a distortion correction model, and obtain a first processing result.
  • This step 502 implements the 2D-to-3D conversion shown in FIG. 10. Specifically, the two-dimensional-to-three-dimensional conversion operation is performed on the two-dimensional coordinate points according to the camera parameters and the distortion correction model, that is, the two-dimensional coordinate points are mapped to incident rays.
  • It should be noted that step 502 needs to be performed first, so that distortion correction is applied before the rotation processing.
  • Step 503 Perform virtual reality and electronic anti-shake processing on the first processing result to obtain a second processing result.
  • the first rotation matrix is a rotation matrix used in the virtual reality processing process, and is determined according to an observer's attitude angle parameter.
  • the second rotation matrix is a rotation matrix used in the electronic anti-shake processing process, and is determined according to measurement parameters acquired by an inertial measurement unit connected to the camera.
  • This step 503 implements the 3D-to-3D rotation processing shown in FIG. 10, that is, the incident rays obtained in step 502 are rotated according to the first rotation matrix and the second rotation matrix to obtain the second processing result; as shown in FIG. 10, the virtual reality processing is performed first and then the electronic image stabilization processing.
  • Optionally, step 503 may also perform the electronic image stabilization processing first and then the virtual reality processing.
  • Step 504 Map the second processing result to a two-dimensional image coordinate system.
  • the incident ray rotated by the step 503 is mapped to the two-dimensional image coordinate system, and an output image is obtained, which is an image after the distortion correction, the electronic image stabilization, and the virtual reality processing operation.
  • This step 504 implements the 3D-to-2D mapping shown in FIG. 10.
  • Specifically, step 504 may map the second processing result to the two-dimensional image coordinate system according to a mapping formula, where the mapping function used in the formula can be flexibly set according to requirements.
  • In this embodiment, the first processing result is obtained and subjected to virtual reality and electronic image stabilization processing to obtain the second processing result, which is mapped to the two-dimensional image coordinate system to obtain the output image, thereby enabling fast processing of the input image to complete the distortion correction, electronic image stabilization, and virtual reality processing operations, effectively reducing computational complexity, shortening calculation time, and improving image processing efficiency.
  • The apparatus of this embodiment may include a lens (not shown), an image sensor 11, and a processor 12, where the image sensor 11 is configured to acquire a two-dimensional image through the lens as the input image.
  • The processor 12 is configured to acquire the two-dimensional coordinate points of the input image, perform a two-dimensional-to-three-dimensional conversion operation on the two-dimensional coordinate points according to the camera imaging model or the distortion correction model to obtain a first processing result, perform at least one of virtual reality and electronic image stabilization processing on the first processing result to obtain a second processing result, and map the second processing result to a two-dimensional image coordinate system.
  • the processor 12 is configured to: perform a two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate point according to a parameter of the camera and a camera imaging model, to obtain a first processing result; or, according to a parameter of the camera and a distortion correction model, The two-dimensional coordinate point performs a two-dimensional to three-dimensional conversion operation to obtain a first processing result.
  • the processor 12 is configured to perform virtual reality processing on the first processing result according to the first rotation matrix.
  • the processor 12 is configured to perform electronic anti-shake processing on the first processing result according to the second rotation matrix.
  • Specifically, the first rotation matrix is determined according to the attitude angle parameter of the observer, and the processor 12 processes the first processing result according to the first rotation matrix to obtain the second processing result.
  • the processor 12 is further configured to: acquire an attitude angle parameter of the observer.
  • the second rotation matrix is determined according to measurement parameters acquired by an inertial measurement unit connected to the camera, and the processor 12 is configured to process the first processing result according to the second rotation matrix to obtain the second processing result.
  • the processor 12 is further configured to: acquire the measurement parameter from an inertial measurement unit connected to the camera, the processor 12 is further configured to determine the second rotation matrix according to the measurement parameter; or, the processing The device 12 is further configured to acquire the second rotation matrix from an inertial measurement unit connected to the camera, the second rotation matrix being determined by the inertial measurement unit according to the measurement parameter.
  • The camera imaging model includes any one of a pinhole imaging model, an equirectangular model, a stereoscopic imaging model, a fisheye lens model, and a wide-angle lens model.
  • the device in this embodiment may be used to implement the technical solution of the foregoing method embodiment, and the implementation principle and the technical effect are similar, and details are not described herein again.
  • the division of the module in the embodiment of the present invention is schematic, and is only a logical function division, and the actual implementation may have another division manner.
  • the functional modules in the embodiments of the present invention may be integrated into one processing module, or each module may exist physically separately, or two or more modules may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules if implemented in the form of software functional modules and sold or used as separate products, may be stored in a computer readable storage medium.
  • The technical solution of the present invention, or the part thereof that is essential or contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application.
  • the foregoing storage medium includes various media that can store program codes, such as a USB flash drive, a mobile hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
  • In the above embodiments, the implementation may be in whole or in part by software, hardware, firmware, or any combination thereof.
  • When implemented in software, it may be implemented in whole or in part in the form of a computer program product.
  • The computer program product includes one or more computer instructions.
  • When the computer program instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present invention are generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions can be stored in a computer readable storage medium or transferred from one computer readable storage medium to another computer readable storage medium, for example, the computer instructions can be from a website site, computer, server or data center Transfer to another website site, computer, server, or data center by wire (eg, coaxial cable, fiber optic, digital subscriber line (DSL), or wireless (eg, infrared, wireless, microwave, etc.).
  • the computer readable storage medium can be any available media that can be accessed by a computer or a data storage device such as a server, data center, or the like that includes one or more available media.
  • the usable medium may be a magnetic medium (eg, a floppy disk, a hard disk, a magnetic tape), an optical medium (eg, a DVD), or a semiconductor medium (such as a solid state disk (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present invention provide an image processing method and apparatus. The image processing method of the present invention includes: obtaining two-dimensional coordinate points of an input image; performing a two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to a camera imaging model or a distortion correction model to obtain a first processing result; performing at least one of virtual reality processing and electronic image stabilization processing on the first processing result to obtain a second processing result; and mapping the second processing result to a two-dimensional image coordinate system. Embodiments of the present invention enable fast processing of an input image to complete at least two of distortion correction, virtual reality, and electronic image stabilization, which can effectively reduce computational complexity, shorten computation time, and improve image processing efficiency.

Description

Image processing method and apparatus
Technical Field
Embodiments of the present invention relate to image processing technology, and in particular, to an image processing method and apparatus.
Background
During imaging, an image sensor records the light rays entering it. However, because components such as the camera lens and the image sensor have certain distortion or alignment problems, the camera does not conform to the commonly used camera imaging model. Generally, the larger the field of view of the camera, the more severe the distortion. A lens with a large field of view provides a wider viewing angle and is therefore often used to capture virtual reality images. If such a lens is mounted on sports equipment, a car, an unmanned aerial vehicle, or in a similar environment, camera vibration causes the recorded picture to shake frequently, which makes the viewer uncomfortable. In this case, at least two of electronic image stabilization, distortion correction, and virtual reality display need to be performed on the input image at the same time.
However, when at least two of electronic image stabilization, distortion correction, and virtual reality display are performed at the same time, each operation needs to compute the geometric transformation between the input image and the output image, that is, the coordinate relationship between the output image and the input image; the computational complexity is high and the computation takes a long time.
Summary
Embodiments of the present invention provide an image processing method and apparatus, so as to process an input image quickly and complete at least two of electronic image stabilization, distortion correction, and virtual reality processing.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
obtaining two-dimensional coordinate points of an input image;
performing a two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to a camera imaging model or a distortion correction model to obtain a first processing result;
performing at least one of virtual reality processing and electronic image stabilization processing on the first processing result to obtain a second processing result; and
mapping the second processing result to a two-dimensional image coordinate system.
With reference to the first aspect, in a possible implementation of the first aspect, the performing a two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to a camera imaging model or a distortion correction model to obtain a first processing result includes:
performing the two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to parameters of the camera and the camera imaging model to obtain the first processing result; or
performing the two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to parameters of the camera and the distortion correction model to obtain the first processing result.
With reference to the first aspect or the foregoing possible implementation of the first aspect, in another possible implementation of the first aspect, virtual reality processing is performed on the first processing result according to a first rotation matrix.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in another possible implementation of the first aspect, electronic image stabilization processing is performed on the first processing result according to a second rotation matrix.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in another possible implementation of the first aspect, the first rotation matrix is determined according to an attitude angle parameter of an observer, and the first processing result is processed according to the first rotation matrix to obtain the second processing result.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in another possible implementation of the first aspect, the method further includes:
obtaining the attitude angle parameter of the observer.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in another possible implementation of the first aspect, the second rotation matrix is determined according to measurement parameters obtained by an inertial measurement unit connected to the camera, and the first processing result is processed according to the second rotation matrix to obtain the second processing result.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in another possible implementation of the first aspect, the method further includes:
obtaining the measurement parameters from the inertial measurement unit connected to the camera, and determining the second rotation matrix according to the measurement parameters; or
obtaining the second rotation matrix from the inertial measurement unit connected to the camera, the second rotation matrix being determined by the inertial measurement unit according to the measurement parameters.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in another possible implementation of the first aspect, the camera imaging model includes any one of a pinhole imaging model, an equirectangular model, a stereographic imaging model, a fisheye lens model, and a wide-angle lens model.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including a lens, an image sensor, and a processor;
the image sensor acquires a two-dimensional image through the lens; and
the processor is configured to implement the image processing method according to any possible implementation of the first aspect.
In a third aspect, an embodiment of the present invention provides a computer storage medium storing a computer program or instructions which, when executed by a processor or a computer, implement the image processing method according to any possible implementation of the first aspect.
According to the image processing method and apparatus of the embodiments of the present invention, a two-dimensional to three-dimensional conversion operation is performed on the obtained two-dimensional coordinate points of an input image to obtain a first processing result; the first processing result is processed according to at least one of a first rotation matrix and a second rotation matrix to obtain a second processing result; and the second processing result is mapped to a two-dimensional image coordinate system to obtain an output image. The input image is thus processed quickly to complete at least two of distortion correction, virtual reality, and electronic image stabilization, which can effectively reduce computational complexity, shorten computation time, and improve image processing efficiency.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Apparently, the drawings described below show some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application scenario of the present invention;
FIG. 2 is a flowchart of an image processing method according to the present invention;
FIG. 3 is a flowchart of another image processing method according to the present invention;
FIG. 4 is a schematic diagram of the flowchart shown in FIG. 3;
FIG. 5 is a flowchart of another image processing method according to the present invention;
FIG. 6 is a schematic diagram of the flowchart shown in FIG. 5;
FIG. 7 is a flowchart of another image processing method according to the present invention;
FIG. 8 is a schematic diagram of the flowchart shown in FIG. 7;
FIG. 9 is a flowchart of another image processing method according to the present invention;
FIG. 10 is a schematic diagram of the flowchart shown in FIG. 9;
FIG. 11 is a schematic structural diagram of an image processing apparatus according to the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
FIG. 1 is a schematic diagram of an application scenario of the present invention. As shown in FIG. 1, the scenario includes an image processing apparatus, which may be a camera, a video capture device, an aerial photography device, a medical imaging device, or the like, and which includes a lens, an image sensor, and an image processor. The lens is connected to the image sensor, and the image sensor is connected to the image processor. Light enters the image sensor through the lens, and the image sensor forms an input image. The image processor performs at least two of distortion correction, electronic image stabilization, and virtual reality processing on the input image to obtain an output image. In completing at least two of these operations, the image processing method of the present application can effectively reduce computational complexity, shorten computation time, and improve the image processing efficiency of the image processor. For specific implementations, refer to the detailed descriptions of the following embodiments.
It should be noted that the image processor of the present invention may be located on a different electronic device from the lens and the image sensor, or on the same electronic device as the lens and the image sensor.
FIG. 2 is a flowchart of an image processing method according to the present invention. As shown in FIG. 2, the method of this embodiment may include the following steps.
Step 101: obtain two-dimensional coordinate points of an input image.
The input image is the image formed by the image sensor from the light entering it through the lens. Since the input image is a two-dimensional image, the two-dimensional coordinate points of all its pixels can be obtained.
Step 102: perform a two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to a camera imaging model or a distortion correction model to obtain a first processing result.
The two-dimensional to three-dimensional conversion operation specifically establishes a one-to-one correspondence between two-dimensional coordinate points and incident rays, that is, it maps the two-dimensional coordinate point of each pixel of the input image to an incident ray; the incident rays corresponding to the two-dimensional coordinate points of the pixels constitute the first processing result. Optionally, one specific implementation of step 102 is: performing the two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to parameters of the camera and the camera imaging model to obtain the first processing result. Another specific implementation of step 102 is: performing the two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to parameters of the camera and the distortion correction model to obtain the first processing result.
The parameters of the camera may include the focal length and the optical center position of the camera, among others, which are not enumerated here one by one.
It should be noted that the camera imaging model may include any one of a pinhole imaging model, an equirectangular model, a stereographic imaging model, a fisheye lens model, and a wide-angle lens model, and may be flexibly set as required.
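As an illustrative sketch only (the focal length and angles below are assumed values, not taken from the disclosure), the difference between two of the listed imaging models can be computed directly: a pinhole model maps an incident-ray angle θ to an image radius r = f·tan θ, while an equidistant fisheye model maps it to r = f·θ.

```python
import math

def project_pinhole(theta, f):
    # Pinhole imaging model: image radius r = f * tan(theta)
    return f * math.tan(theta)

def project_equidistant(theta, f):
    # Equidistant fisheye model: image radius r = f * theta
    return f * theta

f = 100.0                        # assumed focal length in pixels
near_axis = math.radians(5.0)    # ray close to the optical axis
wide_angle = math.radians(80.0)  # ray near the edge of a wide field of view

# Near the axis the two models almost agree ...
print(project_pinhole(near_axis, f) - project_equidistant(near_axis, f))
# ... but at wide angles they diverge strongly, which is why a large
# field of view does not fit the commonly used pinhole model.
print(project_pinhole(wide_angle, f) - project_equidistant(wide_angle, f))
```

This is why severe distortion appears when a wide-angle or fisheye image is interpreted under the pinhole model, and why the model used for the 2D-3D conversion can be chosen per lens.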
Step 103: perform at least one of virtual reality processing and electronic image stabilization processing on the first processing result to obtain a second processing result.
Specifically, virtual reality processing is performed on the first processing result according to a first rotation matrix, and electronic image stabilization processing is performed on the first processing result according to a second rotation matrix. The first processing result of step 102 is processed according to at least one of the first rotation matrix and the second rotation matrix to obtain the second processing result.
Specifically, the first rotation matrix is determined according to an attitude angle parameter of an observer, and the second rotation matrix is determined according to measurement parameters obtained by an inertial measurement unit connected to the camera. Here, the camera may specifically refer to the lens and the image sensor shown in FIG. 1.
Step 104: map the second processing result to a two-dimensional image coordinate system.
Specifically, the adjusted incident rays are mapped to the two-dimensional image coordinate system to obtain an output image, which is the image after at least two of the distortion correction, electronic image stabilization, and virtual reality operations.
In this embodiment, a two-dimensional to three-dimensional conversion operation is performed on the obtained two-dimensional coordinate points of the input image to obtain a first processing result; the first processing result is processed according to at least one of the first rotation matrix and the second rotation matrix to obtain a second processing result; and the second processing result is mapped to the two-dimensional image coordinate system to obtain an output image. The input image is thus processed quickly to complete at least two of distortion correction, electronic image stabilization, and virtual reality processing, which can effectively reduce computational complexity, shorten computation time, and improve image processing efficiency. For the camera imaging model, the distortion correction model, the first rotation matrix, and the second rotation matrix mentioned above, reference may be made to the prior art.
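The three stages of the method above (2D-to-3D lifting, rotation, 3D-to-2D mapping) can be sketched numerically. This is only a minimal illustration under an assumed pinhole model with made-up intrinsic parameters and rotation angles; the actual models and matrices are those described in the embodiments.

```python
import numpy as np

FX = FY = 500.0  # assumed focal lengths (pixels)
CX = CY = 320.0  # assumed optical center (pixels)

def lift_to_ray(p2d):
    """Step 102: map a pixel to a unit incident ray (assumed pinhole model)."""
    x = (p2d[0] - CX) / FX
    y = (p2d[1] - CY) / FY
    ray = np.array([x, y, 1.0])
    return ray / np.linalg.norm(ray)

def map_to_pixel(ray):
    """Step 104: map a rotated incident ray back to the 2D image plane."""
    return np.array([FX * ray[0] / ray[2] + CX, FY * ray[1] / ray[2] + CY])

def rot_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R_vr = rot_z(np.deg2rad(3.0))   # stand-in for the first rotation matrix
R_is = rot_z(np.deg2rad(-3.0))  # stand-in for the second rotation matrix

p_in = np.array([400.0, 260.0])
ray = lift_to_ray(p_in)        # step 102: 2D -> 3D
rotated = R_is @ (R_vr @ ray)  # step 103: apply both rotations to the ray
p_out = map_to_pixel(rotated)  # step 104: 3D -> 2D

# Here the two z-rotations were chosen to cancel, so the pixel round-trips.
print(p_out)
```

Note that only one lift and one projection are performed no matter how many rotations are applied, which is the source of the complexity reduction described above.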
The technical solution of the method embodiment shown in FIG. 2 is described in detail below using several specific embodiments.
FIG. 3 is a flowchart of another image processing method according to the present invention, and FIG. 4 is a schematic diagram of the flowchart shown in FIG. 3. This embodiment is a specific implementation of performing distortion correction and virtual reality processing on an input image. As shown in FIG. 3, the method of this embodiment may include the following steps.
Step 201: obtain two-dimensional coordinate points of an input image.
For a detailed explanation of step 201, refer to step 101 of the embodiment shown in FIG. 2; details are not repeated here.
Step 202: perform a two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to parameters of the camera and a distortion correction model to obtain a first processing result.
Step 202 implements the 2D-to-3D conversion shown in FIG. 4. Let P3D denote the first processing result and P2D a two-dimensional coordinate point. Accordingly, step 202 may obtain the first processing result P3D according to the formula P3D = fpin(P2D), where the function fpin() may be a polynomial.
Step 203: perform virtual reality processing on the first processing result to obtain a second processing result.
The first rotation matrix is the rotation matrix used in the virtual reality processing and is determined according to the attitude angle parameter of the observer. Step 203 implements the 3D-to-3D rotation shown in FIG. 4 to obtain the second processing result.
Let P′3D denote the second processing result and RVR the first rotation matrix. Accordingly, step 203 may obtain the second processing result P′3D according to the formula P′3D = RVR·P3D.
Substituting the formula P3D = fpin(P2D) of step 202 into P′3D = RVR·P3D yields P′3D = RVR·fpin(P2D).
Step 204: map the second processing result to a two-dimensional image coordinate system.
Specifically, the incident rays rotated in step 203 are mapped to the two-dimensional image coordinate system to obtain an output image, which is the image after the distortion correction and virtual reality operations. Step 204 implements the 3D-to-2D mapping shown in FIG. 4.
Let P′2D denote the coordinate point mapped to the two-dimensional image coordinate system. Accordingly, step 204 may map the second processing result to the two-dimensional image coordinate system according to the formula P′2D = fcam^-1(P′3D), where the mapping function fcam^-1() can be flexibly set as required.
Substituting the formula P′3D = RVR·fpin(P2D) of step 203 into P′2D = fcam^-1(P′3D) yields P′2D = fcam^-1(RVR·fpin(P2D)).
In this embodiment, a two-dimensional to three-dimensional conversion operation is performed on the obtained two-dimensional coordinate points of the input image according to the parameters of the camera and the distortion correction model to obtain a first processing result; virtual reality processing is performed on the first processing result to obtain a second processing result; and the second processing result is mapped to the two-dimensional image coordinate system to obtain an output image. The input image is thus processed quickly to complete the distortion correction and virtual reality operations, which can effectively reduce computational complexity, shorten computation time, and improve image processing efficiency.
Moreover, by completing the distortion correction and virtual reality operations in the above manner, the present application does not need to additionally perform P2D = fcam^-1(P3D) and P3D = fcam(P2D) after P3D = fpin(P2D) and before P′3D = RVR·P3D, which simplifies the computation. Furthermore, the computations of fcam^-1() and fcam() are usually implemented through fixed-point arithmetic or lookup tables, so P2D = fcam^-1(P3D) and P3D = fcam(P2D) are not exactly equivalent inverse operations, and repeated computation increases the accumulated error. Simplifying the computation as described in this embodiment eliminates the accumulated error and improves computational accuracy.
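The accumulated-error argument above can be illustrated with a toy model. Here the fixed-point/lookup-table nature of fcam and its inverse is mimicked by truncating to coarse grids (the focal length, grid sizes, and starting radius are assumed values); each needless 3D-to-2D-to-3D round trip then loses a little information, while the merged pipeline performs no such round trip.

```python
import math

F = 500.0  # assumed focal length in pixels
Q = 1e-3   # assumed fixed-point resolution of the ray angle (radians)

def ray_angle_q(r):
    """Quantized 2D->3D step: image radius -> truncated incident-ray angle."""
    return math.floor(math.atan2(r, F) / Q) * Q

def radius_q(theta):
    """Quantized 3D->2D step: incident-ray angle -> whole-pixel radius."""
    return float(math.floor(F * math.tan(theta)))

r0 = 419.0
one_trip = radius_q(ray_angle_q(r0))  # error after a single round trip

r = r0
for _ in range(5):  # five needless 3D->2D->3D round trips
    r = radius_q(ray_angle_q(r))

# The truncation losses compound: the five-trip error exceeds the
# single-trip error, whereas the merged pipeline takes zero such trips.
print(r0 - one_trip, r0 - r)
```

The exact numbers depend on the assumed grids, but the direction of the effect matches the text: because the quantized forward and inverse maps are not exact inverses, every extra round trip adds error.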
FIG. 5 is a flowchart of another image processing method according to the present invention, and FIG. 6 is a schematic diagram of the flowchart shown in FIG. 5. This embodiment is a specific implementation of performing distortion correction and electronic image stabilization on an input image. As shown in FIG. 5, the method of this embodiment may include the following steps.
Step 301: obtain two-dimensional coordinate points of an input image.
For a detailed explanation of step 301, refer to step 101 of the embodiment shown in FIG. 2; details are not repeated here.
Step 302: perform a two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to parameters of the camera and a distortion correction model to obtain a first processing result.
Step 302 implements the 2D-to-3D conversion shown in FIG. 6. Specifically, the two-dimensional to three-dimensional conversion operation is performed on the two-dimensional coordinate points according to the parameters of the camera and the distortion correction model, that is, the two-dimensional coordinate points are mapped to incident rays.
Let P3D denote the first processing result and P2D a two-dimensional coordinate point. Accordingly, step 302 may obtain the first processing result P3D according to the formula P3D = fpin(P2D), where the function fpin() may be a polynomial.
Step 303: perform electronic image stabilization processing on the first processing result to obtain a second processing result.
The second rotation matrix is the rotation matrix used in the electronic image stabilization processing and is determined according to the measurement parameters obtained by the inertial measurement unit connected to the camera. Step 303 implements the 3D-to-3D rotation shown in FIG. 6, that is, the incident rays obtained in step 302 are rotated according to the second rotation matrix to obtain the second processing result.
Let P′3D denote the second processing result and RIS the second rotation matrix. Accordingly, step 303 may obtain the second processing result P′3D according to the formula P′3D = RIS·P3D.
Substituting the formula P3D = fpin(P2D) of step 302 into P′3D = RIS·P3D yields P′3D = RIS·fpin(P2D).
Step 304: map the second processing result to a two-dimensional image coordinate system.
Specifically, the incident rays rotated in step 303 are mapped to the two-dimensional image coordinate system to obtain an output image, which is the image after the distortion correction and electronic image stabilization operations. Step 304 implements the 3D-to-2D mapping shown in FIG. 6.
Let P′2D denote the coordinate point mapped to the two-dimensional image coordinate system. Accordingly, step 304 may map the second processing result to the two-dimensional image coordinate system according to the formula P′2D = fcam^-1(P′3D), where the mapping function fcam^-1() can be flexibly set as required.
Substituting the formula P′3D = RIS·fpin(P2D) of step 303 into P′2D = fcam^-1(P′3D) yields P′2D = fcam^-1(RIS·fpin(P2D)).
In this embodiment, a two-dimensional to three-dimensional conversion operation is performed on the obtained two-dimensional coordinate points of the input image according to the parameters of the camera and the distortion correction model to obtain a first processing result; electronic image stabilization processing is performed on the first processing result to obtain a second processing result; and the second processing result is mapped to the two-dimensional image coordinate system to obtain an output image. The input image is thus processed quickly to complete the distortion correction and electronic image stabilization operations, which can effectively reduce computational complexity, shorten computation time, and improve image processing efficiency.
Moreover, by completing the distortion correction and electronic image stabilization operations in the above manner, the present application does not need to additionally perform P2D = fcam^-1(P3D) and P3D = fcam(P2D) after P3D = fpin(P2D) and before P′3D = RIS·P3D, which simplifies the computation. Furthermore, the computations of fcam^-1() and fcam() are usually implemented through fixed-point arithmetic or lookup tables, so P2D = fcam^-1(P3D) and P3D = fcam(P2D) are not exactly equivalent inverse operations, and repeated computation increases the accumulated error. Simplifying the computation as described in this embodiment eliminates the accumulated error and improves computational accuracy.
FIG. 7 is a flowchart of another image processing method according to the present invention, and FIG. 8 is a schematic diagram of the flowchart shown in FIG. 7. This embodiment is a specific implementation of performing virtual reality and electronic image stabilization processing on an input image. As shown in FIG. 7, the method of this embodiment may include the following steps.
Step 401: obtain two-dimensional coordinate points of an input image.
For a detailed explanation of step 401, refer to step 101 of the embodiment shown in FIG. 2; details are not repeated here.
Step 402: perform a two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to parameters of the camera and a camera imaging model to obtain a first processing result.
Step 402 implements the 2D-to-3D conversion shown in FIG. 8. Specifically, the two-dimensional to three-dimensional conversion operation is performed on the two-dimensional coordinate points according to the parameters of the camera, that is, the two-dimensional coordinate points are mapped to incident rays.
Let P3D denote the first processing result and P2D a two-dimensional coordinate point. Accordingly, step 402 may obtain the first processing result P3D according to the formula P3D = fcam(P2D).
Step 403: perform virtual reality and electronic image stabilization processing on the first processing result to obtain a second processing result.
The first rotation matrix is the rotation matrix used in the virtual reality processing and is determined according to the attitude angle parameter of the observer. The second rotation matrix is the rotation matrix used in the electronic image stabilization processing and is determined according to the measurement parameters obtained by the inertial measurement unit connected to the camera. Step 403 implements the 3D-to-3D-to-3D rotation shown in FIG. 8, that is, the incident rays obtained in step 402 are rotated according to the first rotation matrix and the second rotation matrix to obtain the second processing result.
Let P′3D denote the second processing result, RVR the first rotation matrix, and RIS the second rotation matrix. Accordingly, one implementation of step 403 is to obtain the second processing result P′3D according to the formula P′3D = RIS·RVR·P3D, that is, the virtual reality processing is performed first and the electronic image stabilization afterwards. Substituting the formula of step 402 into P′3D = RIS·RVR·P3D yields P′3D = RIS·RVR·fcam(P2D).
It should be noted that another implementation of step 403 is to obtain the second processing result P′3D according to the formula P′3D = RVR·RIS·P3D, that is, the electronic image stabilization is performed first and the virtual reality processing afterwards. Substituting the formula of step 402 into P′3D = RVR·RIS·P3D yields P′3D = RVR·RIS·fcam(P2D).
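The two orderings of step 403 generally give different results, because rotation matrices about different axes do not commute; a small numerical check (with assumed angles standing in for RVR and RIS) makes this concrete.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

R_vr = rot_x(np.deg2rad(10.0))  # assumed viewer-attitude rotation
R_is = rot_y(np.deg2rad(7.0))   # assumed stabilization rotation

ray = np.array([0.2, -0.1, 1.0])
ray /= np.linalg.norm(ray)

first_vr = R_is @ R_vr @ ray  # virtual reality first, stabilization second
first_is = R_vr @ R_is @ ray  # stabilization first, virtual reality second

# The two orderings move the same incident ray to different directions,
# so the choice of ordering is part of the implementation.
print(np.linalg.norm(first_vr - first_is))
```

Either ordering is a single matrix product applied to the rays, so both fit the one-lift, one-projection structure of the embodiment.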
Step 404: map the second processing result to a two-dimensional image coordinate system.
Specifically, the incident rays rotated in step 403 are mapped to the two-dimensional image coordinate system to obtain an output image, which is the image after the virtual reality and electronic image stabilization operations. Step 404 implements the 3D-to-2D mapping shown in FIG. 8.
Let P′2D denote the coordinate point mapped to the two-dimensional image coordinate system. Accordingly, step 404 may map the second processing result to the two-dimensional image coordinate system according to the formula P′2D = fcam^-1(P′3D), where the mapping function fcam^-1() can be flexibly set as required.
Substituting the formula P′3D = RIS·RVR·fcam(P2D) of step 403 into P′2D = fcam^-1(P′3D) yields P′2D = fcam^-1(RIS·RVR·fcam(P2D)); substituting the formula P′3D = RVR·RIS·fcam(P2D) of step 403 yields P′2D = fcam^-1(RVR·RIS·fcam(P2D)).
In this embodiment, a two-dimensional to three-dimensional conversion operation is performed on the obtained two-dimensional coordinate points of the input image according to the parameters of the camera and the camera imaging model to obtain a first processing result; virtual reality and electronic image stabilization processing are performed on the first processing result to obtain a second processing result; and the second processing result is mapped to the two-dimensional image coordinate system to obtain an output image. The input image is thus processed quickly to complete the virtual reality and electronic image stabilization operations, which can effectively reduce computational complexity, shorten computation time, and improve image processing efficiency.
Moreover, by completing the virtual reality and electronic image stabilization operations in the above manner, the present application does not need to additionally perform P2D = fcam^-1(P3D) and P3D = fcam(P2D) after P3D = fcam(P2D) and before P′3D = RIS·RVR·P3D (or P′3D = RVR·RIS·P3D), which simplifies the computation. Furthermore, the computations of fcam^-1() and fcam() are usually implemented through fixed-point arithmetic or lookup tables, so P2D = fcam^-1(P3D) and P3D = fcam(P2D) are not exactly equivalent inverse operations, and repeated computation increases the accumulated error. Simplifying the computation as described in this embodiment eliminates the accumulated error and improves computational accuracy.
FIG. 9 is a flowchart of another image processing method according to the present invention, and FIG. 10 is a schematic diagram of the flowchart shown in FIG. 9. This embodiment is a specific implementation of performing distortion correction, virtual reality, and electronic image stabilization processing on an input image. As shown in FIG. 9, the method of this embodiment may include the following steps.
Step 501: obtain two-dimensional coordinate points of an input image.
For a detailed explanation of step 501, refer to step 101 of the embodiment shown in FIG. 2; details are not repeated here.
Step 502: perform a two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to parameters of the camera and a distortion correction model to obtain a first processing result.
Step 502 implements the 2D-to-3D conversion shown in FIG. 10. Specifically, the two-dimensional to three-dimensional conversion operation is performed on the two-dimensional coordinate points according to the parameters of the camera and the distortion correction model, that is, the two-dimensional coordinate points are mapped to incident rays.
Let P3D denote the first processing result and P2D a two-dimensional coordinate point. Accordingly, step 502 may obtain the first processing result P3D according to the formula P3D = fpin(P2D).
It should be noted that, unlike the embodiment shown in FIG. 7, this embodiment performs distortion correction, virtual reality, and electronic image stabilization processing; when all three operations are to be completed, step 502 needs to be executed first to perform the distortion correction. The first processing result of this embodiment is P3D = fpin(P2D).
Step 503: perform virtual reality and electronic image stabilization processing on the first processing result to obtain a second processing result.
The first rotation matrix is the rotation matrix used in the virtual reality processing and is determined according to the attitude angle parameter of the observer. The second rotation matrix is the rotation matrix used in the electronic image stabilization processing and is determined according to the measurement parameters obtained by the inertial measurement unit connected to the camera. Step 503 implements the 3D-to-3D-to-3D rotation shown in FIG. 10, that is, the incident rays obtained in step 502 are rotated according to the first rotation matrix and the second rotation matrix to obtain the second processing result; as shown in FIG. 10, the virtual reality processing is performed first and the electronic image stabilization afterwards.
It can be understood that step 503 may also perform the electronic image stabilization first and the virtual reality processing afterwards.
Let P′3D denote the second processing result, RVR the first rotation matrix, and RIS the second rotation matrix. One implementation of step 503 is to obtain the second processing result P′3D according to the formula P′3D = RIS·RVR·P3D. Substituting the formula of step 502 into P′3D = RIS·RVR·P3D yields P′3D = RIS·RVR·fpin(P2D).
It should be noted that another implementation of step 503 is to obtain the second processing result P′3D according to the formula P′3D = RVR·RIS·P3D. Substituting the formula of step 502 into P′3D = RVR·RIS·P3D yields P′3D = RVR·RIS·fpin(P2D).
Step 504: map the second processing result to a two-dimensional image coordinate system.
Specifically, the incident rays rotated in step 503 are mapped to the two-dimensional image coordinate system to obtain an output image, which is the image after the distortion correction, electronic image stabilization, and virtual reality operations. Step 504 implements the 3D-to-2D mapping shown in FIG. 10.
Let P′2D denote the coordinate point mapped to the two-dimensional image coordinate system. Accordingly, step 504 may map the second processing result to the two-dimensional image coordinate system according to the formula P′2D = fcam^-1(P′3D), where the mapping function fcam^-1() can be flexibly set as required.
Substituting the formula P′3D = RIS·RVR·fpin(P2D) of step 503 into P′2D = fcam^-1(P′3D) yields P′2D = fcam^-1(RIS·RVR·fpin(P2D)); substituting the formula P′3D = RVR·RIS·fpin(P2D) of step 503 yields P′2D = fcam^-1(RVR·RIS·fpin(P2D)).
In this embodiment, a two-dimensional to three-dimensional conversion operation is performed on the obtained two-dimensional coordinate points of the input image according to the parameters of the camera and the distortion correction model to obtain a first processing result; virtual reality and electronic image stabilization processing are performed on the first processing result to obtain a second processing result; and the second processing result is mapped to the two-dimensional image coordinate system to obtain an output image. The input image is thus processed quickly to complete the distortion correction, electronic image stabilization, and virtual reality operations, which can effectively reduce computational complexity, shorten computation time, and improve image processing efficiency.
Moreover, by completing the distortion correction, virtual reality, and electronic image stabilization operations in the above manner, the present application does not need to additionally perform P2D = fcam^-1(P3D) and P3D = fcam(P2D) after P3D = fpin(P2D) and before P′3D = RIS·RVR·P3D (or P′3D = RVR·RIS·P3D), which simplifies the computation. Furthermore, the computations of fcam^-1() and fcam() are usually implemented through fixed-point arithmetic or lookup tables, so P2D = fcam^-1(P3D) and P3D = fcam(P2D) are not exactly equivalent inverse operations, and repeated computation increases the accumulated error. Simplifying the computation as described in this embodiment eliminates the accumulated error and improves computational accuracy.
FIG. 11 is a schematic structural diagram of an image processing apparatus according to the present invention. As shown in FIG. 11, the apparatus of this embodiment may include a lens (not shown), an image sensor 11, and a processor 12. The image sensor 11 is configured to acquire a two-dimensional image and use it as an input image. The processor 12 is configured to obtain two-dimensional coordinate points of the input image; perform a two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to a camera imaging model or a distortion correction model to obtain a first processing result; perform at least one of virtual reality processing and electronic image stabilization processing on the first processing result to obtain a second processing result; and map the second processing result to a two-dimensional image coordinate system.
The processor 12 is configured to: perform the two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to parameters of the camera and the camera imaging model to obtain the first processing result; or perform the two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to parameters of the camera and the distortion correction model to obtain the first processing result.
The processor 12 is configured to perform virtual reality processing on the first processing result according to a first rotation matrix.
The processor 12 is configured to perform electronic image stabilization processing on the first processing result according to a second rotation matrix.
The first rotation matrix is determined according to an attitude angle parameter of an observer, and the first processing result is processed according to the first rotation matrix to obtain the second processing result.
The processor 12 is further configured to obtain the attitude angle parameter of the observer.
The second rotation matrix is determined according to measurement parameters obtained by an inertial measurement unit connected to the camera, and the processor 12 is configured to process the first processing result according to the second rotation matrix to obtain the second processing result.
The processor 12 is further configured to obtain the measurement parameters from the inertial measurement unit connected to the camera and determine the second rotation matrix according to the measurement parameters; or the processor 12 is further configured to obtain the second rotation matrix from the inertial measurement unit connected to the camera, the second rotation matrix being determined by the inertial measurement unit according to the measurement parameters.
The camera imaging model includes any one of a pinhole imaging model, an equirectangular model, a stereographic imaging model, a fisheye lens model, and a wide-angle lens model.
The apparatus of this embodiment may be used to execute the technical solutions of the foregoing method embodiments; the implementation principle and technical effects are similar and are not repeated here.
It should be noted that the division of modules in the embodiments of the present invention is schematic and is merely a logical function division; actual implementations may use other division manners. The functional modules in the embodiments of the present invention may be integrated into one processing module, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The foregoing embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (for example, infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
A person skilled in the art can clearly understand that, for convenience and brevity of description, the division of the foregoing functional modules is used only as an example. In practical applications, the foregoing functions may be assigned to different functional modules as required, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. For the specific working process of the apparatus described above, reference may be made to the corresponding process in the foregoing method embodiments; details are not repeated here.
Finally, it should be noted that the foregoing embodiments are merely intended to describe the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions recorded in the foregoing embodiments, or equivalent replacements may be made to some or all of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (11)

  1. An image processing method, characterized by comprising:
    obtaining two-dimensional coordinate points of an input image;
    performing a two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to a camera imaging model or a distortion correction model to obtain a first processing result;
    performing at least one of virtual reality processing and electronic image stabilization processing on the first processing result to obtain a second processing result; and
    mapping the second processing result to a two-dimensional image coordinate system.
  2. The method according to claim 1, characterized in that the performing a two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to a camera imaging model or a distortion correction model to obtain a first processing result comprises:
    performing the two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to parameters of the camera and the distortion correction model to obtain the first processing result; or
    performing the two-dimensional to three-dimensional conversion operation on the two-dimensional coordinate points according to parameters of the camera and the camera imaging model to obtain the first processing result.
  3. The method according to claim 1, characterized in that virtual reality processing is performed on the first processing result according to a first rotation matrix.
  4. The method according to claim 1, characterized in that electronic image stabilization processing is performed on the first processing result according to a second rotation matrix.
  5. The method according to claim 3, characterized in that the first rotation matrix is determined according to an attitude angle parameter of an observer, and the first processing result is processed according to the first rotation matrix to obtain the second processing result.
  6. The method according to claim 5, characterized in that the method further comprises:
    obtaining the attitude angle parameter of the observer.
  7. The method according to claim 4, characterized in that the second rotation matrix is determined according to measurement parameters obtained by an inertial measurement unit connected to the camera, and the first processing result is processed according to the second rotation matrix to obtain the second processing result.
  8. The method according to claim 7, characterized in that the method further comprises:
    obtaining the measurement parameters from the inertial measurement unit connected to the camera, and determining the second rotation matrix according to the measurement parameters; or
    obtaining the second rotation matrix from the inertial measurement unit connected to the camera, the second rotation matrix being determined by the inertial measurement unit according to the measurement parameters.
  9. The method according to claim 2, characterized in that the camera imaging model comprises any one of a pinhole imaging model, an equirectangular model, a stereographic imaging model, a fisheye lens model, and a wide-angle lens model.
  10. An image processing apparatus, characterized by comprising a lens, an image sensor, and a processor;
    the image sensor acquires a two-dimensional image through the lens; and
    the processor is configured to implement the image processing method according to any one of claims 1 to 9.
  11. A computer storage medium having a computer program or instructions stored thereon, characterized in that, when the computer program or instructions are executed by a processor or a computer, the image processing method according to any one of claims 1 to 9 is implemented.
PCT/CN2017/113244 2017-11-28 2017-11-28 Image processing method and apparatus WO2019104453A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201780028205.2A CN109155822B (zh) 2017-11-28 2017-11-28 Image processing method and apparatus
PCT/CN2017/113244 WO2019104453A1 (zh) 2017-11-28 2017-11-28 Image processing method and apparatus
US16/865,786 US20200267297A1 (en) 2017-11-28 2020-05-04 Image processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/113244 WO2019104453A1 (zh) 2017-11-28 2017-11-28 Image processing method and apparatus

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/865,786 Continuation US20200267297A1 (en) 2017-11-28 2020-05-04 Image processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2019104453A1 true WO2019104453A1 (zh) 2019-06-06

Family

ID=64803849

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/113244 WO2019104453A1 (zh) 2017-11-28 2017-11-28 图像处理方法和装置

Country Status (3)

Country Link
US (1) US20200267297A1 (zh)
CN (1) CN109155822B (zh)
WO (1) WO2019104453A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3979617A4 (en) * 2019-08-26 2022-06-15 Guangdong Oppo Mobile Telecommunications Corp., Ltd. METHOD AND DEVICE FOR ANTI-BLUR RECORDINGS, TERMINAL DEVICE AND STORAGE MEDIA
CN112489114B (zh) * 2020-11-25 2024-05-10 深圳地平线机器人科技有限公司 Image conversion method and apparatus, computer-readable storage medium, and electronic device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104935909A (zh) * 2015-05-14 2015-09-23 清华大学深圳研究生院 Multi-image super-resolution method based on depth information
CN105894574A (zh) * 2016-03-30 2016-08-24 清华大学深圳研究生院 Binocular three-dimensional reconstruction method
CN107346551A (zh) * 2017-06-28 2017-11-14 太平洋未来有限公司 Light-field light source orientation method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101876533B (zh) * 2010-06-23 2011-11-30 北京航空航天大学 Microscopic stereo vision calibration method
US10229477B2 (en) * 2013-04-30 2019-03-12 Sony Corporation Image processing device, image processing method, and program
CN104833360B (zh) * 2014-02-08 2018-09-18 无锡维森智能传感技术有限公司 Method for converting two-dimensional coordinates into three-dimensional coordinates
CN105227828B (zh) * 2015-08-25 2017-03-15 努比亚技术有限公司 Photographing apparatus and method
TWI555378B (zh) * 2015-10-28 2016-10-21 輿圖行動股份有限公司 Method and system for panoramic fisheye camera image correction, synthesis, and depth-of-field reconstruction
US20170286993A1 (en) * 2016-03-31 2017-10-05 Verizon Patent And Licensing Inc. Methods and Systems for Inserting Promotional Content into an Immersive Virtual Reality World

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104935909A (zh) * 2015-05-14 2015-09-23 清华大学深圳研究生院 Multi-image super-resolution method based on depth information
CN105894574A (zh) * 2016-03-30 2016-08-24 清华大学深圳研究生院 Binocular three-dimensional reconstruction method
CN107346551A (zh) * 2017-06-28 2017-11-14 太平洋未来有限公司 Light-field light source orientation method

Also Published As

Publication number Publication date
US20200267297A1 (en) 2020-08-20
CN109155822B (zh) 2021-07-27
CN109155822A (zh) 2019-01-04

Similar Documents

Publication Publication Date Title
WO2018153374A1 (zh) Camera calibration
WO2019205852A1 (zh) Method and apparatus for determining pose of image capturing device, and storage medium therefor
CN107945112B (zh) Panoramic image stitching method and apparatus
EP3134868B1 (en) Generation and use of a 3d radon image
US10726580B2 (en) Method and device for calibration
US11282232B2 (en) Camera calibration using depth data
US10803556B2 (en) Method and apparatus for image processing
WO2010028559A1 (zh) Image stitching method and apparatus
WO2017020150A1 (zh) Image processing method and apparatus, and camera
CN111325792B (zh) Method, apparatus, device, and medium for determining camera pose
WO2019037038A1 (zh) Image processing method and apparatus, and server
EP3318053A1 (en) Full-spherical video imaging system and computer-readable recording medium
WO2019232793A1 (zh) Dual-camera calibration method, electronic device, and computer-readable storage medium
TWI669683B (zh) Three-dimensional image reconstruction method and apparatus, and non-transitory computer-readable storage medium thereof
WO2021104308A1 (zh) Panoramic depth measurement method, four-eye fisheye camera, and binocular fisheye camera
WO2023236508A1 (zh) Image stitching method and system based on a gigapixel array camera
CN110675456A (zh) Method, apparatus, and storage medium for calibrating extrinsic parameters of multiple depth cameras
US8509522B2 (en) Camera translation using rotation from device
CN109785225B (zh) Method and apparatus for image correction
CN109427040B (zh) Image processing apparatus and method
WO2019104453A1 (zh) Image processing method and apparatus
WO2018170725A1 (zh) Image transmission method, apparatus, and device
WO2018219274A1 (zh) Noise reduction processing method and apparatus, storage medium, and terminal
WO2023221969A1 (zh) 3D picture shooting method and 3D shooting system
WO2020135577A1 (zh) Picture generation method and apparatus, terminal, and corresponding storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17933748

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17933748

Country of ref document: EP

Kind code of ref document: A1