WO2019000676A1 - Method and device for realizing three-dimensional images using dual cameras - Google Patents

Method and device for realizing three-dimensional images using dual cameras Download PDF

Info

Publication number
WO2019000676A1
WO2019000676A1 (application PCT/CN2017/103820, CN2017103820W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
depth
field data
camera
phase grating
Prior art date
Application number
PCT/CN2017/103820
Other languages
English (en)
French (fr)
Inventor
曾广荣
都斌
Original Assignee
诚迈科技(南京)股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 诚迈科技(南京)股份有限公司 filed Critical 诚迈科技(南京)股份有限公司
Publication of WO2019000676A1 publication Critical patent/WO2019000676A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/64 Three-dimensional objects
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B 11/2513 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, with several lines being projected in more than one direction, e.g. grids, patterns
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B 11/2545 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, with one projection direction and several detection directions, e.g. stereo
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H04N 13/25 Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; using image signals from one sensor to control the characteristics of another sensor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H04N 13/254 Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/271 Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/20 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from infrared radiation only
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/30 Transforming light or analogous information into electric information
    • H04N 5/33 Transforming infrared radiation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/10021 Stereoscopic video; Stereoscopic image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/12 Acquisition of 3D measurements of objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/257 Colour aspects

Definitions

  • the invention relates to a method and a device for realizing a three-dimensional image, in particular to a method and a device for realizing a three-dimensional image by using a dual camera.
  • Shooting 3D (Three Dimensions) photos and videos with dual cameras will be a future trend. Users can capture 3D photos and videos directly with a phone's dual cameras to enable more applications and scenarios, such as virtual reality (VR), augmented reality (AR), and image recognition and measurement.
  • At present, some mobile phones are equipped with two cameras on the same side (such as front or rear dual cameras); the first and second cameras shoot simultaneously to obtain two photos, one color and one grayscale.
  • A super-pixel synthesis algorithm reconstructs the grayscale photo data to obtain depth of field data, and a 3D algorithm then simulates a 3D image from the color photo and the depth of field data.
  • However, because the captured photos themselves carry no depth data, the depth of field data simulated by the super-pixel synthesis algorithm is inaccurate, so the synthesized 3D content is distorted and is not a true 3D image.
  • a method for realizing a three-dimensional image by using a dual camera comprising the following steps:
  • after receiving the image information and the depth of field data, the image signal processor passes the depth of field data directly to the 3D image generation module if the data is judged to be depth of field data, and performs conventional image signal processing if it is judged to be image information;
  • the depth of field data and the processed image information are synthesized by a 3D image generation module to generate a 3D image.
  • Further, the step of capturing, with the second camera, the fringe-deformed phase grating image of the same viewing-angle picture space and calculating the depth of field data of the target object further includes:
  • demodulating the captured fringe-deformed phase grating image to obtain a phase change containing the depth information, and then calculating the depth of field data of the target object by an optics-like triangulation computation.
  • Further, projecting the phase grating structured light onto the same viewing-angle picture space with the structured light emitting unit means projecting it with the laser diode module on the second camera.
  • Further, the second camera is a general-purpose color camera, and before capturing, with the second camera, the fringe-deformed phase grating image produced by the structured light projection, the method further includes:
  • setting the working mode of the second camera to an IR grayscale mode.
  • Further, the image information and the depth of field data are sent to the image signal processor separately through two MIPI interfaces, or are encapsulated together and sent through one MIPI interface.
  • the method controls shooting synchronization and frame synchronization by using an I2C communication bus.
  • the present invention also provides an apparatus for realizing a three-dimensional image by using a dual camera, comprising:
  • An image information collecting module configured to acquire image information of the target object by using the first camera, and send the image information to the image signal processor;
  • a depth of field data generating module using the second camera to acquire a phase grating image of the stripe deformation of the same viewing angle image space, and calculating depth of field data of the target object, and transmitting the image to the image signal processor;
  • the image signal processor is configured to: after receiving the image information and the depth of field data, pass the depth of field data directly to the 3D image generation module if the data is judged to be depth of field data, and, if it is judged to be image information, perform conventional image signal processing on it and send the result to the 3D image generation module;
  • the 3D image generation module is configured to synthesize the depth of field data and the processed image information to generate a 3D image.
  • the depth of field data generating module further includes:
  • a structured light emitting unit for projecting phase grating structured light to the same view picture space
  • the phase grating image acquisition unit uses the second camera to collect the phase grating image after the stripe deformation is generated by the phase grating structure light projection;
  • a depth of field data calculation unit, which demodulates the captured fringe-deformed phase grating image to obtain a phase change containing the depth information, and then calculates the depth of field data of the target object by an optics-like triangulation computation.
  • the structured light emitting unit projects the phase grating structured light to the same view picture space by using the laser diode module on the second camera.
  • the second camera adopts a universal color camera, and its working mode is set to an IR gray mode.
  • Further, the image information obtained by the image information collection module and the depth of field data obtained by the depth of field data generation module are sent to the image signal processor separately through two MIPI interfaces, or are encapsulated together and sent through one MIPI interface.
  • the device controls shooting synchronization and frame synchronization using an I2C communication bus.
  • Compared with the prior art, the method and device for realizing a three-dimensional image using dual cameras have the following beneficial effects:
  • by letting the obtained depth of field data bypass the image-processing functions of the image signal processor, the depth of field data is 3D-synthesized with the image information from the other camera, and shooting synchronization and frame synchronization are controlled through the I2C communication bus, realizing real-time, high-fidelity 3D photos and videos;
  • the invention adapts to various shooting environments, including night scenes and scenes with moving objects, and can realize a bypass-ISP, dual-camera frame-synchronized 3D shooting method on mobile terminals including mobile phones, tablets, televisions, smart cars, VR, AR, and drones.
  • FIG. 1 is a flow chart of steps of an embodiment of a method for implementing a three-dimensional image by using a dual camera
  • FIG. 2 is a schematic diagram of functions and processes of an image signal processor
  • FIG. 3 is a system structural diagram of an embodiment of a system for realizing a three-dimensional image by using a dual camera;
  • FIG. 4 is a detailed structural diagram of a depth of field data generating module according to an embodiment of the present invention.
  • FIG. 5 is a block diagram of a dual-camera hardware design using a separate MIPI channel in accordance with an embodiment of the present invention
  • FIG. 6 is a block diagram of a dual-camera hardware design using a MIPI virtual channel in accordance with an embodiment of the present invention
  • FIG. 7 is a schematic diagram of bypassing ISP and transparent transmission of depth of field data according to an embodiment of the present invention.
  • a method for implementing a three-dimensional image by using a dual camera includes the following steps:
  • Step 101 Acquire image information of the target object by using the first camera, and send the image information to the image signal processor;
  • Step 102 The second camera is used to collect the phase grating image of the stripe deformation in the same view image space, and the depth of field data of the target object is calculated and sent to the image signal processor.
  • step 102 further includes:
  • Step S1: the structured light emitting unit projects phase grating structured light, i.e., a highly stable infrared-band laser, onto the same viewing-angle picture space; the grating is modulated by the depth of the object surface, causing fringe deformation. Specifically, the structured light emitting unit may be an LDM (Laser Diode Module) on the camera;
  • Step S2: the second camera captures the fringe-deformed phase grating image produced by the structured light projection. In this embodiment, the second camera can reuse an existing general-purpose color camera, but before use the infrared filter is switched off by the voice coil motor microcontroller so that the full-transmission spectrum filter takes effect and the camera sensor receives infrared light; that is, the working mode of the color camera is set to the IR grayscale mode, so that the camera captures the deformed-fringe phase grating image of the target object's surface while working.
  • Step S3: the captured fringe-deformed phase grating image is demodulated to obtain a phase change containing the depth information, and the depth of field data of the target object is then calculated by an optics-like triangulation computation.
  • the image information of the target object obtained by the first camera and the depth of field data of the target object are respectively sent to the image signal processor ISP module on the mobile phone SoC through two MIPI interfaces, or the image information is After being encapsulated with the depth of field data, it is sent to the image signal processor ISP on the mobile phone SoC through a MIPI interface for processing.
  • Step 103: after receiving the two signals, the image signal processor passes the depth of field data directly to the 3D image generation module if the signal is judged to be depth of field data; if the signal is judged to be image information captured by the first camera, conventional image signal processing, including denoising and white balance, is performed and the result is sent to the 3D image generation module.
  • In current conventional phone hardware designs, an image signal processor (ISP) is built into the phone's main chip (SoC); the photo taken by each camera is connected to the ISP through the MIPI interface, and the ISP performs a series of processing steps on the raw image, such as white balance, denoising, and edge-distortion correction, before supplying the processed image data to the next module for preview and recording. In the main chip's built-in ISP flow, Pass1Node handles canvas resizing and format conversion of the image signal, while Pass2Node handles white balance, denoising, and exposure adjustment of the image information, as well as recognition of faces in the image. Such a mechanism, however, would corrupt the depth of field data from the grayscale camera, so this step lets the obtained depth of field data bypass the ISP's image-processing functions.
  • Step 104: the 3D image generation module synthesizes the image information processed by the image signal processor with the depth of field data to generate a 3D image. Specifically, the 3D image generation module builds a 3D model of the space from the depth of field data and renders the color image onto the 3D model according to the frame synchronization sequence number, completing the synthesis of the 3D image or video.
  • In this embodiment, the image signal processor ISP receives the image information and the depth of field data through two MIPI interfaces (MIPI0 and MIPI1). The depth of field data, once identified, bypasses the Pass1 and Pass2 processing entirely and is uploaded directly, via the callback function DataCallBack, to the function AppOnPreviewFrame in the back-end or upper-layer application, so that the application obtains the transparently transmitted depth of field data. Transparent transmission here refers to a data-transfer mode independent of the transmission network's medium, modulation/demodulation scheme, transmission mode, and transmission protocol, though the invention is not limited thereto;
  • the identified image information is processed by Pass1 and Pass2 of Pass1Node and Pass2Node, including canvas resizing, format conversion, white balance, denoising, and exposure adjustment, and is then uploaded via the DataCallBack callback to the function AppOnPreviewFrame in the back-end or upper-layer application. That is, in this embodiment the 3D image generation module resides in the back-end or upper-layer application, and the synthesis of depth of field data and image information is completed there, though the invention is not limited thereto.
  • It can be seen that the present invention can let the obtained depth of field data bypass the image-processing functions of the image signal processor and synthesize it with the image information captured by the color camera (first camera), or transparently transmit the depth of field data to the back-end or upper-layer application for synthesis with the color camera's image information there, while controlling shooting synchronization and frame synchronization through the I2C communication bus.
  • As shown in FIG. 3, a system for implementing a three-dimensional image using dual cameras includes: an image information collection module 301, a depth of field data generation module 302, an image signal processor (ISP) 303, and a 3D image generation module 304.
  • The image information collection module 301 is configured to acquire image information of the target object; in this embodiment it is implemented with the first camera, which may be an ordinary color camera.
  • The depth of field data generation module 302 captures, with the second camera, the fringe-deformed phase grating image of the same viewing-angle picture space and calculates the depth of field data of the target object.
  • The image signal processor 303 is configured to receive the image information and the depth of field data: depth of field data is passed directly to the 3D image generation module 304, while image information undergoes conventional image signal processing, including denoising and white balance, before being sent to the 3D image generation module 304.
  • The 3D image generation module 304 is configured to synthesize the depth of field data and the processed image information to generate a 3D image; specifically, it builds a 3D model of the space from the depth of field data and renders the color image onto the 3D model according to the frame synchronization sequence number, completing the synthesis of the 3D image or video.
  • The depth of field data generation module 302 further includes:
  • a structured light emitting unit 3021, configured to project phase grating structured light, i.e., a highly stable infrared-band laser, onto the same viewing-angle picture space as the first camera, where the grating is modulated by depth at the object surface, causing fringe deformation; specifically, the structured light emitting unit may be an LDM (Laser Diode Module) on the camera;
  • a phase grating image acquisition unit 3022, which captures the fringe-deformed phase grating image with the second camera; the second camera can reuse an existing general-purpose color camera, but before use the infrared filter is switched off by the voice coil motor microcontroller so that the full-transmission spectrum filter takes effect and the camera sensor receives infrared light, i.e., the color camera's working mode is set to the IR grayscale mode so that it captures the deformed-fringe phase grating image of the target object's surface while working;
  • a depth of field data calculation unit 3023, configured to demodulate the captured fringe-deformed phase grating image to obtain a phase change containing the depth information and then calculate the depth of field data of the target object by an optics-like triangulation computation.
  • In this embodiment, the image information of the target object acquired by the image information collection module 301 using the color camera (RGB Camera), i.e., the first camera, and the depth of field data generated by the depth of field data generation module 302 from the fringe-deformed phase grating image captured by the infrared camera (IR Camera) can be sent to the image signal processor ISP on the SoC through two MIPI interfaces (MIPI0 and MIPI1), as shown in FIG. 5, or encapsulated together and sent through one MIPI interface, as shown in FIG. 6.
  • The image signal processor ISP 303 lets the depth of field data obtained by the infrared camera bypass the Pass1 and Pass2 processing of Pass1 Node and Pass2 Node entirely, and uploads the data directly, via the callback function DataCallBack, to the AppOnPreviewFrame function of the upper-layer application APP, so that the application obtains the transparently transmitted depth of field data, as shown by the dotted path in FIG. 7; that is, the 3D image generation module resides in the back-end or upper-layer application, where the synthesis of image information and depth of field data is completed. The image information obtained by the color camera is processed by Pass1 and Pass2 of Pass1 Node and Pass2 Node, including canvas resizing, format conversion, white balance, denoising, and exposure adjustment, and the data is then uploaded via the DataCallBack callback to the function AppOnPreviewFrame, as shown by the solid path in FIG. 7.
  • Part of the business-logic code of the image signal processor ISP in the present invention is implemented as follows:
  • Step 0: Initialize a mutex and a buffer pointer on the first node Pass1 Node of the first path Pass1 of the image signal processor ISP.
  • Step 1: Initialize a mutex, a buffer pointer, and a SetIRisAddr function on the second node Pass2 Node of the second path Pass2.
  • Step 2: Define the SaveIRis function in the first node Pass1 Node.
  • Step 3: Through the SaveIRis function interface, send the data arriving on the first path Pass1 to the ptrbuff pointer defined in the Pass1 structure.
  • Step 4: Define the ReadIRisAddr function in the second node Pass2 Node.
  • Step 5: Through the ReadIRisAddr function interface, convert the data in the IRisRead Buff from 10-bit to 8-bit, rotate it 90 degrees, and send it to the output of the second path Pass2.
  • Step 6: Call the setIRisAddr function of the second path Pass2, pointing the IRisRead pointer to mpIRisBuffer and the IRisLock pointer to MIRisLockMtx, to share the buffer address between the first path Pass1 and the second path Pass2.
  • Step 7: Upload the data directly to the App's OnPreviewFrame through the DataCallBack callback.
  • Step 8: Synthesize the depth/gray data and the color data on the App side.
  • Step 9: Preview and record the synthesized data with the camera client App.
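The buffer-sharing steps above can be sketched in Python. The class and method names mirror the identifiers in the text (SaveIRis, ReadIRisAddr, OnPreviewFrame), but the implementation is an illustrative assumption, not the patent's actual ISP code; the 2x2 sample frame is made up.

```python
import threading

class Pass1Node:
    def __init__(self):
        self.lock = threading.Lock()   # step 0: mutex
        self.buf = None                # step 0: buffer pointer

    def save_iris(self, raw10):
        """Step 3: park the raw 10-bit depth frame in the shared buffer."""
        with self.lock:
            self.buf = raw10

class Pass2Node:
    def __init__(self, pass1):
        # Step 6: share Pass1's buffer and mutex with Pass2.
        self.lock = pass1.lock
        self.pass1 = pass1

    def read_iris_addr(self):
        """Step 5: convert 10-bit samples to 8-bit and rotate 90 degrees."""
        with self.lock:
            raw10 = self.pass1.buf
        eight = [[v >> 2 for v in row] for row in raw10]  # 10-bit -> 8-bit
        return [list(r) for r in zip(*eight[::-1])]       # rotate 90 deg CW

def on_preview_frame(frame):
    """Step 7: app-side callback receiving the bypassed depth frame."""
    return frame

p1 = Pass1Node()
p2 = Pass2Node(p1)
p1.save_iris([[0, 1023], [512, 4]])   # 2x2 raw 10-bit frame
out = on_preview_frame(p2.read_iris_addr())
print(out)  # [[128, 0], [1, 255]]
```

The mutex guards the single buffer shared between the two paths, which is the point of steps 0 and 6: Pass2 reads the very memory Pass1 wrote, with no intermediate ISP processing.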
  • In summary, the present invention provides a method and apparatus for realizing a three-dimensional image using dual cameras: the obtained depth of field data bypasses the image-processing functions of the image signal processor, the depth data is 3D-synthesized with the image information from the other camera, and shooting synchronization and frame synchronization are controlled through the I2C communication bus, realizing real-time, high-fidelity 3D photos and videos.
  • The present invention is applicable to various shooting environments, including night scenes and scenes with moving objects, and can realize a bypass-ISP, dual-camera frame-synchronized 3D shooting method on mobile terminals including mobile phones, tablets, televisions, smart cars, VR, AR, and drones.
  • In addition, the present invention has the following advantages:
  • the invention is based on the single image signal processor (ISP) of the existing main control chip and requires no additional ISP to interface with the infrared camera, reducing the design cost of the intelligent terminal;
  • the method of obtaining depth of field data by bypassing the ISP is highly adaptable;
  • the second camera, responsible for depth of field data, can also capture color images normally, improving the resolution, frame rate, and quality of dual-camera photos and videos.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention discloses a method and device for realizing a three-dimensional image using dual cameras. The method comprises: acquiring image information of a target object with a first camera and sending it to an image signal processor; capturing, with a second camera, a fringe-deformed phase grating image of the same viewing-angle picture space, calculating depth of field data of the target object, and sending it to the image signal processor; after receiving the image information and the depth of field data, the image signal processor passing the depth of field data directly to a 3D image generation module if the data is judged to be depth of field data, and performing conventional image signal processing if it is judged to be image information; and synthesizing, with the 3D image generation module, the depth of field data and the processed image information to generate a 3D image. The present invention enables real-time, high-fidelity 3D photos and videos.

Description

Method and device for realizing three-dimensional images using dual cameras
This application claims priority to Chinese patent application No. 201710516187.4, filed on June 29, 2017 and entitled "一种利用双摄像头实现三维图像的方法及装置" (Method and device for realizing three-dimensional images using dual cameras), the entire contents of which are incorporated herein.
Technical Field
The present invention relates to a method and device for realizing three-dimensional images, and in particular to a method and device for realizing a three-dimensional image using dual cameras.
Background Art
With the rapid development of smartphones, the differences between them have grown ever smaller: they have comparable configurations and similar designs, so smartphones need "new features" to recapture users' attention. Analysis of usage habits shows that, apart from basic calling and Internet access, the camera is one of the functions users rely on most. Demand for front or rear dual cameras has been embraced by major smartphone manufacturers and recognized by users: photos taken with dual cameras offer higher resolution, better noise control, better dynamic range, and more accurate depth of field data, and are favored by users.
Shooting 3D (Three Dimensions) photos and videos with dual cameras will be a future trend. Capturing 3D photos and videos instantly with a phone's dual cameras enables more applications and scenarios, such as virtual reality (VR) and augmented reality (AR), as well as image recognition and measurement.
At present, some mobile phones are equipped with dual cameras on the same side (such as front or rear dual cameras). The first and second cameras shoot simultaneously to obtain two photos, one color and one grayscale; a super-pixel synthesis algorithm reconstructs the grayscale photo data to obtain depth of field data, and a 3D algorithm then simulates a 3D photo from the color photo and the depth data.
However, because the captured photos themselves carry no depth data, the depth of field data simulated by the super-pixel synthesis algorithm is inaccurate, so the synthesized 3D content is distorted and is not a true 3D image.
Summary of the Invention
To overcome the above shortcomings of the prior art, the object of the present invention is to provide a method and device for realizing a three-dimensional image using dual cameras, so as to improve the quality of photos and videos taken with dual cameras.
To achieve the above object, the present invention provides the following technical solution:
A method for realizing a three-dimensional image using dual cameras, comprising the following steps:
acquiring image information of a target object with a first camera and sending it to an image signal processor;
capturing, with a second camera, a fringe-deformed phase grating image of the same viewing-angle picture space, calculating depth of field data of the target object, and sending it to the image signal processor;
after receiving the image information and the depth of field data, the image signal processor passing the depth of field data directly to a 3D image generation module if the data is judged to be depth of field data, and performing conventional image signal processing if it is judged to be image information;
synthesizing, with the 3D image generation module, the depth of field data and the processed image information to generate a 3D image.
Further, the step of capturing, with the second camera, the fringe-deformed phase grating image of the same viewing-angle picture space and calculating the depth of field data of the target object further comprises:
projecting phase grating structured light onto the same viewing-angle picture space with a structured light emitting unit;
capturing, with the second camera, the phase grating image whose fringes have been deformed by the structured light projection;
demodulating the captured fringe-deformed phase grating image to obtain a phase change containing the depth information, and then calculating the depth of field data of the target object by an optics-like triangulation computation.
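The demodulation-and-triangulation step described above can be illustrated with a minimal sketch. The linear phase-to-depth relation used here (depth proportional to phase change over the tangent of the projector-camera angle) is a textbook structured-light approximation; the fringe period and angle values are made-up illustration parameters, not values from the patent.

```python
import math

def depth_from_phase(phase_delta, period_mm=2.0, theta_rad=math.radians(30)):
    """Convert a demodulated phase change (radians) into depth (mm).

    phase_delta: fringe phase shift caused by surface height
    period_mm:   fringe period projected on the reference plane (assumed)
    theta_rad:   angle between projector and camera axes (assumed)
    """
    return phase_delta * period_mm / (2 * math.pi * math.tan(theta_rad))

# A flat reference surface produces zero phase change; a raised region
# shifts the fringes and yields a positive depth.
print(depth_from_phase(0.0))                 # 0.0
print(round(depth_from_phase(math.pi), 3))   # 1.732 (half-period shift)
```

In a real pipeline the phase map is first unwrapped per pixel; this sketch only shows the final phase-to-depth conversion for a single sample.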
Further, projecting the phase grating structured light onto the same viewing-angle picture space with the structured light emitting unit is performed by projecting the phase grating structured light with a laser diode module on the second camera.
Further, the second camera is a general-purpose color camera, and before capturing, with the second camera, the fringe-deformed phase grating image produced by the structured light projection, the method further comprises:
setting the working mode of the second camera to an IR grayscale mode.
Further, the image information and the depth of field data are sent to the image signal processor separately through two MIPI interfaces, or the image information and the depth of field data are encapsulated together and sent through one MIPI interface.
Further, the method controls shooting synchronization and frame synchronization via an I2C communication bus.
To achieve the above object, the present invention also provides a device for realizing a three-dimensional image using dual cameras, comprising:
an image information collection module, configured to acquire image information of a target object with a first camera and send it to an image signal processor;
a depth of field data generation module, which captures with a second camera a fringe-deformed phase grating image of the same viewing-angle picture space, calculates the depth of field data of the target object, and sends it to the image signal processor;
an image signal processor, configured to, after receiving the image information and the depth of field data, pass the depth of field data directly to a 3D image generation module if the data is judged to be depth of field data, and, if it is judged to be the image information, perform conventional image signal processing on it and send the result to the 3D image generation module;
a 3D image generation module, configured to synthesize the depth of field data and the processed image information to generate a 3D image.
Further, the depth of field data generation module further comprises:
a structured light emitting unit, configured to project phase grating structured light onto the same viewing-angle picture space;
a phase grating image acquisition unit, which captures with the second camera the fringe-deformed phase grating image produced by the structured light projection;
a depth of field data calculation unit, which demodulates the captured fringe-deformed phase grating image to obtain a phase change containing the depth information, and then calculates the depth of field data of the target object by an optics-like triangulation computation.
Further, the structured light emitting unit projects the phase grating structured light onto the same viewing-angle picture space with a laser diode module on the second camera.
Further, the second camera is a general-purpose color camera with its working mode set to an IR grayscale mode.
Further, the image information obtained by the image information collection module and the depth of field data obtained by the depth of field data generation module are sent to the image signal processor separately through two MIPI interfaces, or are encapsulated together and sent through one MIPI interface.
Further, the device controls shooting synchronization and frame synchronization via an I2C communication bus.
Compared with the prior art, the beneficial effects of the method and device for realizing a three-dimensional image using dual cameras of the present invention are as follows:
By letting the obtained depth of field data bypass the image-processing functions of the image signal processor, the depth of field data is 3D-synthesized with the image information from the other camera, and shooting synchronization and frame synchronization are controlled through the I2C communication bus, realizing real-time, high-fidelity 3D photos and videos. The invention adapts to various shooting environments, including night scenes and scenes with moving objects, and can realize a bypass-ISP, dual-camera frame-synchronized 3D shooting method on mobile terminals including mobile phones, tablets, televisions, smart cars, VR, AR, and drones.
Brief Description of the Drawings
FIG. 1 is a flow chart of the steps of an embodiment of a method for realizing a three-dimensional image using dual cameras according to the present invention;
FIG. 2 is a schematic diagram of the functions and flow of the image signal processor;
FIG. 3 is a system structure diagram of an embodiment of a system for realizing a three-dimensional image using dual cameras according to the present invention;
FIG. 4 is a detailed structure diagram of the depth of field data generation module in an embodiment of the present invention;
FIG. 5 is a block diagram of a dual-camera hardware design using separate MIPI channels in an embodiment of the present invention;
FIG. 6 is a block diagram of a dual-camera hardware design using a MIPI virtual channel in an embodiment of the present invention;
FIG. 7 is a schematic diagram of bypassing the ISP and transparently transmitting the depth of field data in an embodiment of the present invention.
Detailed Description of the Embodiments
To explain the embodiments of the present invention or the technical solutions of the prior art more clearly, specific embodiments of the present invention are described below with reference to the accompanying drawings. Obviously, the drawings described below are only some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings and other embodiments from them without creative effort.
To keep the drawings concise, only the parts relevant to the present invention are shown schematically; they do not represent the actual structure of the product. In addition, so that the drawings are concise and easy to understand, where several components in a drawing have the same structure or function, only one of them is schematically drawn or labeled. Herein, "one" means not only "only one" but may also mean "more than one".
In one embodiment of the present invention, as shown in Fig. 1, the method for realizing three-dimensional images with dual cameras of the present invention comprises the following steps:
Step 101: acquire image information of a target object with a first camera and send it to an image signal processor.
Step 102: capture, with a second camera, a phase grating image with fringe deformation in the same-view picture space, calculate the depth-of-field data of the target object, and send it to the image signal processor.
Specifically, step 102 further comprises:
Step S1: use a structured-light emitting unit to project phase grating structured light, i.e., a highly stable infrared-band laser, onto the same-view picture space; the grating is modulated by the depth of the object surface, and its fringes deform. Specifically, the structured-light emitting unit may be an LDM (Laser Diode Module) on the camera.
Step S2: use the second camera to capture the phase grating image whose fringes have been deformed after projection of the phase grating structured light. In a specific embodiment of the present invention, the second camera may reuse an existing general-purpose color camera, but before use the IR-cut filter must be switched off via the voice coil motor microcontroller so that the all-pass spectral filter takes over and the camera sensor receives infrared light; that is, the working mode of the color camera is set to an IR grayscale mode, so that during operation the camera captures the deformed-fringe phase grating image on the surface of the target object.
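The mode switch in step S2 can be sketched as a small state machine. The class and method names below are illustrative only, not the patent's actual camera firmware interface:

```python
class DualModeCamera:
    """Toy model of reusing a color camera for IR capture: switching to IR
    grayscale mode disables the IR-cut filter (done via the voice coil motor
    microcontroller in the patent) so the all-pass filter takes over."""

    def __init__(self):
        self.mode = "color"
        self.ir_cut_filter_enabled = True

    def set_ir_grayscale_mode(self):
        # Let infrared light reach the sensor, then switch the capture mode.
        self.ir_cut_filter_enabled = False
        self.mode = "ir_grayscale"

    def capture(self):
        if self.mode == "ir_grayscale":
            return "deformed_fringe_phase_grating_image"
        return "color_image"
```

When 3D capture is not needed, the same camera simply stays in color mode, which matches the adaptivity advantage claimed later in the description.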
Step S3: demodulate the captured fringe-deformed phase grating image to obtain a phase change containing depth information, and then perform an optics-like trigonometric calculation to obtain the depth-of-field data of the target object.
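The demodulation-then-triangulation step can be illustrated with a minimal numeric sketch. The patent does not give the exact formula, so the similar-triangles relation and all parameters below (grating period, focal length, projector-camera baseline) are assumptions typical of phase-measuring structured-light setups:

```python
import math

def phase_to_depth(delta_phi, grating_period, focal_length, baseline):
    """Convert a demodulated phase change (radians) to a depth value.

    A phase shift of delta_phi corresponds to a lateral fringe displacement
    of delta_phi * grating_period / (2*pi); similar triangles between the
    projector, the camera, and the surface then give an (assumed) depth.
    """
    displacement = delta_phi * grating_period / (2 * math.pi)
    return displacement * focal_length / baseline
```

A full pipeline would first demodulate the captured fringe image per pixel (e.g., by Fourier or phase-shifting analysis) to obtain `delta_phi`, then apply a relation of this kind to every pixel to build the depth map.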
In a specific embodiment of the present invention, the image information of the target object obtained by the first camera and the depth-of-field data of the target object are sent separately over two MIPI interfaces to the image signal processor (ISP) module on the phone SoC, or the image information and the depth-of-field data are packed together and sent over a single MIPI interface to the ISP on the phone SoC for processing.
Step 103: after receiving these two signals, the image signal processor passes the depth-of-field data directly to a 3D image generation module if the signal is determined to be depth-of-field data; if the signal is determined to be image information captured by the first camera, conventional image signal processing is performed, including denoising, white balance, and other image processing, and the result is sent to the 3D image generation module.
In current conventional phone hardware designs, an image signal processor (ISP) is built into the phone's main chip (SoC, System on Chip). The pictures captured by each camera reach the ISP through a MIPI interface, and the ISP performs a series of processing steps on the raw image, such as white balance, denoising, and edge-distortion reshaping, before handing the processed image data to the next module for preview and recording. The functions and flow of the SoC's built-in ISP are shown in Fig. 2: Pass1Node handles canvas resizing, format conversion, and similar processing of the image signal, while Pass2Node handles white balance, denoising, exposure adjustment, and recognition of information such as faces within the image. However, such a mechanism would corrupt the depth-of-field data from the grayscale camera, making it impossible for the back end to combine it with the color camera's data into 3D photos and videos. Therefore, in this step the obtained depth-of-field data bypasses the image processing functions of the image signal processor so that the depth-of-field data is not corrupted.
Step 104: the 3D image generation module synthesizes the image information processed by the image signal processor and the depth-of-field data to generate a 3D image. Specifically, the 3D image generation module builds a 3D model of the space from the depth-of-field data and, by frame synchronization sequence number, renders the color image onto the 3D model to complete the synthesis of the 3D image or video.
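The synthesis in step 104 can be sketched minimally as follows, assuming a pinhole camera model with hypothetical intrinsics (fx, fy, cx, cy) and toy dict-based frames; the patent only specifies that frames are paired by frame-sync sequence number and that the color image is rendered onto the depth-built model:

```python
def build_colored_point(u, v, depth, fx, fy, cx, cy, color):
    """Back-project one pixel with depth into a colored 3D point
    (pinhole model with assumed intrinsics)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth, color)

def synthesize(depth_frames, color_frames, intrinsics):
    """Pair depth and color frames by their shared sync sequence number,
    then attach each synced color pixel to its back-projected depth point."""
    fx, fy, cx, cy = intrinsics
    color_by_seq = {seq: img for seq, img in color_frames}
    model = []
    for seq, depth_map in depth_frames:
        if seq not in color_by_seq:          # drop frames with no synced partner
            continue
        color_img = color_by_seq[seq]
        for (u, v), d in depth_map.items():
            model.append(build_colored_point(u, v, d, fx, fy, cx, cy,
                                             color_img[(u, v)]))
    return model
```

A real implementation would texture-map the color frame onto a mesh rather than emit loose points, but the pairing-by-sequence-number logic is the same.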
In a specific embodiment of the present invention, in step 104 the image signal processor ISP receives the image information and the depth-of-field data separately over two MIPI interfaces (MIPI0 and MIPI1). Data determined to be depth-of-field data bypasses the Pass1 and Pass2 processing entirely and is uploaded directly, via the DataCallBack callback function, to the AppOnPreviewFrame function of the back end or upper-layer application, so that the back end or upper-layer application obtains the transparently passed depth-of-field data. Transparent pass-through here refers to a data transfer mode that is independent of the transmission network's medium, modulation/demodulation scheme, transmission mode, and transmission protocol, but the present invention is not limited thereto. Data determined to be image information goes through the Pass1 and Pass2 processing of Pass1Node and Pass2Node, including canvas resizing and format conversion as well as white balance, denoising, and exposure adjustment, and is then uploaded via the DataCallBack callback function to the AppOnPreviewFrame function of the back end or upper-layer application. In other words, in this embodiment the 3D image generation module resides in the back end or upper-layer application, and the synthesis of the image information and the depth-of-field data is completed in the back end or upper-layer application, but the present invention is not limited thereto.
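The routing decision of this embodiment can be sketched as follows. The Pass1/Pass2 bodies are placeholders for the processing steps named above, and the dict-based frame format is an assumption made for illustration:

```python
def pass1(frame):
    # Placeholder for Pass1Node: canvas resizing, format conversion.
    return {**frame, "pass1": True}

def pass2(frame):
    # Placeholder for Pass2Node: white balance, denoising, exposure.
    return {**frame, "pass2": True}

def isp_route(frame, on_preview_frame):
    """Route one frame: depth data bypasses Pass1/Pass2 untouched
    (transparent pass-through), while color image information is processed.

    on_preview_frame stands in for the App-side AppOnPreviewFrame function
    reached via the DataCallBack callback.
    """
    if frame["kind"] == "depth":
        on_preview_frame(frame)                 # bypass: deliver unmodified
    else:
        on_preview_frame(pass2(pass1(frame)))   # normal ISP pipeline
```

The key property, and the reason for the bypass, is that the depth frame reaches the application byte-for-byte unmodified, so the back end can still interpret it as depth data.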
It can be seen that the present invention can bypass the image processing functions of the image signal processor for the obtained depth-of-field data, synthesize the depth-of-field data with the image information captured by the color camera (the first camera), or transparently pass the depth-of-field data to the back end or upper-layer application for synthesis with the image information captured by the color camera (the first camera), and control shooting synchronization and frame synchronization via an I2C communication bus, thereby realizing the capture of 3D photos and videos.
In another embodiment of the present invention, as shown in Fig. 3, the system for realizing three-dimensional images with dual cameras of the present invention comprises: an image information acquisition module 301, a depth-of-field data generation module 302, an image signal processor (ISP) 303, and a 3D image generation module 304.
The image information acquisition module 301 is configured to acquire image information of a target object; in a specific embodiment of the present invention, the image information acquisition module 301 is implemented with a first camera, which may be an ordinary color camera. The depth-of-field data generation module 302 captures, with a second camera, a phase grating image with fringe deformation in the same-view picture space and calculates the depth-of-field data of the target object. The image signal processor 303 is configured to, after receiving the image information and the depth-of-field data, pass the depth-of-field data directly to the 3D image generation module 304 if the data is determined to be depth-of-field data, or, if the received signal is determined to be the image information, perform conventional image signal processing on it, including denoising, white balance, and other image processing, and then send it to the 3D image generation module 304. The 3D image generation module 304 is configured to synthesize the depth-of-field data and the processed image information to generate a 3D image; specifically, the 3D image generation module 304 builds a 3D model of the space from the depth-of-field data and, by frame synchronization sequence number, renders the color image onto the 3D model to complete the synthesis of the 3D image or video.
Specifically, as shown in Fig. 4, the depth-of-field data generation module 302 further comprises:
a structured-light emitting unit 3021, configured to project phase grating structured light, i.e., a highly stable infrared-band laser, onto the picture space sharing the first camera's viewing angle; the grating is modulated by the depth of the object surface, and its fringes deform. Specifically, the structured-light emitting unit may be an LDM (Laser Diode Module) on the camera;
a phase grating image acquisition unit 3022, which captures the fringe-deformed phase grating image with the second camera. In a specific embodiment of the present invention, the second camera may reuse an existing general-purpose color camera, but before use the IR-cut filter must be switched off via the voice coil motor microcontroller so that the all-pass spectral filter takes over and the camera sensor receives infrared light; that is, the working mode of the color camera is set to an IR grayscale mode, so that during operation the camera captures the deformed-fringe phase grating image on the surface of the target object;
a depth-of-field data calculation unit 3023, configured to demodulate the captured fringe-deformed phase grating image to obtain a phase change containing depth information, and then perform an optics-like trigonometric calculation to obtain the depth-of-field data of the target object.
In a specific embodiment of the present invention, the image information of the target object captured by the image information acquisition module 301 with the color camera (RGB camera), i.e., the first camera, and the depth-of-field data of the object generated by the depth-of-field data generation module 302 from the fringe-deformed phase grating image obtained by the infrared camera (IR camera) may be sent separately over two MIPI interfaces (MIPI0 and MIPI1) to the image signal processor ISP on the SoC, as shown in Fig. 5, or packed together and sent over a single MIPI interface to the ISP on the SoC, as shown in Fig. 6.
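The single-interface option of Fig. 6 relies on multiplexing the two streams over one link; MIPI CSI-2 does this with virtual-channel identifiers carried in each packet header. The following is a simplified software analogue of that idea, not the actual CSI-2 packet format:

```python
def mux(color_packets, depth_packets):
    """Interleave two streams onto one link, tagging each packet with a
    virtual-channel id (0 = color, 1 = depth) so the receiver can split
    them again -- a simplified stand-in for MIPI CSI-2 virtual channels."""
    link = [(0, p) for p in color_packets]
    link += [(1, p) for p in depth_packets]
    return link

def demux(link):
    """Recover the two original streams from the shared link by channel id."""
    streams = {0: [], 1: []}
    for vc, payload in link:
        streams[vc].append(payload)
    return streams[0], streams[1]
```

With two physical MIPI interfaces (Fig. 5), this tagging is unnecessary because each stream already arrives on its own lane.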
In a specific embodiment of the present invention, the image signal processor ISP 303 handles the depth-of-field data obtained by the infrared camera by bypassing the Pass1 and Pass2 processing of Pass1 Node and Pass2 Node and uploading the data directly, via the DataCallBack callback function, to the AppOnPreviewFrame function on the upper-layer application (App) side, so that the upper-layer App obtains the transparently passed depth-of-field data, as shown by the dashed path in Fig. 7; that is, the 3D image generation module resides in the back end or upper-layer application, and the synthesis of the image information and the depth-of-field data is completed there. The image information acquired by the color camera goes through the Pass1 and Pass2 processing of Pass1 Node and Pass2 Node, including canvas resizing and format conversion as well as white balance, denoising, and exposure adjustment, and is then uploaded via the DataCallBack callback function to the AppOnPreviewFrame function, as shown by the solid path in Fig. 7. The overall flow is shown in Fig. 7.
Specifically, part of the business logic of the image signal processor ISP in the present invention is implemented as follows:
Step 0: on the first node Pass1 Node of the first path Pass1 of the ISP, initialize a mutex and a buffer pointer.
Step 1: on the second node Pass2 Node of the second path Pass2, initialize a mutex, a buffer pointer, and a SetIRisAddr function.
Step 2: define a SaveIRis function in the first node Pass1 Node.
Step 3: through the SaveIRis function interface, store the data sent to the first path Pass1 into the ptrbuff pointer defined in the Pass1 structure.
Step 4: define a ReadIRisAddr function in the second node Pass2 Node.
Step 5: through the ReadIRisAddr function interface, convert the data in the IRisRead buffer from 10-bit to 8-bit, rotate it 90 degrees, and send it to the output of the second path Pass2.
Step 6: call the SetIRisAddr function of the second path Pass2 to point the IRisRead pointer at mpIRisBuffer and the IRisLock pointer at MIRisLockMtx, so that the first path Pass1 and the second path Pass2 share the buffer address.
Step 7: upload the data directly to the App's OnPreviewFrame via the DataCallBack callback.
Step 8: on the App side, synthesize the depth/grayscale data with the color data.
Step 9: preview and record the synthesized data in the camera client App.
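Steps 0 through 7 above can be sketched as follows. This is a Python analogue of the described C-style logic, not the actual driver code: buffer sharing under a mutex mirrors steps 0, 1, and 6; the 10-bit-to-8-bit conversion in step 5 is assumed to be a right shift by two bits, and the rotation direction (clockwise here) is chosen arbitrarily since the patent does not specify it:

```python
import threading

class Pass1Node:
    def __init__(self):
        self.iris_lock = threading.Lock()   # step 0: mutex + buffer pointer
        self.ptrbuff = None

    def save_iris(self, data):              # steps 2-3: stash incoming Pass1 data
        with self.iris_lock:
            self.ptrbuff = data

class Pass2Node:
    def __init__(self):
        self.iris_read = None               # step 1: buffer pointer
        self.iris_lock = None               # step 1: mutex

    def set_iris_addr(self, pass1_node):    # step 6: share buffer address and lock
        self.iris_read = pass1_node
        self.iris_lock = pass1_node.iris_lock

    def read_iris_addr(self):               # steps 4-5: 10-bit -> 8-bit, rotate 90
        with self.iris_lock:
            raw = self.iris_read.ptrbuff
        scaled = [[v >> 2 for v in row] for row in raw]   # drop 2 LSBs: 10 -> 8 bit
        return [list(r) for r in zip(*scaled[::-1])]      # rotate 90 deg clockwise

def data_callback(pass2_node, on_preview_frame):
    # Step 7: hand the converted data to the App's OnPreviewFrame.
    on_preview_frame(pass2_node.read_iris_addr())
```

Steps 8 and 9 (synthesis, preview, and recording) then run entirely on the App side, which receives this data via the callback.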
In summary, by bypassing the image processing functions of the image signal processor for the obtained depth-of-field data, synthesizing the depth-of-field data with the image information from the other camera into a 3D result, and controlling shooting synchronization and frame synchronization via an I2C communication bus, the method and apparatus for realizing three-dimensional images with dual cameras of the present invention achieve real-time, high-fidelity 3D photos and videos. The invention adapts to a variety of shooting environments, including night scenes and scenes containing moving objects, and enables a 3D dual-camera method based on ISP bypass and dual-camera frame synchronization on mobile terminals including phones, tablets, televisions, smart cars, VR, AR, and drones.
Compared with the prior art, the present invention has the following advantages:
1. The present invention is based on the single image signal processor (ISP) scheme of existing main control chips and does not require an additional ISP for the infrared camera, reducing the design cost of the smart terminal.
2. The ISP-bypass method of obtaining depth-of-field data in the present invention is highly adaptive: when 3D capture is not needed, the second camera responsible for depth-of-field data can capture color images normally, improving the resolution, frame rate, and quality of dual-camera photos and videos.
It should be noted that the above embodiments can be freely combined as needed. The above are only preferred embodiments of the present invention; it should be pointed out that a person of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present invention, and these improvements and refinements shall also be regarded as falling within the protection scope of the present invention.

Claims (10)

  1. A method for realizing three-dimensional images with dual cameras, comprising the following steps:
    acquiring image information of a target object with a first camera and sending it to an image signal processor;
    capturing, with a second camera, a phase grating image with fringe deformation in the same-view picture space, calculating depth-of-field data of the target object, and sending the depth-of-field data to the image signal processor;
    after the image signal processor receives the image information and the depth-of-field data, passing the depth-of-field data directly to a 3D image generation module if the received data is determined to be the depth-of-field data, or performing conventional image signal processing and sending the result to the 3D image generation module if the received data is determined to be the image information;
    synthesizing, by the 3D image generation module, the depth-of-field data and the processed image information to generate a 3D image.
  2. The method for realizing three-dimensional images with dual cameras according to claim 1, wherein the step of capturing, with the second camera, a phase grating image with fringe deformation in the same-view picture space and calculating the depth-of-field data of the target object specifically comprises:
    projecting phase grating structured light onto the same-view picture space with a structured-light emitting unit;
    capturing, with the second camera, the phase grating image whose fringes have been deformed after projection of the phase grating structured light;
    demodulating the captured fringe-deformed phase grating image to obtain a phase change containing depth information, and then performing an optics-like trigonometric calculation to obtain the depth-of-field data of the target object.
  3. The method for realizing three-dimensional images with dual cameras according to claim 2, wherein projecting the phase grating structured light onto the same-view picture space with the structured-light emitting unit comprises: projecting the phase grating structured light onto the same-view picture space with a laser diode module on the second camera.
  4. The method for realizing three-dimensional images with dual cameras according to claim 2, wherein the second camera is a general-purpose color camera, and before capturing, with the second camera, the phase grating image whose fringes have been deformed after projection of the phase grating structured light, the method further comprises:
    setting the working mode of the second camera to an IR grayscale mode.
  5. The method for realizing three-dimensional images with dual cameras according to any one of claims 1 to 4, wherein the image information and the depth-of-field data are sent to the image signal processor separately over two MIPI interfaces, or are packed together and sent to the image signal processor over a single MIPI interface.
  6. The method for realizing three-dimensional images with dual cameras according to any one of claims 1 to 4, wherein the method controls shooting synchronization and frame synchronization via an I2C communication bus.
  7. An apparatus for realizing three-dimensional images with dual cameras, comprising:
    an image information acquisition module, configured to acquire image information of a target object with a first camera and send it to an image signal processor;
    a depth-of-field data generation module, configured to capture, with a second camera, a phase grating image with fringe deformation in the same-view picture space, calculate the depth-of-field data of the target object, and send it to the image signal processor;
    an image signal processor, configured to, after receiving the image information and the depth-of-field data, pass the depth-of-field data directly to a 3D image generation module if the received data is determined to be depth-of-field data, or perform conventional image signal processing on the image information and send the result to the 3D image generation module if the received data is determined to be the image information;
    a 3D image generation module, configured to synthesize the depth-of-field data and the processed image information to generate a 3D image.
  8. The apparatus for realizing three-dimensional images with dual cameras according to claim 7, wherein the depth-of-field data generation module further comprises:
    a structured-light emitting unit, configured to project phase grating structured light onto the same-view picture space;
    a phase grating image acquisition unit, configured to capture, with the second camera, the phase grating image whose fringes have been deformed after projection of the phase grating structured light;
    a depth-of-field data calculation unit, configured to demodulate the captured fringe-deformed phase grating image to obtain a phase change containing depth information, and then perform an optics-like trigonometric calculation to obtain the depth-of-field data of the target object.
  9. The apparatus for realizing three-dimensional images with dual cameras according to claim 8, wherein the structured-light emitting unit is configured to project the phase grating structured light onto the same-view picture space with a laser diode module on the second camera.
  10. The apparatus for realizing three-dimensional images with dual cameras according to claim 8, wherein the second camera is a general-purpose color camera whose working mode is set to an IR grayscale mode.
PCT/CN2017/103820 2017-06-29 2017-09-27 Method and apparatus for realizing three-dimensional images with dual cameras WO2019000676A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710516187.4 2017-06-29
CN201710516187.4A CN107124604B (zh) 2017-06-29 2017-06-29 Method and apparatus for realizing three-dimensional images with dual cameras

Publications (1)

Publication Number Publication Date
WO2019000676A1 true WO2019000676A1 (zh) 2019-01-03




Also Published As

Publication number Publication date
CN107124604B (zh) 2019-06-04
US20190007675A1 (en) 2019-01-03
CN107124604A (zh) 2017-09-01
US10523917B2 (en) 2019-12-31


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17916241

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17916241

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 31/07/2020)