WO2018082480A1 - 3D camera control method and 3D camera control device - Google Patents

3D camera control method and 3D camera control device Download PDF

Info

Publication number
WO2018082480A1
WO2018082480A1, PCT/CN2017/107518, CN2017107518W
Authority
WO
WIPO (PCT)
Prior art keywords
corrected
optical axis
convergence point
camera
spatial depth
Prior art date
Application number
PCT/CN2017/107518
Other languages
English (en)
French (fr)
Inventor
高炜
谭杰夫
Original Assignee
深圳全息信息科技发展有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳全息信息科技发展有限公司 filed Critical 深圳全息信息科技发展有限公司
Publication of WO2018082480A1 publication Critical patent/WO2018082480A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/128 Adjusting depth or disparity
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/122 Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H04N 13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/296 Synchronisation thereof; Control thereof

Definitions

  • the invention belongs to the technical field of 3D imaging, and in particular relates to a 3D camera control method and a 3D camera control device.
  • 3D shooting has developed rapidly in home entertainment and film production in recent years. 3D shooting is achieved by using two cameras to simulate the human eyes and capture the left-eye and right-eye pictures. At present, professional 3D shooting equipment brings people high-quality 3D enjoyment; for example, in the shooting of Avatar, long shots and close-ups were completed by two separate sets of large-scale 3D shooting equipment, each worth millions. However, such professional 3D shooting equipment has a limited range of use.
  • On the other hand, mobile devices such as home VR 3D cameras can already present 3D image effects anywhere, and their usage scenarios are broader, but these mobile devices fall far short of the results produced by large 3D shooting equipment. In particular, when a 3D camera shoots close-range objects, 3D focusing and line-of-sight convergence cannot be achieved, which is far from the real experience of the human eye.
  • When the view switches from a close-up to a distant shot, the 3D shooting depth of field cannot switch smoothly, or does not switch at all, and only a fixed 3D depth of field is used.
  • In short, current 3D shooting methods cannot achieve synchronized real-time focusing, nor real-time line-of-sight convergence and line-of-sight following while moving, resulting in a poor viewing experience.
  • The embodiments of the invention provide a 3D camera control method and a 3D camera control device, which aim to solve the problem that the prior art cannot achieve synchronized real-time focusing or real-time line-of-sight convergence and line-of-sight following while moving, resulting in a poor viewing experience.
  • a 3D camera control method includes:
  • receiving the spatial depth of the subject acquired by the spatial depth sensing module of the 3D imaging device, the spatial depth being the axial distance from the spatial depth sensing module to the subject along the optical axis;
  • obtaining, according to the spatial depth, the corrected 3D convergence point of the two camera modules corresponding to the spatial depth sensing module;
  • controlling the optical axes of the two camera modules to converge at the corrected 3D convergence point while focusing synchronously.
  • Further, obtaining, according to the spatial depth, the corrected 3D convergence point of the two camera modules corresponding to the spatial depth sensing module is specifically:
  • acquiring the current 3D convergence position (X, Y) with the two camera modules and the spatial depth Z, so that the corrected 3D convergence point coordinates of the two camera modules corresponding to the spatial depth sensing module are (X, Y, Z).
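As an illustration, the corrected convergence point is simply the current (X, Y) position extended with the measured depth Z, and the yaw each camera must turn to aim at it follows from the stereo baseline. The Python sketch below is not from the patent: the function names, the coordinate convention (cameras at x = ±b/2, Z pointing forward), and the use of toe-in angles are assumptions for illustration only.

```python
import math

def corrected_convergence_point(current_xy, depth_z):
    # Combine the (X, Y) convergence position reported by the two camera
    # modules with the depth Z from the spatial depth sensing module.
    x, y = current_xy
    return (x, y, depth_z)

def toe_in_angles(point, baseline):
    # Yaw (radians) each camera must turn so both optical axes pass through
    # the convergence point; cameras assumed at x = -b/2 and x = +b/2.
    x, _y, z = point
    left = math.atan2(x + baseline / 2, z)
    right = math.atan2(x - baseline / 2, z)
    return left, right
```

For a centered subject the two toe-in angles are equal and opposite, which matches the intuition that both lenses pivot inward symmetrically.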
  • Further, controlling the optical axes of the two camera modules to converge at the corrected 3D convergence point is specifically:
  • acquiring an optical axis convergence parameter according to the corrected 3D convergence point and the current 3D convergence point obtained by the two camera modules;
  • performing, according to the adjustment instruction corresponding to the optical axis convergence parameter, telescopic movement with the telescopic motors of the optical axis direction control modules so that the optical axes of the corresponding camera modules converge at the coordinates of the 3D convergence point.
  • Further, controlling the two camera modules to focus synchronously is specifically:
  • calculating the corrected focus position and corrected focus depth of the two camera modules when the optical axes converge at the corrected 3D convergence point;
  • controlling the two camera modules to refocus using the corrected focus position and corrected focus depth as parameters.
  • In another aspect, a 3D camera control device is provided, comprising:
  • a depth receiving unit configured to receive the spatial depth of the subject acquired by the spatial depth sensing module of the 3D imaging device, the spatial depth being the axial distance from the spatial depth sensing module to the subject along the optical axis;
  • a convergence point acquiring unit, connected to the depth receiving unit, configured to obtain, according to the spatial depth, the corrected 3D convergence point of the two camera modules corresponding to the spatial depth sensing module;
  • a shooting adjustment unit, connected to the convergence point acquiring unit, configured to control the optical axes of the two camera modules to converge at the corrected 3D convergence point while focusing synchronously.
  • Further, the convergence point acquiring unit is specifically configured to acquire the current 3D convergence position (X, Y) with the two camera modules and the spatial depth Z, so that the corrected 3D convergence point coordinates of the two camera modules corresponding to the spatial depth sensing module are (X, Y, Z).
  • the shooting adjustment unit includes:
  • a parameter obtaining module configured to acquire an optical axis convergence parameter according to the corrected 3D convergence point and the current 3D convergence point obtained by the two camera modules;
  • a first adjustment module configured to perform, according to the adjustment instruction corresponding to the optical axis convergence parameter, telescopic movement with the telescopic motors of the optical axis direction control module so that the optical axis of the corresponding camera module converges at the coordinates of the 3D convergence point.
  • the shooting adjustment unit further includes:
  • a focus calculation module configured to calculate the corrected focus position and corrected focus depth of the two camera modules when the optical axes converge at the corrected 3D convergence point;
  • a second control module configured to control the two camera modules to refocus using the corrected focus position and corrected focus depth as parameters.
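A minimal sketch of what the focus calculation module could compute, assuming a simple pinhole/thin-lens model. The interfaces, the camera placement at x = ±b/2, and the thin-lens assumption are mine, not the patent's; the patent only names the corrected focus position and depth.

```python
import math

def corrected_focus(point, baseline, focal_length):
    # For each camera (at x = -b/2 and x = +b/2), compute the distance to
    # the convergence point (corrected focus position) and the thin-lens
    # image distance to refocus to: 1/f = 1/u + 1/v  =>  v = 1/(1/f - 1/u).
    x, y, z = point
    results = []
    for cam_x in (-baseline / 2, baseline / 2):
        u = math.sqrt((x - cam_x) ** 2 + y ** 2 + z ** 2)  # subject distance
        v = 1.0 / (1.0 / focal_length - 1.0 / u)           # image distance
        results.append((u, v))
    return results
```

Because both cameras refocus from the same convergence point, the two modules stay synchronized by construction, which is the property the second control module is meant to enforce.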
  • In yet another aspect, a storage medium is provided, comprising a program, wherein the program causes the 3D photographing device to perform:
  • receiving the spatial depth of the subject acquired by the spatial depth sensing module of the 3D imaging device, the spatial depth being the axial distance from the spatial depth sensing module to the subject along the optical axis;
  • obtaining, according to the spatial depth, the corrected 3D convergence point of the two camera modules corresponding to the spatial depth sensing module;
  • controlling the optical axes of the two camera modules to converge at the corrected 3D convergence point while focusing synchronously.
  • Further, the program causes the 3D photographing device to further perform:
  • acquiring the current 3D convergence position (X, Y) with the two camera modules and the spatial depth Z, so that the corrected 3D convergence point coordinates of the two camera modules corresponding to the spatial depth sensing module are (X, Y, Z).
  • Further, the program causes the 3D photographing device to further perform:
  • acquiring an optical axis convergence parameter according to the corrected 3D convergence point and the current 3D convergence point obtained by the two camera modules;
  • performing, according to the adjustment instruction corresponding to the optical axis convergence parameter, telescopic movement with the telescopic motors of the optical axis direction control modules so that the optical axes of the corresponding camera modules converge at the coordinates of the 3D convergence point.
  • Further, the program causes the 3D photographing device to further perform:
  • calculating the corrected focus position and corrected focus depth of the two camera modules when the optical axes converge at the corrected 3D convergence point;
  • controlling the two camera modules to refocus using the corrected focus position and corrected focus depth as parameters.
  • The 3D convergence point is calculated according to the position coordinates of the subject and the scene depth, and the optical axes of the two camera modules are adjusted by their respective optical axis direction control modules to converge at the 3D convergence point and focus, realizing optical axis convergence, line-of-sight following, and smooth depth-of-field switching, which effectively improves 3D viewing comfort.
  • FIG. 1 is a structural block diagram of a 3D camera module according to Embodiment 1 of the present invention.
  • FIG. 2 is a schematic view showing the change of the optical axis caused by the movement of the telescopic motor according to the first embodiment of the present invention
  • FIG. 3 is a structural block diagram of a 3D photographing apparatus according to Embodiment 2 of the present invention.
  • FIG. 4 is a schematic diagram of a spatial depth sensing module and an optical axis of a shooting module according to Embodiment 2 of the present invention
  • FIG. 5 is a flow chart of a 3D camera control method according to Embodiment 3 of the present invention.
  • FIG. 6 is a structural block diagram of a 3D imaging control apparatus according to Embodiment 4 of the present invention.
  • FIG. 1 is a block diagram showing a specific structure of a 3D camera module according to Embodiment 1 of the present invention.
  • In this embodiment, the 3D camera module includes: a control module 1, a first camera module 2, a second camera module 3, and a fixing device 5; the control module 1 is electrically connected to the first camera module 2 and the second camera module 3; the first camera module 2 and the second camera module 3 are mounted in parallel on the fixing device 5;
  • the first camera module 2 includes a first lens 21, a first image sensor 22 for converting the optical image obtained by the first lens 21 into a first image, and a first optical axis direction control module;
  • the second camera module 3 includes a second lens 31, a second image sensor 32 for converting the optical image obtained by the second lens 31 into a second image, and a second optical axis direction control module; the control module 1 is configured to synthesize the first image and the second image into a stereoscopic image.
  • The first lens 21 is a wide-angle lens, and the first image sensor 22 is configured to convert the optical image obtained by the first lens 21 into a first image; the first optical axis direction control module consists of four telescopic motors 23 disposed around the first lens 21, and the optical axis of the first lens 21 is adjusted by controlling the four telescopic motors 23 of the first optical axis direction control module to extend and retract by different amounts;
  • the second lens 31 is a wide-angle lens, and the second image sensor 32 is configured to convert the optical image obtained by the second lens 31 into a second image; the second optical axis direction control module consists of four telescopic motors 23 disposed around the second lens 31, and the optical axis of the second lens 31 is adjusted by controlling the four telescopic motors 23 of the second optical axis direction control module to extend and retract by different amounts.
  • Figure 2 shows a schematic diagram of the movement of the telescopic motor causing the optical axis of the second lens to change.
  • Since the first and second optical axis direction control modules each consist of four telescopic motors that can move flexibly, a change in any one or more of the motors adjusts the lens, making the convergence of the lens's optical axis more flexible; real-time focusing can be achieved, real-time line-of-sight convergence is unaffected even while moving, and line-of-sight following is realized.
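To make the motor arrangement concrete, here is a small-angle sketch of how a desired optical-axis tilt could map to the four motor extensions. The geometry (motors at the left, right, top, and bottom of the lens barrel, each pair moving in push-pull, a fixed lever arm from the pivot) is an assumed model, not the patent's specification.

```python
import math

def motor_extensions(yaw, pitch, lever_arm):
    # Four telescopic motors sit at the left, right, top and bottom of the
    # lens barrel, `lever_arm` metres from its pivot.  Tilting the optical
    # axis by (yaw, pitch) radians moves each opposite pair by equal and
    # opposite extensions (small-angle approximation).
    dx = lever_arm * math.tan(yaw)    # left/right pair controls yaw
    dy = lever_arm * math.tan(pitch)  # top/bottom pair controls pitch
    return {"left": dx, "right": -dx, "top": dy, "bottom": -dy}
```

The push-pull pairing is why a change in any subset of motors re-aims the lens: the axis direction is a continuous function of the four extension values.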
  • Preferably, the number of telescopic motors is 2, 6, 8, 10 or 12.
  • The control module 1 synchronously controls the first and second optical axis direction control modules to adjust the optical axes of the first camera module 2 and the second camera module 3 to converge at the 3D convergence point, and controls the first camera module 2 and the second camera module 3 to focus.
  • The control module 1 is further configured to control the first image sensor 22 and the second image sensor 32 to output the first and second images with line and field synchronization, and to control parameter settings.
  • control module 1 is further configured to merge the first image and the second image into a stereo image.
  • The control module 1 is one of, or a combination of, a field-programmable gate array (FPGA), a DSP chip, and a CPU chip.
  • In this embodiment, by arranging multiple telescopic motors around each camera module, the control module simultaneously adjusts the convergence of the optical axes of the two camera modules and their respective focusing in multiple directions, flexibly and accurately.
  • FIG. 3 is a block diagram showing a specific structure of a 3D photographing apparatus according to Embodiment 2 of the present invention. For the convenience of description, only parts related to the embodiment of the present invention are shown.
  • the 3D imaging device includes: at least one 3D camera module 301.
  • the 3D imaging device further includes: a CPU processing module 302 and a spatial depth sensing module 303 corresponding to the 3D camera module; the 3D camera module 301 and the spatial depth sensing module 303 are both electrically coupled to the CPU processing module 302.
  • The CPU processing module 302 is configured to adjust in real time, according to the spatial depth parameter of the subject acquired by the spatial depth sensing module 303, the optical axes of the first camera module and the second camera module of the 3D camera module corresponding to the spatial depth sensing module 303 so that they converge at the subject while focusing on the subject. FIG. 4 shows the optical axes of the camera modules and the optical axis of the spatial depth sensing module.
  • With the optical axes of the respective camera modules converging at the subject via the optical axis direction control modules, the device can adapt to smooth depth-of-field switching in various scenes during moving shooting, effectively improving the viewing comfort of 3D video.
  • The 3D imaging device further includes an image processing module 304 configured to perform adjustment, compression encoding, and/or decompression decoding on the stereoscopic image output by the 3D camera module 301; the image processing module 304 is deployed separately or within the CPU processing module 302.
  • the 3D imaging device further includes an ISP processing module 305, and the number of the ISP processing module 305 is one or more;
  • when there are multiple ISP processing modules 305, their number is the same as the number of camera modules of the 3D camera module 301;
  • when there is a single ISP processing module 305, it can be deployed separately, or within the CPU processing module 302 or the image processing module 304.
  • Further, the number of spatial depth sensing modules 303 is greater than or equal to the number of 3D camera modules 301, and the spatial depth sensing module 303 is deployed in the 3D camera module 301 or in the CPU processing module 302.
  • The spatial depth sensing module 303 includes one of, or a combination of, a color image sensor, a black-and-white image sensor, a structured light sensor, a binocular or multi-view parallax calculator, an optical band-pass filter, an infrared light emitter, an infrared light receiving sensor, a laser emitter, a laser receiving sensor, a radio reflective radar, and an ultrasonic reflective radar.
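One of the listed options, the binocular or multi-view parallax calculator, estimates depth from the disparity between the two views. A sketch under the standard pinhole stereo model (Z = f·b/d); the numeric values in the usage note are illustrative assumptions, not parameters from the patent.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Pinhole stereo model: a point at depth Z projects with horizontal
    # disparity d = f * b / Z between two rectified views, so Z = f * b / d.
    # Units: f in pixels, b in metres, d in pixels; Z comes out in metres.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive and non-zero")
    return focal_px * baseline_m / disparity_px
```

For example, with an assumed 800 px focal length and a 60 mm baseline, a 24 px disparity corresponds to a subject 2 m away.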
  • the 3D imaging device further includes a wireless network module 306, configured to send the stereoscopic image processed by the CPU processing module 302 to another mobile terminal or an Internet device.
  • Preferably, the stereoscopic image can also be saved in the 3D imaging device.
  • the 3D photographing apparatus further includes a power module for supplying power to all modules of the 3D photographing apparatus.
  • the 3D photographing device further includes a wired external interface for transmitting or receiving instructions and data.
  • In this embodiment, a spatial depth sensing module is added to obtain the scene depth, and the optical axes of the two camera modules are adjusted by their respective telescopic devices to converge at the subject while simultaneously focusing on it; this adapts to smooth depth-of-field switching in various scenes during moving shooting and effectively improves 3D viewing comfort.
  • FIG. 5 is a flowchart of an implementation of a 3D camera control method according to Embodiment 3 of the present invention. The method is applicable to the 3D camera device described in Embodiment 2, and is described in detail as follows:
  • In step S501, the spatial depth of the subject acquired by the spatial depth sensing module of the 3D imaging device is received; the spatial depth is the axial distance from the spatial depth sensing module to the subject along the optical axis.
  • In this embodiment, the number of spatial depth sensing modules is greater than or equal to the number of 3D camera modules, and the spatial depth sensing module is deployed in the 3D camera module or in a CPU processing module.
  • The spatial depth sensing module comprises one of, or a combination of, a color image sensor, a black-and-white image sensor, a structured light sensor, a binocular or multi-view parallax calculator, an optical band-pass filter, an infrared light emitter, an infrared light receiving sensor, a laser emitter, a laser receiving sensor, a radio reflective radar, and an ultrasonic reflective radar.
  • The spatial depth is the depth of field corresponding to the stereoscopic image.
  • In step S502, the corrected 3D convergence point of the two camera modules corresponding to the spatial depth sensing module is obtained according to the spatial depth.
  • In this embodiment, the current 3D convergence position (X, Y) and the spatial depth Z are acquired with the two camera modules, so that the corrected 3D convergence point coordinates of the two camera modules corresponding to the spatial depth sensing module are (X, Y, Z).
  • In step S503, the optical axes of the two camera modules are controlled to converge at the corrected 3D convergence point, and focusing is synchronized.
  • In this embodiment, the optical axis convergence parameter is the set of telescopic parameters sent to the telescopic motors in each camera module, converted from the magnitude of the error between the corrected 3D convergence point and the current 3D convergence point. Controlling the optical axes of the two camera modules to converge at the corrected 3D convergence point is specifically:
  • acquiring an optical axis convergence parameter according to the corrected 3D convergence point and the current 3D convergence point obtained by the two camera modules;
  • performing, according to the adjustment instruction corresponding to the optical axis convergence parameter, telescopic movement with the telescopic motors of the optical axis direction control modules so that the optical axes of the corresponding camera modules converge at the coordinates of the 3D convergence point.
  • Controlling the two camera modules to focus synchronously is specifically:
  • calculating the corrected focus position and corrected focus depth of the two camera modules when the optical axes converge at the corrected 3D convergence point;
  • controlling the two camera modules to refocus using the corrected focus position and corrected focus depth as parameters.
  • In this embodiment, the 3D convergence point is calculated according to the position coordinates of the subject and the scene depth, and the optical axes of the two camera modules are adjusted by their respective optical axis direction control modules to converge at the 3D convergence point and focus, realizing optical axis convergence, line-of-sight following, and smooth depth-of-field switching, which effectively improves 3D viewing comfort.
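The S501–S503 sequence can be read as one iteration of a closed control loop. The sketch below assumes hypothetical module interfaces (`read_depth`, `current_convergence_xy`, `converge_at`, `refocus_at`) that the patent does not define; it shows only the ordering of the three steps.

```python
def control_step(depth_sensor, cameras, axis_controllers):
    # One pass of the S501-S503 loop.
    z = depth_sensor.read_depth()                # S501: spatial depth of subject
    x, y = cameras.current_convergence_xy()      # S502: current (X, Y) position
    target = (x, y, z)                           # corrected 3D convergence point
    for ctrl in axis_controllers:                # S503: steer both optical axes
        ctrl.converge_at(target)
    cameras.refocus_at(target)                   # S503: synchronized refocus
    return target
```

Running this step on every frame is what would give the line-of-sight-following behaviour the summary describes: as the subject's depth changes, the target point and both axes track it continuously.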
  • FIG. 6 is a block diagram showing a specific structure of a 3D camera control apparatus according to Embodiment 4 of the present invention. For convenience of description, only parts related to the embodiment of the present invention are shown.
  • the 3D imaging control device includes a depth receiving unit 61, a convergence point acquiring unit 62, and a shooting adjustment unit 63.
  • the depth receiving unit 61 is configured to receive the spatial depth of the subject acquired by the spatial depth sensing module of the 3D imaging device, the spatial depth being the axial distance from the spatial depth sensing module to the subject along the optical axis;
  • the convergence point acquiring unit 62, connected to the depth receiving unit 61, is configured to obtain, according to the spatial depth, the corrected 3D convergence point of the two camera modules corresponding to the spatial depth sensing module;
  • the shooting adjustment unit 63, connected to the convergence point acquiring unit 62, is configured to control the optical axes of the two camera modules to converge at the corrected 3D convergence point while focusing synchronously.
  • Further, the convergence point acquiring unit 62 is specifically configured to acquire the current 3D convergence position (X, Y) with the two camera modules and the spatial depth Z, so that the corrected 3D convergence point coordinates of the two camera modules corresponding to the spatial depth sensing module are (X, Y, Z).
  • the photographing adjustment unit 63 includes:
  • a parameter obtaining module configured to acquire an optical axis convergence parameter according to the corrected 3D convergence point and the current 3D convergence point obtained by the two camera modules;
  • a first adjustment module configured to perform, according to the adjustment instruction corresponding to the optical axis convergence parameter, telescopic movement with the telescopic motors of the optical axis direction control module so that the optical axis of the corresponding camera module converges at the coordinates of the 3D convergence point.
  • the photographing adjustment unit 63 further includes:
  • a focus calculation module configured to calculate the corrected focus position and corrected focus depth of the two camera modules when the optical axes converge at the corrected 3D convergence point;
  • a second control module configured to control the two camera modules to refocus using the corrected focus position and corrected focus depth as parameters.
  • In this embodiment, the 3D convergence point is calculated according to the position coordinates of the subject and the scene depth, and the optical axes of the two camera modules are adjusted by their respective optical axis direction control modules to converge at the 3D convergence point and focus, realizing optical axis convergence, line-of-sight following, and smooth depth-of-field switching, which effectively improves 3D viewing comfort.
  • The 3D camera control device provided by this embodiment of the present invention can be applied in the corresponding method Embodiment 3 described above; for details, refer to the description of Embodiment 3, which is not repeated here.
  • Embodiment 5 of the present invention provides a storage medium, the storage medium comprising a program, wherein the program causes the 3D photographing device to execute:
  • receiving the spatial depth of the subject acquired by the spatial depth sensing module of the 3D imaging device, the spatial depth being the axial distance from the spatial depth sensing module to the subject along the optical axis;
  • obtaining, according to the spatial depth, the corrected 3D convergence point of the two camera modules corresponding to the spatial depth sensing module;
  • controlling the optical axes of the two camera modules to converge at the corrected 3D convergence point while focusing synchronously.
  • Further, the program causes the 3D photographing device to further perform:
  • acquiring the current 3D convergence position (X, Y) with the two camera modules and the spatial depth Z, so that the corrected 3D convergence point coordinates of the two camera modules corresponding to the spatial depth sensing module are (X, Y, Z).
  • Further, the program causes the 3D photographing device to further perform:
  • acquiring an optical axis convergence parameter according to the corrected 3D convergence point and the current 3D convergence point obtained by the two camera modules;
  • performing, according to the adjustment instruction corresponding to the optical axis convergence parameter, telescopic movement with the telescopic motors of the optical axis direction control modules so that the optical axes of the corresponding camera modules converge at the coordinates of the 3D convergence point.
  • Further, the program causes the 3D photographing device to further perform:
  • calculating the corrected focus position and corrected focus depth of the two camera modules when the optical axes converge at the corrected 3D convergence point;
  • controlling the two camera modules to refocus using the corrected focus position and corrected focus depth as parameters.

Abstract

The present invention is applicable to the technical field of stereoscopic display, and provides a 3D camera control method and a 3D camera control device. The method includes: acquiring the spatial depth of a subject through a spatial depth sensing module of a 3D shooting device, the spatial depth being the axial distance from the spatial depth sensing module to the subject along the optical axis; obtaining, according to the spatial depth, the corrected 3D convergence point of the two camera modules corresponding to the spatial depth sensing module; and controlling the optical axes of the two camera modules to converge at the corrected 3D convergence point while focusing synchronously. The present invention flexibly adjusts the optical axis convergence and focusing of the camera modules in real time, realizing optical axis convergence and line-of-sight following.

Description

A 3D Camera Control Method and 3D Camera Control Device  Technical Field
The present invention belongs to the technical field of 3D imaging, and in particular relates to a 3D camera control method and a 3D camera control device.
Background Art
3D shooting has developed rapidly in home entertainment and film production in recent years. 3D shooting is achieved by using two cameras to simulate the human eyes and capture the left-eye and right-eye pictures. At present, professional 3D shooting equipment brings people high-quality 3D enjoyment; for example, in the shooting of Avatar, long shots and close-ups were completed by two separate sets of large-scale 3D shooting equipment, each worth millions. However, such professional 3D shooting equipment has a limited range of use. On the other hand, mobile devices such as home VR 3D cameras can already present 3D image effects anywhere, and their usage scenarios are broader, but these mobile devices fall far short of the results produced by large 3D shooting equipment. In particular, when a 3D camera shoots close-range objects, 3D focusing and line-of-sight convergence cannot be achieved, which is far from the real experience of the human eye; when the view switches from a close-up to a distant shot, the 3D shooting depth of field cannot switch smoothly, or does not switch at all, and only a fixed 3D depth of field is used. In short, current 3D shooting methods cannot achieve synchronized real-time focusing, nor real-time line-of-sight convergence and line-of-sight following while moving, resulting in a poor viewing experience.
Summary of the Invention
The embodiments of the present invention provide a 3D camera control method and a 3D camera control device, which aim to solve the problem that the prior art cannot achieve synchronized real-time focusing or real-time line-of-sight convergence and line-of-sight following while moving, resulting in a poor viewing experience.
In one aspect, a 3D camera control method is provided, the method including:
receiving the spatial depth of the subject acquired by the spatial depth sensing module of the 3D shooting device, the spatial depth being the axial distance from the spatial depth sensing module to the subject along the optical axis;
obtaining, according to the spatial depth, the corrected 3D convergence point of the two camera modules corresponding to the spatial depth sensing module;
controlling the optical axes of the two camera modules to converge at the corrected 3D convergence point while focusing synchronously.
Further, obtaining, according to the spatial depth, the corrected 3D convergence point of the two camera modules corresponding to the spatial depth sensing module is specifically:
acquiring the current 3D convergence position (X, Y) with the two camera modules and the spatial depth Z, so that the corrected 3D convergence point coordinates of the two camera modules corresponding to the spatial depth sensing module are (X, Y, Z).
Further, controlling the optical axes of the two camera modules to converge at the corrected 3D convergence point is specifically:
acquiring an optical axis convergence parameter according to the corrected 3D convergence point and the current 3D convergence point obtained by the two camera modules;
performing, according to the adjustment instruction corresponding to the optical axis convergence parameter, telescopic movement with the telescopic motors of the optical axis direction control modules so that the optical axes of the corresponding camera modules converge at the coordinates of the 3D convergence point.
Further, controlling the two camera modules to focus synchronously is specifically:
calculating the corrected focus position and corrected focus depth of the two camera modules when the optical axes converge at the corrected 3D convergence point;
controlling the two camera modules to refocus using the corrected focus position and corrected focus depth as parameters.
In another aspect, a 3D camera control device is provided, the device including:
a depth receiving unit configured to receive the spatial depth of the subject acquired by the spatial depth sensing module of the 3D shooting device, the spatial depth being the axial distance from the spatial depth sensing module to the subject along the optical axis;
a convergence point acquiring unit, connected to the depth receiving unit, configured to obtain, according to the spatial depth, the corrected 3D convergence point of the two camera modules corresponding to the spatial depth sensing module;
a shooting adjustment unit, connected to the convergence point acquiring unit, configured to control the optical axes of the two camera modules to converge at the corrected 3D convergence point while focusing synchronously.
Further, the convergence point acquiring unit is specifically configured to acquire the current 3D convergence position (X, Y) with the two camera modules and the spatial depth Z, so that the corrected 3D convergence point coordinates of the two camera modules corresponding to the spatial depth sensing module are (X, Y, Z).
Further, the shooting adjustment unit includes:
a parameter obtaining module configured to acquire an optical axis convergence parameter according to the corrected 3D convergence point and the current 3D convergence point obtained by the two camera modules;
a first adjustment module configured to perform, according to the adjustment instruction corresponding to the optical axis convergence parameter, telescopic movement with the telescopic motors of the optical axis direction control module so that the optical axis of the corresponding camera module converges at the coordinates of the 3D convergence point.
Further, the shooting adjustment unit further includes:
a focus calculation module configured to calculate the corrected focus position and corrected focus depth of the two camera modules when the optical axes converge at the corrected 3D convergence point;
a second control module configured to control the two camera modules to refocus using the corrected focus position and corrected focus depth as parameters.
In yet another aspect, a storage medium is provided, the storage medium comprising a program, wherein the program causes the 3D shooting device to execute:
receiving the spatial depth of the subject acquired by the spatial depth sensing module of the 3D shooting device, the spatial depth being the axial distance from the spatial depth sensing module to the subject along the optical axis;
obtaining, according to the spatial depth, the corrected 3D convergence point of the two camera modules corresponding to the spatial depth sensing module;
controlling the optical axes of the two camera modules to converge at the corrected 3D convergence point while focusing synchronously.
Further, the program causes the 3D shooting device to further execute:
acquiring the current 3D convergence position (X, Y) with the two camera modules and the spatial depth Z, so that the corrected 3D convergence point coordinates of the two camera modules corresponding to the spatial depth sensing module are (X, Y, Z).
Further, the program causes the 3D shooting device to further execute:
acquiring an optical axis convergence parameter according to the corrected 3D convergence point and the current 3D convergence point obtained by the two camera modules;
performing, according to the adjustment instruction corresponding to the optical axis convergence parameter, telescopic movement with the telescopic motors of the optical axis direction control modules so that the optical axes of the corresponding camera modules converge at the coordinates of the 3D convergence point.
Further, the program causes the 3D shooting device to further execute:
calculating the corrected focus position and corrected focus depth of the two camera modules when the optical axes converge at the corrected 3D convergence point;
controlling the two camera modules to refocus using the corrected focus position and corrected focus depth as parameters.
The embodiments of the present application include the following advantages:
The 3D convergence point is calculated according to the position coordinates of the subject and the scene depth, and the optical axes of the two camera modules are adjusted by their respective optical axis direction control modules to converge at the 3D convergence point and focus, realizing optical axis convergence, line-of-sight following, and smooth depth-of-field switching, which effectively improves 3D viewing comfort.
Brief Description of the Drawings
FIG. 1 is a structural block diagram of a 3D camera module according to Embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of the change of the optical axis caused by the movement of the telescopic motors according to Embodiment 1 of the present invention;
FIG. 3 is a structural block diagram of a 3D shooting device according to Embodiment 2 of the present invention;
FIG. 4 is a schematic diagram of the spatial depth sensing module and the optical axes of the camera modules according to Embodiment 2 of the present invention;
FIG. 5 is a flowchart of a 3D camera control method according to Embodiment 3 of the present invention;
FIG. 6 is a structural block diagram of a 3D camera control device according to Embodiment 4 of the present invention.
Detailed Description of the Embodiments
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, and are not intended to limit it.
The implementation of the present invention is described in detail below with reference to specific embodiments:
Embodiment 1
FIG. 1 shows a specific structural block diagram of the 3D camera module provided by Embodiment 1 of the present invention; for ease of description, only the parts related to this embodiment of the invention are shown. In this embodiment, the 3D camera module includes: a control module 1, a first camera module 2, a second camera module 3, and a fixing device 5; the control module 1 is electrically connected to the first camera module 2 and the second camera module 3; the first camera module 2 and the second camera module 3 are mounted in parallel on the fixing device 5; the first camera module 2 includes a first lens 21, a first image sensor 22 for converting the optical image obtained by the first lens 21 into a first image, and a first optical axis direction control module; the second camera module 3 includes a second lens 31, a second image sensor 32 for converting the optical image obtained by the second lens 31 into a second image, and a second optical axis direction control module; the control module 1 is configured to synthesize the first image and the second image into a stereoscopic image.
The first lens 21 is a wide-angle lens, and the first image sensor 22 is configured to convert the optical image obtained by the first lens 21 into a first image; the first optical axis direction control module consists of four telescopic motors 23 disposed around the first lens 21, and the optical axis of the first lens 21 is adjusted by controlling the four telescopic motors 23 of the first optical axis direction control module to extend and retract by different amounts;
the second lens 31 is a wide-angle lens, and the second image sensor 32 is configured to convert the optical image obtained by the second lens 31 into a second image; the second optical axis direction control module consists of four telescopic motors 23 disposed around the second lens 31, and the optical axis of the second lens 31 is adjusted by controlling the four telescopic motors 23 of the second optical axis direction control module to extend and retract by different amounts. FIG. 2 shows a schematic diagram of telescopic motor movement changing the optical axis of the second lens. It is easy to see that, since the first and second optical axis direction control modules each consist of four telescopic motors that can move flexibly, a change in any one or more of the motors adjusts the lens, making the convergence of the lens's optical axis more flexible; real-time focusing can be achieved, real-time line-of-sight convergence is unaffected even while moving, and line-of-sight following is realized.
Preferably, the number of telescopic motors is 2, 6, 8, 10 or 12.
Further, the control module 1 synchronously controls the first and second optical axis direction control modules to adjust the optical axes of the first camera module 2 and the second camera module 3 to converge at the 3D convergence point, and controls the first camera module 2 and the second camera module 3 to focus.
The control module 1 is further configured to control the first image sensor 22 and the second image sensor 32 to output the first and second images with line and field synchronization, and to control parameter settings.
Further, the control module 1 is further configured to merge the first image and the second image into a stereoscopic image.
Further, the control module 1 is one of, or a combination of, a field-programmable gate array (FPGA), a DSP chip, and a CPU chip.
In this embodiment, by arranging multiple telescopic motors around each camera module, the control module simultaneously adjusts the convergence of the optical axes of the two camera modules and their respective focusing in multiple directions, flexibly and accurately.
Embodiment 2
FIG. 3 shows a specific structural block diagram of the 3D shooting device provided by Embodiment 2 of the present invention; for ease of description, only the parts related to this embodiment of the invention are shown. In this embodiment, the 3D shooting device includes: at least one 3D camera module 301. Preferably, the 3D shooting device further includes: a CPU processing module 302 and a spatial depth sensing module 303 corresponding to the 3D camera module; the 3D camera module 301 and the spatial depth sensing module 303 are both electrically connected to the CPU processing module 302, and the CPU processing module 302 is configured to adjust in real time, according to the spatial depth parameter of the subject acquired by the spatial depth sensing module 303, the optical axes of the first camera module and the second camera module of the 3D camera module corresponding to the spatial depth sensing module 303 so that they converge at the subject while focusing on the subject. FIG. 4 shows the optical axes of the camera modules and the optical axis of the spatial depth sensing module. With the optical axes of the respective camera modules converging at the subject via the optical axis direction control modules, the device can adapt to smooth depth-of-field switching in various scenes during moving shooting, effectively improving the viewing comfort of 3D video.
Further, the 3D shooting device further includes an image processing module 304 configured to perform adjustment, compression encoding, and/or decompression decoding on the stereoscopic image output by the 3D camera module 301; the image processing module 304 is deployed separately or within the CPU processing module 302.
Further, the 3D shooting device further includes one or more ISP processing modules 305;
when there are multiple ISP processing modules 305, their number is the same as the number of camera modules of the 3D camera module 301;
when there is a single ISP processing module 305, it can be deployed separately, or within the CPU processing module 302 or the image processing module 304.
Further, the number of spatial depth sensing modules 303 is greater than or equal to the number of 3D camera modules 301, and the spatial depth sensing module 303 is deployed in the 3D camera module 301 or in the CPU processing module 302.
The spatial depth sensing module 303 includes one of, or a combination of, a color image sensor, a black-and-white image sensor, a structured light sensor, a binocular or multi-view parallax calculator, an optical band-pass filter, an infrared light emitter, an infrared light receiving sensor, a laser emitter, a laser receiving sensor, a radio reflective radar, and an ultrasonic reflective radar.
Further, the 3D shooting device further includes a wireless network module 306 configured to send the stereoscopic image processed by the CPU processing module 302 to other mobile terminals or Internet devices. Preferably, the stereoscopic image can also be saved in the 3D shooting device.
Further, the 3D shooting device further includes a power module configured to supply power to all modules of the 3D shooting device.
Further, the 3D shooting device further includes a wired external interface for sending or receiving instructions and data.
In this embodiment, a spatial depth sensing module is added to obtain the scene depth, and the optical axes of the two camera modules are adjusted by their respective telescopic devices to converge at the subject while simultaneously focusing on it; this adapts to smooth depth-of-field switching in various scenes during moving shooting and effectively improves 3D viewing comfort.
For details of the 3D camera module involved in this embodiment of the present invention, refer to the description of Embodiment 1 above, which is not repeated here.
Embodiment 3
FIG. 5 shows the implementation flow of the 3D camera control method provided by Embodiment 3 of the present invention. The method is applicable to the 3D shooting device described in Embodiment 2, and is described in detail as follows:
In step S501, the spatial depth of the subject acquired by the spatial depth sensing module of the 3D shooting device is received; the spatial depth is the axial distance from the spatial depth sensing module to the subject along the optical axis.
In this embodiment, the number of spatial depth sensing modules is greater than or equal to the number of 3D camera modules, and the spatial depth sensing module is deployed in the 3D camera module or in a CPU processing module. The spatial depth sensing module comprises one of, or a combination of, a color image sensor, a black-and-white image sensor, a structured light sensor, a binocular or multi-view parallax calculator, an optical band-pass filter, an infrared light emitter, an infrared light receiving sensor, a laser emitter, a laser receiving sensor, a radio reflective radar, and an ultrasonic reflective radar. The spatial depth is the depth of field corresponding to the stereoscopic image.
In step S502, the corrected 3D convergence point of the two camera modules corresponding to the spatial depth sensing module is obtained according to the spatial depth.
In this embodiment, the current 3D convergence position (X, Y) and the spatial depth Z are acquired with the two camera modules, so that the corrected 3D convergence point coordinates of the two camera modules corresponding to the spatial depth sensing module are (X, Y, Z).
In step S503, the optical axes of the two camera modules are controlled to converge at the corrected 3D convergence point, and focusing is synchronized.
In this embodiment, the optical axis convergence parameter is the set of telescopic parameters sent to the telescopic motors in each camera module, converted from the magnitude of the error between the corrected 3D convergence point and the current 3D convergence point. Controlling the optical axes of the two camera modules to converge at the corrected 3D convergence point is specifically:
acquiring an optical axis convergence parameter according to the corrected 3D convergence point and the current 3D convergence point obtained by the two camera modules;
performing, according to the adjustment instruction corresponding to the optical axis convergence parameter, telescopic movement with the telescopic motors of the optical axis direction control modules so that the optical axes of the corresponding camera modules converge at the coordinates of the 3D convergence point.
Controlling the two camera modules to focus synchronously is specifically:
calculating the corrected focus position and corrected focus depth of the two camera modules when the optical axes converge at the corrected 3D convergence point;
controlling the two camera modules to refocus using the corrected focus position and corrected focus depth as parameters.
In this embodiment, the 3D convergence point is calculated according to the position coordinates of the subject and the scene depth, and the optical axes of the two camera modules are adjusted by their respective optical axis direction control modules to converge at the 3D convergence point and focus, realizing optical axis convergence, line-of-sight following, and smooth depth-of-field switching, which effectively improves 3D viewing comfort.
Embodiment 4
FIG. 6 shows a specific structural block diagram of the 3D camera control device provided by Embodiment 4 of the present invention; for ease of description, only the parts related to this embodiment of the invention are shown. In this embodiment, the 3D camera control device includes: a depth receiving unit 61, a convergence point acquiring unit 62, and a shooting adjustment unit 63.
The depth receiving unit 61 is configured to receive the spatial depth of the subject acquired by the spatial depth sensing module of the 3D shooting device, the spatial depth being the axial distance from the spatial depth sensing module to the subject along the optical axis;
the convergence point acquiring unit 62, connected to the depth receiving unit 61, is configured to obtain, according to the spatial depth, the corrected 3D convergence point of the two camera modules corresponding to the spatial depth sensing module;
the shooting adjustment unit 63, connected to the convergence point acquiring unit 62, is configured to control the optical axes of the two camera modules to converge at the corrected 3D convergence point while focusing synchronously.
Further, the convergence point acquiring unit 62 is specifically configured to acquire the current 3D convergence position (X, Y) with the two camera modules and the spatial depth Z, so that the corrected 3D convergence point coordinates of the two camera modules corresponding to the spatial depth sensing module are (X, Y, Z).
Further, the shooting adjustment unit 63 includes:
a parameter obtaining module configured to acquire an optical axis convergence parameter according to the corrected 3D convergence point and the current 3D convergence point obtained by the two camera modules;
a first adjustment module configured to perform, according to the adjustment instruction corresponding to the optical axis convergence parameter, telescopic movement with the telescopic motors of the optical axis direction control module so that the optical axis of the corresponding camera module converges at the coordinates of the 3D convergence point.
Further, the shooting adjustment unit 63 further includes:
a focus calculation module configured to calculate the corrected focus position and corrected focus depth of the two camera modules when the optical axes converge at the corrected 3D convergence point;
a second control module configured to control the two camera modules to refocus using the corrected focus position and corrected focus depth as parameters.
In this embodiment, the 3D convergence point is calculated according to the position coordinates of the subject and the scene depth, and the optical axes of the two camera modules are adjusted by their respective optical axis direction control modules to converge at the 3D convergence point and focus, realizing optical axis convergence, line-of-sight following, and smooth depth-of-field switching, which effectively improves 3D viewing comfort.
The 3D camera control device provided by this embodiment of the present invention can be applied in the corresponding method Embodiment 3 described above; for details, refer to the description of Embodiment 3, which is not repeated here.
Embodiment 5
Embodiment 5 of the present invention provides a storage medium, the storage medium comprising a program, wherein the program causes the 3D shooting device to execute:
receiving the spatial depth of the subject acquired by the spatial depth sensing module of the 3D shooting device, the spatial depth being the axial distance from the spatial depth sensing module to the subject along the optical axis;
obtaining, according to the spatial depth, the corrected 3D convergence point of the two camera modules corresponding to the spatial depth sensing module;
controlling the optical axes of the two camera modules to converge at the corrected 3D convergence point while focusing synchronously.
Further, the program causes the 3D shooting device to further execute:
acquiring the current 3D convergence position (X, Y) with the two camera modules and the spatial depth Z, so that the corrected 3D convergence point coordinates of the two camera modules corresponding to the spatial depth sensing module are (X, Y, Z).
Further, the program causes the 3D shooting device to further execute:
acquiring an optical axis convergence parameter according to the corrected 3D convergence point and the current 3D convergence point obtained by the two camera modules;
performing, according to the adjustment instruction corresponding to the optical axis convergence parameter, telescopic movement with the telescopic motors of the optical axis direction control modules so that the optical axes of the corresponding camera modules converge at the coordinates of the 3D convergence point.
Further, the program causes the 3D shooting device to further execute:
calculating the corrected focus position and corrected focus depth of the two camera modules when the optical axes converge at the corrected 3D convergence point;
controlling the two camera modules to refocus using the corrected focus position and corrected focus depth as parameters.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (12)

  1. A 3D camera control method, characterized in that the 3D camera control method comprises:
    receiving the spatial depth of a photographed object acquired by a spatial depth perception module of a 3D shooting device, the spatial depth being the axial distance along the optical axis from the spatial depth perception module to the photographed object;
    obtaining, according to the spatial depth, a corrected 3D convergence point of the two camera modules corresponding to the spatial depth perception module;
    controlling the optical axes of the two camera modules to converge at the corrected 3D convergence point, and focusing the two camera modules synchronously.
  2. The 3D camera control method according to claim 1, characterized in that obtaining, according to the spatial depth, the corrected 3D convergence point of the two camera modules corresponding to the spatial depth perception module specifically comprises:
    acquiring, via the two camera modules, the current 3D convergence position (X, Y) and the spatial depth Z, whereupon the coordinates of the corrected 3D convergence point of the two camera modules corresponding to the spatial depth perception module are (X, Y, Z).
  3. The 3D camera control method according to claim 1, characterized in that controlling the optical axes of the two camera modules to converge at the corrected 3D convergence point specifically comprises:
    obtaining optical-axis convergence parameters according to the corrected 3D convergence point and the current 3D convergence point at which the two camera modules converge;
    driving, according to an adjustment instruction corresponding to the optical-axis convergence parameters, the telescopic motor of the optical-axis direction control module to extend or retract so that the optical axis of the corresponding camera module converges at the coordinates of the 3D convergence point.
  4. The 3D camera control method according to claim 3, characterized in that controlling the two camera modules to focus synchronously specifically comprises:
    calculating the corrected focus position and corrected focus depth of the two camera modules when their optical axes converge at the corrected 3D convergence point;
    controlling the two camera modules to refocus using the corrected focus position and corrected focus depth as parameters.
  5. A 3D camera control device, characterized in that the 3D camera control device comprises:
    a depth receiving unit, configured to receive the spatial depth of a photographed object acquired by a spatial depth perception module of a 3D shooting device, the spatial depth being the axial distance along the optical axis from the spatial depth perception module to the photographed object;
    a convergence point obtaining unit, connected to the depth receiving unit and configured to obtain, according to the spatial depth, a corrected 3D convergence point of the two camera modules corresponding to the spatial depth perception module;
    a shooting adjustment unit, connected to the convergence point obtaining unit and configured to control the optical axes of the two camera modules to converge at the corrected 3D convergence point and to focus the two camera modules synchronously.
  6. The 3D camera control device according to claim 5, characterized in that the convergence point obtaining unit is specifically configured to acquire, via the two camera modules, the current 3D convergence position (X, Y) and the spatial depth Z, whereupon the coordinates of the corrected 3D convergence point of the two camera modules corresponding to the spatial depth perception module are (X, Y, Z).
  7. The 3D camera control device according to claim 5, characterized in that the shooting adjustment unit comprises:
    a parameter obtaining module, configured to obtain optical-axis convergence parameters according to the corrected 3D convergence point and the current 3D convergence point at which the two camera modules converge;
    a first adjustment module, configured to drive, according to an adjustment instruction corresponding to the optical-axis convergence parameters, the telescopic motor of the optical-axis direction control module to extend or retract so that the optical axis of the corresponding camera module converges at the coordinates of the 3D convergence point.
  8. The 3D camera control device according to claim 7, characterized in that the shooting adjustment unit further comprises:
    a focus calculation module, configured to calculate the corrected focus position and corrected focus depth of the two camera modules when their optical axes converge at the corrected 3D convergence point;
    a second control module, configured to control the two camera modules to refocus using the corrected focus position and corrected focus depth as parameters.
  9. A storage medium, characterized in that the storage medium comprises a program, wherein the program causes a 3D shooting device to execute:
    receiving the spatial depth of a photographed object acquired by a spatial depth perception module of the 3D shooting device, the spatial depth being the axial distance along the optical axis from the spatial depth perception module to the photographed object;
    obtaining, according to the spatial depth, a corrected 3D convergence point of the two camera modules corresponding to the spatial depth perception module;
    controlling the optical axes of the two camera modules to converge at the corrected 3D convergence point, and focusing the two camera modules synchronously.
  10. The storage medium according to claim 9, characterized in that the program further causes the 3D shooting device to execute:
    acquiring, via the two camera modules, the current 3D convergence position (X, Y) and the spatial depth Z, whereupon the coordinates of the corrected 3D convergence point of the two camera modules corresponding to the spatial depth perception module are (X, Y, Z).
  11. The storage medium according to claim 9, characterized in that the program further causes the 3D shooting device to execute:
    obtaining optical-axis convergence parameters according to the corrected 3D convergence point and the current 3D convergence point at which the two camera modules converge;
    driving, according to an adjustment instruction corresponding to the optical-axis convergence parameters, the telescopic motor of the optical-axis direction control module to extend or retract so that the optical axis of the corresponding camera module converges at the coordinates of the 3D convergence point.
  12. The storage medium according to claim 11, characterized in that the program further causes the 3D shooting device to execute:
    calculating the corrected focus position and corrected focus depth of the two camera modules when their optical axes converge at the corrected 3D convergence point;
    controlling the two camera modules to refocus using the corrected focus position and corrected focus depth as parameters.
PCT/CN2017/107518 2016-11-02 2017-10-24 3D camera control method and 3D camera control device WO2018082480A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610943668.9A CN106412557A (zh) 2016-11-02 2016-11-02 3D camera control method and 3D camera control device
CN201610943668.9 2016-11-02

Publications (1)

Publication Number Publication Date
WO2018082480A1 true WO2018082480A1 (zh) 2018-05-11

Family

ID=58013678

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/107518 WO2018082480A1 (zh) 3D camera control method and 3D camera control device

Country Status (2)

Country Link
CN (1) CN106412557A (zh)
WO (1) WO2018082480A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106412557A (zh) * 2016-11-02 2017-02-15 深圳市魔眼科技有限公司 3D camera control method and 3D camera control device
CN106412403A (zh) * 2016-11-02 2017-02-15 深圳市魔眼科技有限公司 3D camera module and 3D shooting device
CN107147891B (zh) * 2017-05-17 2019-03-01 浙江大学 Trinocular depth-acquisition camera with adjustable optical axes
CN115499640B (zh) * 2021-06-17 2024-05-07 深圳市光鉴科技有限公司 Display device and electronic equipment with a 3D camera module
CN114422665A (zh) * 2021-12-23 2022-04-29 广东未来科技有限公司 Multi-camera-based shooting method and related device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101651841A (zh) * 2008-08-13 2010-02-17 华为技术有限公司 Method, system and device for implementing stereoscopic video communication
CN103888750A (zh) * 2012-12-20 2014-06-25 比比威株式会社 3D image capture control system and method
WO2014154839A1 (en) * 2013-03-27 2014-10-02 Mindmaze S.A. High-definition 3d camera device
CN106412403A (zh) * 2016-11-02 2017-02-15 深圳市魔眼科技有限公司 3D camera module and 3D shooting device
CN106412557A (zh) * 2016-11-02 2017-02-15 深圳市魔眼科技有限公司 3D camera control method and 3D camera control device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102289144B (zh) * 2011-06-30 2013-12-18 浙江工业大学 Intelligent 3D camera device based on an omnidirectional vision sensor
CN102665087B (zh) * 2012-04-24 2014-08-06 浙江工业大学 Automatic shooting-parameter adjustment system for 3D stereoscopic camera equipment
KR101888956B1 (ko) * 2012-05-31 2018-08-17 엘지이노텍 주식회사 Camera module and auto-focusing method thereof
CN103973957B (zh) * 2013-01-29 2018-07-06 上海八运水科技发展有限公司 Binocular 3D camera auto-focus system and method
CN103220544B (zh) * 2013-04-25 2015-05-27 重庆大学 Active off-axis parallel stereoscopic imaging method
TWI515470B (zh) * 2014-04-30 2016-01-01 聚晶半導體股份有限公司 Auto-focus system using multiple lenses and method thereof
CN104113748A (zh) * 2014-07-17 2014-10-22 冯侃 3D shooting system and implementation method
CN105451012B (zh) * 2015-11-18 2018-07-31 湖南拓视觉信息技术有限公司 Three-dimensional imaging system and three-dimensional imaging method
CN105956586B (zh) * 2016-07-15 2019-06-11 瑞胜科信息(深圳)有限公司 Intelligent tracking system based on a TOF 3D camera


Also Published As

Publication number Publication date
CN106412557A (zh) 2017-02-15


Legal Events

Date Code Title Description
NENP: Non-entry into the national phase; Ref country code: DE
32PN: EP: public notification in the EP bulletin as the address of the addressee cannot be established; Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08/10/2019)
121: EP: the EPO has been informed by WIPO that EP was designated in this application; Ref document number: 17867667; Country of ref document: EP; Kind code of ref document: A1
122: EP: PCT application non-entry in the European phase; Ref document number: 17867667; Country of ref document: EP; Kind code of ref document: A1