WO2017149441A1 - Adaptive control of image capture parameters in virtual reality cameras - Google Patents


Info

Publication number
WO2017149441A1
Authority
WO
WIPO (PCT)
Prior art keywords
component
camera
time instant
image capture
objects
Prior art date
Application number
PCT/IB2017/051152
Other languages
French (fr)
Inventor
Krishna Govindarao
Mithun Uliyar
Ravi Shenoy
Original Assignee
Nokia Technologies Oy
Nokia Usa Inc.
Priority date
Filing date
Publication date
Priority to IN201641007009
Application filed by Nokia Technologies Oy and Nokia Usa Inc.
Publication of WO2017149441A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; studio devices; studio equipment; cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, TV cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
    • H04N 5/225: Television cameras; cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, camcorders, webcams, camera modules specially adapted for being embedded in other devices, e.g. mobile phones, computers or vehicles
    • H04N 5/232: Devices for controlling television cameras, e.g. remote control; control of cameras comprising an electronic image sensor
    • H04N 5/23216: Control of parameters, e.g. field or angle of view of camera, via graphical user interface, e.g. touchscreen
    • H04N 5/23222: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N 5/23238: Control of image capture or reproduction to achieve a very large field of view, e.g. panorama
    • H04N 5/247: Arrangements of television cameras
    • H04N 13/00: Stereoscopic video systems; multi-view video systems; details thereof
    • H04N 13/20: Image signal generators

Abstract

In an example embodiment, a method, an apparatus and a computer program product are provided. The method includes accessing image capture parameters of a plurality of component cameras at a first time instant, where the image capture parameters for a respective component camera are determined based on a scene appearing in a field of view (FOV) of that component camera. At a second time instant, a change in appearance of one or more objects of the scene from the FOV of a first component camera to the FOV of a second component camera is determined. Upon determining the change in appearance of the one or more objects at the second time instant, image capture parameters of the second component camera are set based on the image capture parameters of the first component camera accessed at the first time instant.

Description

ADAPTIVE CONTROL OF IMAGE CAPTURE PARAMETERS IN VIRTUAL REALITY CAMERAS

TECHNICAL FIELD

[0001] Various implementations relate generally to a method, an apparatus, and a computer program product for adaptive camera control of virtual reality cameras.

BACKGROUND

[0002] Virtual reality (VR) cameras have multiple component cameras (for example, eight component cameras) that together cover the entire three-dimensional (3D) field of view around the camera, and every component camera has its own image pipeline that processes raw images from the respective component camera to obtain a quality image. The images obtained from the multiple component cameras are then 'stitched' for virtual reality consumption. The stitching is similar to panorama algorithms and is possible because a VR camera rig is designed such that the component cameras have overlapping fields of view (FOVs).
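The overlapping-FOV geometry that makes stitching possible can be sketched as angular intervals on a 360 degree circle. This is a minimal illustrative model, not the patent's implementation; the eight-camera layout and all numbers below are assumptions:

```python
# Hypothetical rig model: each component camera's horizontal FOV is an
# angular interval on a 360 degree circle; adjacent intervals overlap.

def fov_interval(center_deg, width_deg):
    """(start, end) of a camera's FOV in degrees, modulo 360."""
    half = width_deg / 2.0
    return ((center_deg - half) % 360.0, (center_deg + half) % 360.0)

def covers(interval, angle_deg):
    """True if angle_deg falls inside the angular interval."""
    start, end = interval
    a = angle_deg % 360.0
    if start <= end:
        return start <= a <= end
    return a >= start or a <= end  # interval wraps past 0 degrees

# Eight component cameras 45 degrees apart, each with a 60 degree FOV,
# giving 15 degrees of overlap with each neighbour.
cameras = [fov_interval(i * 45.0, 60.0) for i in range(8)]

# A direction near a FOV boundary is seen by two adjacent cameras; this
# shared region is what panorama-style stitching aligns on.
seen_by = [i for i, fov in enumerate(cameras) if covers(fov, 20.0)]
```

With 45 degree spacing and 60 degree FOVs, every direction near a FOV boundary is seen by two adjacent cameras, which is the overlap the stitching step relies on.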

[0003] In a scenario where a VR camera is moving, for example, worn by a hiker on a backpack or mounted on a dolly for shooting a VR movie, the component cameras keep seeing different scenes; that is, the FOVs of the component cameras keep changing. Existing algorithms for adjusting the 3A parameters (e.g., exposure, focus and white balance) of the component cameras take time to adapt and converge, as in any conventional camera. When the FOV of a conventional camera is changed suddenly, the color and exposure are initially 'off', and within a second or a few seconds the camera corrects itself. While this is tolerable in conventional video, in VR, where one is immersed in the content, image content that varies as the viewer moves his or her head may be very discomforting.
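The convergence delay described above can be illustrated with a toy auto-exposure loop. This is a hypothetical exponential-smoothing model chosen only for illustration; real 3A algorithms are considerably more sophisticated:

```python
def auto_exposure_steps(current, target, alpha=0.5, tol=0.01):
    """Count smoothing iterations until exposure is within tol of the target."""
    steps = 0
    while abs(current - target) > tol:
        current += alpha * (target - current)  # move a fraction per frame
        steps += 1
    return steps

# A camera tuned for a dim interior (long exposure, here 8 units) that
# suddenly faces a bright window (target 1 unit) needs several frames to
# settle, and every one of those frames is visibly mis-exposed.
frames = auto_exposure_steps(current=8.0, target=1.0)
```

At video frame rates even ten such iterations are a fraction of a second of visibly wrong exposure, which is tolerable on a flat screen but jarring in an immersive headset.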

[0004] In an example, imagine a room with a window, in which a camera A of the VR camera may be imaging the scene near the window and outside, while an adjacent camera, say camera B, is imaging an interior part of the room. The exposures for cameras A and B will be very different, with camera A having a very low exposure and camera B having a moderately high exposure. If cameras A and B move such that the part of the scene containing the window comes into the FOV of camera B, the video captured by camera B will be saturated and overly bright. This is because it takes some time for the camera's 3A algorithms to determine that the scene has changed and that the exposure has to be adjusted accordingly. When content like this is consumed via a VR headset in an immersive manner, it may cause discomfort and lead to a sub-par user experience.

SUMMARY OF SOME EMBODIMENTS

[0005] Various aspects of example embodiments are set out in the claims.

[0006] In a first aspect, there is provided a method comprising: accessing, at a first time instant, one or more image capture parameters of a plurality of component cameras, the one or more image capture parameters for a respective component camera of the plurality of component cameras are determined based on a scene appearing in a field of view of the respective component camera; determining, at a second time instant, if there is a change in appearance of one or more objects of the scene from a field of view of a first component camera of the plurality of component cameras to a field of view of a second component camera of the plurality of component cameras, wherein the one or more objects appear in the field of view of the first component camera at the first time instant and the one or more objects appear in the field of view of the second component camera at the second time instant; and upon determining the change in the appearance of the one or more objects at the second time instant, setting one or more image capture parameters of the second component camera based on one or more image capture parameters of the first component camera accessed at the first time instant.
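Under simplifying assumptions, the flow of the first-aspect method can be sketched as follows. The sketch assumes that object correspondence between FOVs is already available (e.g., from tracking); the function and parameter names are illustrative, not from the patent:

```python
# Sketch of the first-aspect flow. Assumes each camera reports which object
# IDs are in its FOV at each time instant (names below are hypothetical).

def adapt_parameters(params_t1, visible_t1, visible_t2):
    """Propagate capture parameters when objects move between camera FOVs.

    params_t1  : {camera_id: capture-parameter dict at the first time instant}
    visible_t1 : {camera_id: set of object IDs in its FOV at the first instant}
    visible_t2 : {camera_id: set of object IDs in its FOV at the second instant}
    Returns    : {camera_id: parameter dict to apply at the second instant}
    """
    updates = {}
    for second_cam, objs_now in visible_t2.items():
        for first_cam, objs_before in visible_t1.items():
            if first_cam == second_cam:
                continue
            # Objects that left first_cam's FOV and entered second_cam's FOV.
            moved = objs_now & (objs_before - visible_t2.get(first_cam, set()))
            if moved:
                updates[second_cam] = dict(params_t1[first_cam])
    return updates

# Camera A imaged a bright window (low exposure); by the second instant the
# window appears in camera B's FOV, so B inherits A's exposure directly.
params_t1 = {"A": {"exposure": 1.0}, "B": {"exposure": 8.0}}
visible_t1 = {"A": {"window"}, "B": {"sofa"}}
visible_t2 = {"A": set(), "B": {"window", "sofa"}}
new_params = adapt_parameters(params_t1, visible_t1, visible_t2)
```

In the example, the bright window leaves camera A's FOV and enters camera B's FOV between the two time instants, so B inherits A's low exposure immediately instead of waiting for its own 3A loop to converge.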

[0007] In a second aspect, there is provided an apparatus comprising: a virtual reality camera comprising a plurality of component cameras to capture image frames of a scene, at least one processor; and at least one memory comprising computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to at least perform: access, at a first time instant, one or more image capture parameters of a plurality of component cameras of a virtual reality camera, the one or more image capture parameters for a respective component camera of the plurality of component cameras determined based on a scene appearing in a field of view of the respective component camera; determine, at a second time instant, if there is a change in appearance of one or more objects of the scene from a field of view of a first component camera of the plurality of component cameras to a field of view of a second component camera of the plurality of component cameras, wherein the one or more objects appear in the field of view of the first component camera at the first time instant and the one or more objects appear in the field of view of the second component camera at the second time instant; and upon determining the change in the appearance of the one or more objects at the second time instant, set one or more image capture parameters of the second component camera based on one or more image capture parameters of the first component camera accessed at the first time instant.

[0008] In a third aspect, there is provided a computer program product comprising at least one computer-readable storage medium, the computer-readable storage medium comprising a set of instructions, which, when executed by one or more processors, cause an apparatus to at least perform: access, at a first time instant, one or more image capture parameters of a plurality of component cameras, the one or more image capture parameters for a respective component camera of the plurality of component cameras are determined based on a scene appearing in a field of view of the respective component camera; determine, at a second time instant, if there is a change in appearance of one or more objects of the scene from a field of view of a first component camera of the plurality of component cameras to a field of view of a second component camera of the plurality of component cameras, wherein the one or more objects appear in the field of view of the first component camera at the first time instant and the one or more objects appear in the field of view of the second component camera at the second time instant; and upon determining the change in the appearance of the one or more objects at the second time instant, set one or more image capture parameters of the second component camera based on one or more image capture parameters of the first component camera accessed at the first time instant.
[0009] In a fourth aspect, there is provided an apparatus comprising: means for accessing, at a first time instant, one or more image capture parameters of a plurality of component cameras, the one or more image capture parameters for a respective component camera of the plurality of component cameras are determined based on a scene appearing in a field of view of the respective component camera; means for determining, at a second time instant, if there is a change in appearance of one or more objects of the scene from a field of view of a first component camera of the plurality of component cameras to a field of view of a second component camera of the plurality of component cameras, wherein the one or more objects appear in the field of view of the first component camera at the first time instant and the one or more objects appear in the field of view of the second component camera at the second time instant; and means for setting, upon determining the change in the appearance of the one or more objects at the second time instant, one or more image capture parameters of the second component camera based on one or more image capture parameters of the first component camera accessed at the first time instant.

BRIEF DESCRIPTION OF THE FIGURES

[0010] Various embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:

[0011] FIGURE 1 illustrates a device, in accordance with an example embodiment;

[0012] FIGURE 2 illustrates an apparatus for adaptive control of image capture parameters in a virtual reality camera, in accordance with an example embodiment;

[0013] FIGURE 3A illustrates an example representation of an image capturing of a scene by a virtual reality camera at a first time instant, in accordance with an example embodiment, and FIGURE 3B illustrates an example representation of an image capturing of the scene by the virtual reality camera at a second time instant, in accordance with an example embodiment;

[0014] FIGURE 4A illustrates another example representation of an image capturing of a scene by a virtual reality camera at a first time instant, in accordance with an example embodiment, and FIGURE 4B illustrates another example representation of an image capturing of the scene by the virtual reality camera at a second time instant, in accordance with an example embodiment;

[0015] FIGURE 5 is a flowchart depicting an example method for adaptive control of image capture parameters in a virtual reality camera, in accordance with an example embodiment; and

[0016] FIGURE 6 is a flowchart depicting an example method for adaptive control of image capture parameters in a virtual reality camera, in accordance with another example embodiment.

DETAILED DESCRIPTION

[0017] Example embodiments and their potential effects are understood by referring to FIGURES 1 through 6 of the drawings.

[0018] FIGURE 1 illustrates a device 100, in accordance with an example embodiment. It should be understood, however, that the device 100 as illustrated and hereinafter described is merely illustrative of one type of device that may benefit from various embodiments, and therefore should not be taken to limit the scope of the embodiments. As such, it should be appreciated that at least some of the components described below in connection with the device 100 may be optional, and thus an example embodiment may include more, fewer or different components than those described in connection with the example embodiment of FIGURE 1. The device 100 could be any of a number of types of touch screen based mobile electronic devices, for example, portable digital assistants (PDAs), mobile televisions, gaming devices, cellular phones, all types of computers (for example, laptops, mobile computers or desktops), cameras including virtual reality cameras, mobile digital assistants, or any combination of the aforementioned, and other types of communications devices.

[0019] The device 100 may include an antenna 102 (or multiple antennas) in operable communication with a transmitter 104 and a receiver 106. The device 100 may further include an apparatus, such as a controller 108 or other processing devices, that provides signals to and receives signals from the transmitter 104 and the receiver 106, respectively. The signals may include signaling information in accordance with the air interface standard of the applicable cellular system, and/or may also include data corresponding to user speech, received data and/or user generated data. In this regard, the device 100 may be capable of operating with one or more air interface standards, communication protocols, modulation types and access types. By way of illustration, the device 100 may be capable of operating in accordance with any of a number of first, second, third and/or fourth-generation communication protocols or the like. For example, the device 100 may be capable of operating in accordance with second-generation (2G) wireless communication protocols such as IS-136 (time division multiple access (TDMA)), GSM (global system for mobile communication) and IS-95 (code division multiple access (CDMA)); with third-generation (3G) wireless communication protocols such as Universal Mobile Telecommunications System (UMTS), CDMA2000, wideband CDMA (WCDMA) and time division-synchronous CDMA (TD-SCDMA); with a 3.9G wireless communication protocol such as evolved universal terrestrial radio access network (E-UTRAN); with fourth-generation (4G) wireless communication protocols; or the like. As an alternative (or additionally), the device 100 may be capable of operating in accordance with non-cellular communication mechanisms.
For example, the device 100 may communicate via computer networks such as the Internet, local area networks, wide area networks, and the like; short range wireless communication networks such as Bluetooth® networks, Zigbee® networks, Institute of Electrical and Electronics Engineers (IEEE) 802.11x networks, and the like; or wireline telecommunication networks such as the public switched telephone network (PSTN).

[0020] The controller 108 may include circuitry implementing, among others, audio and logic functions of the device 100. For example, the controller 108 may include, but is not limited to, one or more digital signal processor devices, one or more microprocessor devices, one or more processor(s) with accompanying digital signal processor(s), one or more processor(s) without accompanying digital signal processor(s), one or more special-purpose computer chips, one or more field-programmable gate arrays (FPGAs), one or more controllers, one or more application-specific integrated circuits (ASICs), one or more computer(s), various analog to digital converters, digital to analog converters, and/or other support circuits. Control and signal processing functions of the device 100 are allocated between these devices according to their respective capabilities. The controller 108 thus may also include the functionality to convolutionally encode and interleave messages and data prior to modulation and transmission. The controller 108 may additionally include an internal voice coder, and may include an internal data modem. Further, the controller 108 may include functionality to operate one or more software programs, which may be stored in a memory. For example, the controller 108 may be capable of operating a connectivity program, such as a conventional web browser. The connectivity program may then allow the device 100 to transmit and receive web content, such as location-based content and/or other web page content, according to a Wireless Application Protocol (WAP), Hypertext Transfer Protocol (HTTP) and/or the like. In an example embodiment, the controller 108 may be embodied as a multi-core processor such as a dual or quad core processor. However, any number of processors may be included in the controller 108.

[0021] The device 100 may also comprise a user interface including an output device such as a ringer 110, an earphone or speaker 112, a microphone 114, a display 116, and a user input interface, which may be coupled to the controller 108. The user input interface, which allows the device 100 to receive data, may include any of a number of devices allowing the device 100 to receive data, such as a keypad 118, a touch display, a microphone or other input devices. In embodiments including the keypad 118, the keypad 118 may include numeric (0-9) and related keys (#, *), and other hard and soft keys used for operating the device 100. Alternatively or additionally, the keypad 118 may include a conventional QWERTY keypad arrangement. The keypad 118 may also include various soft keys with associated functions. In addition, or alternatively, the device 100 may include an interface device such as a joystick or other user input interface. The device 100 further includes a battery 120, such as a vibrating battery pack, for powering various circuits that are used to operate the device 100, as well as optionally providing mechanical vibration as a detectable output.

[0022] In an example embodiment, the device 100 includes a media capturing element, such as a camera, video and/or audio module, in communication with the controller 108. The media capturing element may be any means for capturing an image, video and/or audio for storage, display or transmission. In an example embodiment in which the media capturing element is a camera module 122, the camera module 122 may include a digital camera capable of forming a digital image file from a captured image. As such, the camera module 122 includes all hardware, such as a lens or other optical component(s), and software for creating a digital image file from a captured image. Alternatively, the camera module 122 may include the hardware needed to view an image, while a memory device of the device 100 stores instructions for execution by the controller 108 in the form of software to create a digital image file from a captured image. In an example embodiment, the camera module 122 may further include a processing element such as a co-processor, which assists the controller 108 in processing image data, and an encoder and/or a decoder for compressing and/or decompressing image data. The encoder and/or decoder may encode and/or decode according to a JPEG standard format or another like format. For video, the encoder and/or the decoder may employ any of a plurality of standard formats such as, for example, standards associated with H.261, H.262/MPEG-2, H.263, H.264, H.264/MPEG-4, MPEG-4, and the like. In some cases, the camera module 122 may provide live image data to the display 116. Moreover, in an example embodiment, the display 116 may be located on one side of the device 100 and the camera module 122 may include a lens positioned on the opposite side of the device 100 with respect to the display 116 to enable the camera module 122 to capture images on one side of the device 100 and present a view of such images to the user positioned on the other side of the device 100.

[0023] The device 100 may further include a user identity module (UIM) 124. The UIM 124 may be a memory device having a processor built in. The UIM 124 may include, for example, a subscriber identity module (SIM), a universal integrated circuit card (UICC), a universal subscriber identity module (USIM), a removable user identity module (R-UIM), or any other smart card. The UIM 124 typically stores information elements related to a mobile subscriber. In addition to the UIM 124, the device 100 may be equipped with memory. For example, the device 100 may include a volatile memory 126, such as volatile random access memory (RAM) including a cache area for the temporary storage of data. The device 100 may also include other non-volatile memory 128, which may be embedded and/or may be removable. The non-volatile memory 128 may additionally or alternatively comprise an electrically erasable programmable read only memory (EEPROM), flash memory, hard drive, or the like. The memories may store any number of pieces of information and data used by the device 100 to implement the functions of the device 100.

[0024] FIGURE 2 illustrates an apparatus 200 for adaptive camera control of virtual reality cameras, in accordance with an example embodiment. The apparatus 200 may be employed, for example, in the device 100 of FIGURE 1. However, it should be noted that the apparatus 200 may also be employed on a variety of other devices, both mobile and fixed, and therefore embodiments should not be limited to application on devices such as the device 100 of FIGURE 1. In an example embodiment, the apparatus 200 may be a virtual reality camera that includes multiple component cameras for capturing a 360 degree view of a scene. Alternatively, embodiments may be employed on a combination of devices including, for example, those listed above.
Accordingly, various embodiments may be embodied wholly in a single device, for example, the device 100, or in a combination of devices. Furthermore, it should be noted that the devices or elements described below may not be mandatory, and thus some may be omitted in certain embodiments.

[0025] Herein, the term 'virtual reality camera' refers to any camera system that comprises a plurality of component cameras configured with respect to each other such that the plurality of component cameras are used to capture 360 degree views of the surroundings. Hence, references to the term 'virtual reality camera' throughout the description should be construed as any camera system that has multiple cameras for capturing a 360 degree view of the surroundings. The plurality of component cameras may have overlapping fields of view, such that the images (or image frames) captured by the plurality of component cameras may be stitched to generate a 360 degree view of the surroundings. Examples of virtual reality cameras include a camera system comprising multiple component cameras that can be worn by a hiker on a backpack, mounted on a dolly for shooting a VR movie, mounted on a mobile van, or configured in the form of other wearable devices. Additional examples of virtual reality cameras include surveillance cameras that may be fixed to stationary objects, or may be positioned in indoor spaces such as shopping malls, convention centers, etc.

[0026] The apparatus 200 includes, or otherwise is in communication with, at least one processor 202 and at least one memory 204. Examples of the at least one memory 204 include, but are not limited to, volatile and/or non-volatile memories. Some examples of the volatile memory include, but are not limited to, random access memory, dynamic random access memory, static random access memory, and the like.
Some examples of the non-volatile memory include, but are not limited to, hard disks, magnetic tapes, optical disks, programmable read only memory, erasable programmable read only memory, electrically erasable programmable read only memory, flash memory, and the like. The memory 204 may be configured to store information, data, applications, instructions or the like for enabling the apparatus 200 to carry out various functions in accordance with various example embodiments. For example, the memory 204 may be configured to buffer input data comprising media content for processing by the processor 202. Additionally or alternatively, the memory 204 may be configured to store instructions for execution by the processor 202.

[0027] An example of the processor 202 may include the controller 108. The processor 202 may be embodied in a number of different ways: as a multi-core processor, a single-core processor, or a combination of multi-core and single-core processors. For example, the processor 202 may be embodied as one or more of various processing means such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), processing circuitry with or without an accompanying DSP, or various other processing devices including integrated circuits such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. In an example embodiment, the processor 202 may be configured to execute instructions stored in the memory 204 or otherwise accessible to the processor 202. Alternatively or additionally, the processor 202 may be configured to execute hard coded functionality. As such, whether configured by hardware or software methods, or by a combination thereof, the processor 202 may represent an entity, for example, physically embodied in circuitry, capable of performing operations according to various embodiments while configured accordingly. For example, if the processor 202 is embodied as two or more of an ASIC, FPGA or the like, the processor 202 may be specifically configured hardware for conducting the operations described herein. Alternatively, as another example, if the processor 202 is embodied as an executor of software instructions, the instructions may specifically configure the processor 202 to perform the algorithms and/or operations described herein when the instructions are executed.
However, in some cases, the processor 202 may be a processor of a specific device, for example, a mobile terminal or network device adapted for employing embodiments by further configuration of the processor 202 by instructions for performing the algorithms and/or operations described herein. The processor 202 may include, among other things, a clock, an arithmetic logic unit (ALU) and logic gates configured to support operation of the processor 202.

[0028] A user interface 206 may be in communication with the processor 202. Examples of the user interface 206 include, but are not limited to, an input interface and/or an output interface. The input interface is configured to receive an indication of a user input. The output interface provides an audible, visual, mechanical or other output and/or feedback to the user. Examples of the input interface may include, but are not limited to, a keyboard, a mouse, a joystick, a keypad, a touch screen, soft keys, and the like. Examples of the output interface may include, but are not limited to, a display such as a light emitting diode display, a thin-film transistor (TFT) display, a liquid crystal display, or an active-matrix organic light-emitting diode (AMOLED) display, a microphone, a speaker, ringers, vibrators, and the like. In an example embodiment, the user interface 206 may include, among other devices or elements, any or all of a speaker, a microphone, a display, and a keyboard, touch screen, or the like. In this regard, for example, the processor 202 may comprise user interface circuitry configured to control at least some functions of one or more elements of the user interface 206, such as, for example, a speaker, ringer, microphone, display, and/or the like. The processor 202 and/or user interface circuitry comprising the processor 202 may be configured to control one or more functions of one or more elements of the user interface 206 through computer program instructions, for example, software and/or firmware, stored on a memory, for example, the at least one memory 204, and/or the like, accessible to the processor 202.

[0029] In an example embodiment, the apparatus 200 may include an electronic device. Some examples of the electronic device include a virtual reality camera or a surveillance camera with or without communication capabilities, and the like. In an example embodiment, the electronic device may include a user interface, for example, the user interface 206, having user interface circuitry and user interface software configured to facilitate a user to control at least one function of the electronic device through use of a display and further configured to respond to user inputs. In an example embodiment, the electronic device may include a display circuitry configured to display at least a portion of the user interface 206 of the electronic device. The display and display circuitry may be configured to facilitate the user to control at least one function of the electronic device.

[0030] In an example embodiment, the electronic device may be embodied as to include a transceiver. The transceiver may be any device operating or circuitry operating in accordance with software or otherwise embodied in hardware or a combination of hardware and software. For example, the processor 202 operating under software control, or the processor 202 embodied as an ASIC or FPGA specifically configured to perform the operations described herein, or a combination thereof, thereby configures the apparatus 200 or circuitry to perform the functions of the transceiver. The transceiver may be configured to receive media content. Examples of the media content may include audio content, video content, data, and a combination thereof.

[0031] In an example embodiment, the electronic device may be embodied as to include a virtual reality (VR) camera 208. In an example embodiment, the VR camera 208 includes multiple component cameras (e.g., component cameras 210, 212, 214 and 216) that are positioned with respect to each other such that they have overlapping fields of view, and a 360 degree 3-D view of the scene surrounding the VR camera 208 can be obtained based on the images/image frames captured individually by the component cameras 210, 212, 214 and 216 of the VR camera 208. Only four component cameras 210, 212, 214 and 216 are shown for example purposes to facilitate the present description, and it should be understood that more than four component cameras may be present in the VR camera 208. The VR camera 208 may be in communication with the processor 202 and/or other components of the apparatus 200. The VR camera 208 may be in communication with other imaging circuitries and/or software, and is configured to capture digital images or to capture video or other graphic media. In an example embodiment, the VR camera 208 may be an array camera or a plenoptic camera capable of capturing light-field images (having multiple views of the same scene), from which various views of images of the scene can be generated. The VR camera 208, and other circuitries, in combination, may be examples of at least one component camera such as the camera module 122 of the device 100.

[0032] These components (202-208) may communicate with each other via a centralized circuit system 218 to facilitate adaptive control of the image capture parameters of the component cameras, for example, the component cameras 210, 212, 214 and 216 of the VR camera 208. The centralized circuit system 218 may be various devices configured to, among other things, provide or enable communication between the components (202-208) of the apparatus 200. In certain embodiments, the centralized circuit system 218 may be a central printed circuit board (PCB) such as a motherboard, a main board, a system board, or a logic board. The centralized circuit system 218 may also, or alternatively, include other printed circuit assemblies (PCAs) or communication channel media.

[0033] In an example embodiment, the processor 202 is configured to, with the content of the memory 204, and optionally with other components described herein, cause the apparatus 200 to access one or more image capture parameters for each of the component cameras 210, 212, 214 and 216 of the VR camera 208. In an example embodiment, the one or more image capture parameters include the 3A parameters, for example, exposure, focus and white balance. In an example embodiment, a processing means may be configured to access the one or more image capture parameters. A non-limiting example of the processing means may include the processor 202, which may be an example of the controller 108, and the memory 204.

[0034] For example, at a first time instant (e.g., time 't1'), one or more image capture parameters (P1(t1), P2(t1), P3(t1)) for each of a plurality of component cameras (C1, C2, C3 and C4) of a VR camera (such as the component cameras 210, 212, 214 and 216 of the VR camera 208) are accessed. Without loss of generality, in an example, the image capture parameters (P1(t1), P2(t1), P3(t1)) may represent the exposure setting, the focus setting and the white balance setting, respectively, for any component camera at the time instant 't1'. In an example embodiment, the values of the image capture parameters, for example P1(t1), are set as per the field of view (the part of the scene before the component camera that can be imaged by it) of the component camera. For example, a component camera imaging a bright part of the scene will use a lower exposure than another component camera imaging a darker part of the scene. For instance, in an example it is assumed that at the time 't1', the component camera C1 is imaging the scene near a window from which the Sun outside is visible, while an adjacent component camera, say, the component camera C2, is imaging an interior part of the room. In this example, the exposures for the component cameras C1 and C2 will be very different, with C1 having a very low exposure and C2 having a moderately high exposure. If 't1' is considered the time instant when the VR camera (a combination of the component cameras C1, C2, C3 and C4) is initialized to take images/image frames of a scene, then within a few frames from the time 't1', the component cameras C1, C2, C3 and C4 converge to their optimal settings for the image capture parameters. For example, the exposures of all the component cameras C1, C2, C3 and C4 are set.
[0035] In an example embodiment, the values of the image capture parameters (P1(t1), P2(t1), P3(t1)) are accessed for each of the component cameras (C1, C2, C3 and C4), and as these values are computed as optimal values based on the content of the scene, these values are stored. For instance, (P1(c1,t1), P2(c1,t1), P3(c1,t1)) are the parameters for the component camera C1 at the time instant 't1'; (P1(c2,t1), P2(c2,t1), P3(c2,t1)) are the parameters for the component camera C2 at the time instant 't1'; and (P1(c3,t1), P2(c3,t1), P3(c3,t1)) are the parameters for the component camera C3 at the time instant 't1'.
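By way of a purely illustrative sketch (not part of the claimed subject matter), the storage of the converged per-camera parameter values described in paragraph [0035] could be organized as follows; the class, the parameter names and the numeric values are assumptions made only for illustration:

```python
class VRCameraState:
    """Keeps the most recently converged 3A parameters for each component camera."""

    def __init__(self, camera_ids):
        self.camera_ids = list(camera_ids)
        # history[(camera_id, t)] -> dict of 3A parameters at time instant t
        self.history = {}

    def store(self, camera_id, t, exposure, focus, white_balance):
        # P1 = exposure, P2 = focus, P3 = white balance, as in the description
        self.history[(camera_id, t)] = {
            "exposure": exposure,
            "focus": focus,
            "white_balance": white_balance,
        }

    def get(self, camera_id, t):
        return self.history[(camera_id, t)]


state = VRCameraState(["C1", "C2", "C3", "C4"])
# C1 images the bright window scene -> very low exposure;
# C2 images the darker room interior -> moderately high exposure.
state.store("C1", "t1", exposure=0.001, focus=3.0, white_balance=6500)
state.store("C2", "t1", exposure=0.016, focus=1.5, white_balance=3200)
```

Storing the converged values keyed by (camera, time instant) is what later allows them to be transferred rather than re-computed.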

[0036] In an example embodiment, the apparatus 200 is caused to determine, at a second time instant (e.g., time 't2'), whether there is a change in appearance of one or more objects from the field of view of one component camera to another component camera of the plurality of component cameras due to a relative movement between the virtual reality (VR) camera and the one or more objects of the scene. For instance, one or more objects that appear in the field of view of a first component camera at the first time instant (time 't1') may appear in the field of view (FOV) of a second component camera at the second time instant (time 't2') due to a relative movement between the VR camera and the one or more objects of the scene, or due to a relative movement between the VR camera and the entire scene. In an example, consider that an object O1 appears in the FOV of the first component camera (e.g., camera C2) at the time 't1', and that at the time 't2', due to a relative movement between the VR camera and the scene, the object O1 now appears in the FOV of the second component camera (e.g., camera C3). Similarly, in an example, consider that an object O2 appears in the FOV of the first component camera (e.g., camera C3) at the time 't1', and that at the time 't2', due to a relative movement between the VR camera and the scene, the object O2 now appears in the FOV of the second component camera (e.g., camera C1).

[0037] Herein, it should be understood that the terms 'first component camera' and 'second component camera' are used merely to distinguish between two separate component cameras of the plurality of component cameras present in the VR camera, and that the 'first component camera' refers to the component camera from the FOV of which the one or more objects move to appear in the FOV of another component camera, referred to as the 'second component camera'. It should further be noted that the 'relative movement' between the VR camera and the one or more objects of the scene, or the entire scene, may happen either because of movement of the VR camera, movement of the person/object carrying the VR camera, movement of the entire scene, movement of the one or more objects in the scene, or one or more combinations of these movements. The movement of the objects may be determined by tracking the objects using suitable object detection and tracking algorithms. Further, the movement of the VR camera may be tracked based on an accelerometer reading associated with the VR camera. For instance, the VR camera may include components such as an accelerometer, a gyroscope, etc., to continuously compute the degree of rotation, or changes in the orientation, of the VR camera between any two time instants.
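As a purely illustrative sketch of the determination described in paragraph [0037] (and not part of the claimed subject matter), one could map a tracked object's world azimuth to the sector covered by each component camera, using the rig rotation reported by the gyroscope between the two time instants; the even 360/N sector model and all names are assumptions:

```python
def camera_for_azimuth(azimuth_deg, n_cameras=4, rig_rotation_deg=0.0):
    """Return the index of the component camera whose angular sector contains
    the given world azimuth, assuming n_cameras evenly dividing 360 degrees
    and the rig rotated by rig_rotation_deg since initialization."""
    sector = 360.0 / n_cameras
    # Express the object's bearing in the rig's (possibly rotated) frame.
    relative = (azimuth_deg - rig_rotation_deg) % 360.0
    return int(relative // sector)


# At t1 the rig is unrotated; an object at azimuth 100 degrees lies in
# the sector of component camera index 1.
cam_at_t1 = camera_for_azimuth(100.0)
# Between t1 and t2 the gyroscope reports a 90 degree rotation of the rig,
# so the same (static) object now appears in a different camera's FOV.
cam_at_t2 = camera_for_azimuth(100.0, rig_rotation_deg=90.0)
```

When `cam_at_t1 != cam_at_t2`, the object has changed its appearance from the FOV of one component camera to that of another, which is the trigger for transferring the image capture parameters.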

[0038] Upon determination of the change in the appearance of the at least one object at the second time instant (t2), in an example embodiment, the apparatus 200 is caused to set one or more image capture parameters of the second component camera based on the one or more image capture parameters of the first component camera that were already accessed at the first time instant (t1). For example, if a change in appearance of the object O1 from the FOV of the first component camera (e.g., the component camera C2) at the time 't1' to the FOV of the second component camera (e.g., the component camera C3) at the time 't2' is determined, the one or more image capture parameters of the component camera C3 are set to be the same as the one or more image capture parameters of the component camera C2 at the time 't1'. For instance, in a representation, (P1(c1,t2), P2(c1,t2), P3(c1,t2)) are the image capture parameters for the component camera C1 at the time instant 't2'; (P1(c2,t2), P2(c2,t2), P3(c2,t2)) are the parameters for the component camera C2 at the time instant 't2'; and (P1(c3,t2), P2(c3,t2), P3(c3,t2)) are the parameters for the component camera C3 at the time instant 't2'. In this example, the values of (P1(c3,t2), P2(c3,t2), P3(c3,t2)) may be the same as the respective values of the image capture parameters of the component camera C2 at the time 't1', that is, (P1(c2,t1), P2(c2,t1), P3(c2,t1)). In an example embodiment, a processing means may be configured to set the one or more image capture parameters for the second component camera. A non-limiting example of the processing means may include the processor 202, which may be an example of the controller 108.
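The parameter transfer of paragraph [0038] can be sketched in a few lines, again purely for illustration (function names, dictionary layout and values are assumptions, not the patent's implementation):

```python
def transfer_parameters(params_t1, handoffs):
    """params_t1: {camera_id: dict of 3A parameters at t1}.
    handoffs: {second_camera: first_camera}, i.e. for each camera into whose
    FOV a tracked object moved, the camera whose FOV it came from.
    Returns the parameter settings at t2: cameras with an incoming object
    reuse the first camera's converged t1 settings; all others keep their own."""
    params_t2 = {cam: dict(p) for cam, p in params_t1.items()}
    for second, first in handoffs.items():
        # Reuse the already converged settings instead of re-computing them.
        params_t2[second] = dict(params_t1[first])
    return params_t2


t1_params = {
    "C2": {"exposure": 0.016, "focus": 1.5, "white_balance": 3200},
    "C3": {"exposure": 0.008, "focus": 2.0, "white_balance": 5000},
}
# Object O1 moved from C2's FOV into C3's FOV between t1 and t2.
t2_params = transfer_parameters(t1_params, {"C3": "C2"})
```

Here `t2_params["C3"]` equals `t1_params["C2"]`, matching the (P1(c3,t2), P2(c3,t2), P3(c3,t2)) = (P1(c2,t1), P2(c2,t1), P3(c2,t1)) relation in the example above.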

[0039] FIGURES 3A and 3B are schematic representations of a VR camera 302 capturing images of a scene, for facilitating the description of an example embodiment. In FIGURE 3A, a position of the VR camera 302 at the time instant 't1' is shown. It is to be understood that a relative movement between the VR camera 302 and the scene (or one or more objects of the scene) may be caused either by a movement of a vehicle 304 on which the VR camera 302 is mounted, by a rotation of the VR camera 302 about its axis vertical to the vehicle 304, or by a movement of one or more objects of the scene, for example, a movement of a person 308 or a vehicle 310 or any other movable objects in the scene.

[0040] In this example representation of FIGURE 3A, without loss of generality, the VR camera 302 includes component cameras C1, C2, C3 and C4. At the time instant 't1', a scene portion containing the Sun 320 appears in the FOV of the component camera C1, a scene portion containing a water body 322 appears in the FOV of the component camera C2, a scene portion containing the person 308 appears in the FOV of the component camera C3, and a scene portion containing the vehicle 310 appears in the FOV of the component camera C4. The FOV of the component camera C1 is exemplarily shown within two dashed lines 352 and 354, the FOV of the component camera C2 within two dashed lines 356 and 358, the FOV of the component camera C3 within two dashed lines 360 and 362, and the FOV of the component camera C4 within two dashed lines 364 and 366. It should be noted that the component cameras C1-C4 may have overlapping FOVs; for example, the area shown between the dashed lines 352 and 356 represents an overlapping FOV between the component cameras C1 and C2, and similarly the area shown between the dashed lines 358 and 360 represents an overlapping FOV between the component cameras C1 and C3. In an example embodiment, the component cameras C1, C2, C3 and C4 adjust their image capture parameters (exposure, focus and white balance) within a few frames from the time when the VR camera 302 is initialized, based on the content of the scene. The following Table 1 lists the image capture parameter values (optimal values) for the component cameras C1, C2, C3 and C4 at the time instant 't1'.

Table 1

[Table 1, listing the image capture parameter values for the component cameras C1-C4 at the time instant 't1', appears as an image (imgf000016_0001) in the original publication.]

[0041] In one scenario, it may be assumed that the vehicle 304 moves such that the orientation of the VR camera 302 at the time instant 't2' changes with respect to the orientation of the VR camera 302 at the time instant 't1'. For instance, as shown in the example representation of FIGURE 3B, at the time instant 't2', the scene portion containing the Sun 320 appears in the FOV of the component camera C2, the scene portion containing the water body 322 appears in the FOV of the component camera C3, the scene portion containing the person 308 appears in the FOV of the component camera C4, and the scene portion containing the vehicle 310 appears in the FOV of the component camera C1.

[0042] In various example embodiments, the apparatus 200 is caused to set the image capture parameters at the time instant 't2' based on the previous settings instead of re-calculating the settings for the image capture parameters. For instance, the Sun 320 appeared in the FOV of the component camera C1 at the time instant 't1', and due to the relative movement between the VR camera 302 and the scene, the Sun 320 appears in the FOV of the component camera C2 at the time instant 't2'. Hence, the image capture parameters for the component camera C2 at the time instant 't2' can be taken directly from the corresponding values of the image capture parameters for the component camera C1 at the time instant 't1'. In an example embodiment, in the present example scenario, the updated values of the image capture parameters for the component cameras C1, C2, C3 and C4 at the time instant 't2' are listed in the following Table 2.

Table 2

[Table 2, listing the updated image capture parameter values for the component cameras C1-C4 at the time instant 't2', appears as an image (imgf000017_0001) in the original publication.]
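For the pure-rotation scenario of FIGURES 3A and 3B, the Table 2 update amounts to a cyclic shift: each component camera at 't2' takes the 't1' parameters of the camera that previously imaged its new scene portion. A purely illustrative sketch (names, ordering convention and values are assumptions, not the patent's):

```python
def shift_parameters_after_rotation(params_t1, order):
    """For a rig rotation that moves each scene portion from one component
    camera to the next camera in `order`, each camera at t2 takes the t1
    parameters of its predecessor in the cycle."""
    shifted = {}
    for i, cam in enumerate(order):
        predecessor = order[i - 1]  # wraps around: order[-1] precedes order[0]
        shifted[cam] = dict(params_t1[predecessor])
    return shifted


t1_params = {
    "C1": {"exposure": 0.001},  # the Sun 320: very low exposure
    "C2": {"exposure": 0.016},  # the water body 322
    "C3": {"exposure": 0.008},  # the person 308
    "C4": {"exposure": 0.004},  # the vehicle 310
}
t2_params = shift_parameters_after_rotation(t1_params, ["C1", "C2", "C3", "C4"])
```

After the shift, C2 inherits C1's low-exposure Sun settings and C1 inherits C4's vehicle settings, consistent with the FIGURE 3B arrangement.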

[0043] FIGURES 4A and 4B represent another schematic representation of a VR camera capturing images of a scene, for facilitating the description of an example embodiment. The representation of the scene in FIGURE 4A is at a time instant 't1', and the representation of the scene in FIGURE 4B is at a time instant 't2', where the time instant 't2' is subsequent to the time instant 't1'. A VR camera 402 comprising component cameras C1, C2 and C3 is shown in FIGURES 4A and 4B. The scene comprises a room having an exterior window 410 with glass panels, a spotlight 412 and a performer (e.g., a dancer) 414 near a wall 416.

[0044] As shown in FIGURE 4A, at the time instant 't1', the exterior window 410, having an outside view of daylight, appears in the FOV of the component camera C1, the spotlight 412 appears in the FOV of the component camera C2, and the performer 414 (near the wall 416) appears in the FOV of the component camera C3. The FOV of the component camera C1 is exemplarily shown within two dashed lines 452 and 454, the FOV of the component camera C2 within two dashed lines 456 and 458, and the FOV of the component camera C3 within two dashed lines 460 and 462. In a scenario, the time instant 't1' being considered a time when the VR camera 402 is initialized, the image capture parameters of the component cameras C1, C2 and C3 are set by the processor 202 based on the content of the scene to be captured by the component cameras C1, C2 and C3 (e.g., the objects present in the FOVs of the component cameras C1, C2 and C3). Example values of some example image capture parameters set for the component cameras C1, C2 and C3 are listed in the following Table 3. It is to be understood that the image capture parameters in Table 3 are merely for illustrative purposes, and the corresponding values are not intended to be accurate.

Table 3

[Table 3, listing example image capture parameter values for the component cameras C1-C3 at the time instant 't1', appears as an image (imgf000018_0001) in the original publication.]

[0045] In a scenario, it is assumed that the performer 414 moves such that the performer 414 appears in the FOV of the component camera C1 at the time instant 't2' instead of in the FOV of the component camera C3. Accordingly, in this example scenario, both objects, namely the window 410 and the performer 414, appear in the FOV of the component camera C1, and only the wall 416 appears in the FOV of the component camera C3. In this example scenario, if the performer 414 is considered the object of interest, the settings of the image capture parameters of the component camera C3 at the time instant 't1' are transferred as the settings of the image capture parameters of the component camera C1 at the time instant 't2', and the image capture parameters for the other component cameras C2 and C3 may remain unchanged. In this example scenario, the updated values of the image capture parameters for the component cameras C1, C2 and C3 at the time instant 't2' are listed in the following Table 4.

Table 4

[Table 4, listing the updated image capture parameter values for the component cameras C1-C3 at the time instant 't2', appears as an image (imgf000018_0002) in the original publication.]
[0046] Accordingly, it is noted that the objects are tracked across the FOVs of the component cameras C1, C2 and C3 so that the image capture parameters (3A parameters) can be intelligently transferred, and the end user perceives an even tone throughout the imaging process by the component cameras C1, C2 and C3 of the VR camera 402.

[0047] It is further noted that in some scenarios, the views of the scene for a component camera may change significantly, such that the same views have not been imaged previously. In an example embodiment, in such scenarios, the one or more image capture parameters may be computed as per the actual content of the scene appearing in the FOV of the component camera. Further, in some scenarios, an object that was at a distance from the component camera may move close to the component camera, thus drastically changing what the component camera sees. The apparatus 200 is caused to determine such scenarios by tracking the objects, and the apparatus 200 is caused to determine new image capture parameters (e.g., new 3A parameters). In an example embodiment, the local motion of objects/entities in the scene for each component camera can be tracked to see if the objects/entities are moving towards the component camera. Upon occurrence of such a scenario, in an example embodiment, the 3A parameters can be computed and tracked on the entities/objects, which means that even as the objects/entities near the camera, the component camera keeps adapting gradually. In an example, if the component camera is using a matrix metering mode, with different regions having different weights, it could adapt the weights such that the entities/objects moving towards the camera get the highest weight, so that the entities/objects are rendered well in terms of lightness and color. Alternately, upon sensing that an object/entity is moving towards the camera, spot metering could be used on the entity/object.

[0048] FIGURE 5 is a flowchart depicting an example method 500 for adaptive control of image capture parameters in a virtual reality camera, in accordance with an example embodiment. The method 500 depicted in the flowchart may be executed by, for example, the apparatus 200 of FIGURE 2.
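The matrix-metering weight adaptation mentioned in paragraph [0047] might be sketched as follows, purely for illustration (the region names, the approach-score scale and the boost factor are all assumptions, not part of the claimed subject matter):

```python
def metering_weights(regions, approach_scores, boost=4.0):
    """Adapt matrix-metering weights so that regions containing an object
    moving towards the camera are weighted more heavily, keeping the
    approaching object well rendered in terms of lightness and color.
    approach_scores[region] is assumed to lie in [0, 1]:
    0 = static object, 1 = approaching fast."""
    raw = {}
    for region in regions:
        raw[region] = 1.0 + boost * approach_scores.get(region, 0.0)
    total = sum(raw.values())
    # Return normalized weights so they sum to 1.
    return {region: w / total for region, w in raw.items()}


regions = ["top", "centre", "bottom"]
# An object tracked in the centre region is moving towards the camera.
weights = metering_weights(regions, {"centre": 1.0})
```

As the tracked object nears the camera and its approach score rises, the centre region's weight grows gradually, which is the gradual adaptation described above; spot metering would be the limiting case of putting almost all weight on the object's region.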
[0049] At 505, the method 500 includes accessing, at a first time instant, one or more image capture parameters of a plurality of component cameras of a virtual reality camera. More specifically, the one or more image capture parameters for a respective component camera of the plurality of component cameras are determined based on a scene appearing in a field of view of the respective component camera. In an example embodiment, the one or more image capture parameters are associated with acquisition of views (or images) of a scene. Some non-limiting examples of the image capture parameters include exposure, focus and white balance. In an example embodiment, the image capture parameters accessed at the first time instant are optimal image capture parameters that are set based on the content of the scene that appear in the FOV of the component cameras.

[0050] At 510, the method 500 includes determining, at a second time instant, if there is a change in appearance of one or more objects of the scene from a field of view of a first component camera of the plurality of component cameras to a FOV of a second component camera of the plurality of component cameras. It is to be noted that the one or more objects appear in the FOV of the first component camera at the first time instant and the one or more objects appear in the FOV of the second component camera at the second time instant.

[0051] At 515, the method 500 includes, upon determining the change in the appearance of the one or more objects at the second time instant, setting one or more image capture parameters of the second component camera based on one or more image capture parameters of the first component camera that are already accessed at the first time instant.

[0052] FIGURE 6 is a flowchart depicting an example method 600 for adaptive control of image capture parameters, in accordance with another example embodiment. The method 600 depicted in the flowchart may be executed by, for example, the apparatus 200 of FIGURE 2.

[0053] At 605, the VR camera is initialized to take images (or views) of the scene, and each of the component cameras of the VR camera is ready to capture image frames. At 610, it may be assumed that at time t = 't1' (i.e., the first time instant), the image capture parameters of the component cameras are determined (or adjusted/set) according to the content of the scene appearing in the FOVs of the respective component cameras. For example, if a bright scene or bright objects appear in the FOV of a component camera, the exposure value for that component camera can be adjusted to a smaller value, and if a slightly darker scene or less bright objects appear in the FOV of a component camera, the exposure value for that component camera can be set to a slightly higher value. The image capture parameters are set by a processing element, for example, the processor 202. At 615, the one or more image capture parameters of the plurality of component cameras are stored in a memory location or buffer, for example, the memory 204.

[0054] At 620, it is determined whether there is a change in appearance of the one or more objects from the FOV of one component camera (e.g., the first component camera) to the FOV of another component camera (e.g., the second component camera). It is noted that the change in appearance of the one or more objects from the FOV of one camera to that of another camera over time may occur due to a relative movement between the VR camera and the one or more objects of the scene, or due to a movement of the entire scene with respect to the VR camera.

[0055] At 625, at time t = 't2', upon determination of the change in appearance of the one or more objects, the one or more image capture parameters of the other component camera (the second component camera) are set based on the one or more image capture parameters of the component camera (the first component camera) in whose FOV the one or more objects already appeared at the time t = 't1'. At 630, the image frame of the scene is captured by the plurality of component cameras based on the image capture parameters of the respective component cameras set at the time t = 't2'. It is noted that in an example embodiment, changes in appearance of objects from the FOV of one component camera to another are determined on a continuous basis, and upon such determination, the already computed values of the image capture parameters of the component camera are reused, to preclude re-computation of the image capture parameters for the component camera in which the objects appear.

[0056] It should be noted that, to facilitate the discussion of the flowcharts of FIGURES 5 and 6, certain operations are described herein as constituting distinct steps performed in a certain order. Such implementations are examples only and are non-limiting in scope. Certain operations may be grouped together and performed in a single operation, and certain operations can be performed in an order that differs from the order employed in the examples set forth herein. Moreover, certain operations of the methods 500 and 600 are performed in an automated fashion. These operations involve substantially no interaction with the user. Other operations of the methods 500 and 600 may be performed in a manual or semi-automatic fashion. These operations involve interaction with the user via one or more user interface presentations.

[0057] The methods depicted in these flowcharts may be executed by, for example, the apparatus 200 of FIGURE 2. Operations of the flowchart, and combinations of operations in the flowcharts, may be implemented by various means, such as hardware, firmware, a processor, circuitry and/or another device associated with the execution of software including one or more computer program instructions. For example, one or more of the procedures described in various embodiments may be embodied by computer program instructions. In an example embodiment, the computer program instructions, which embody the procedures described in various embodiments, may be stored by at least one memory device of an apparatus and executed by at least one processor in the apparatus. Any such computer program instructions may be loaded onto a computer or other programmable apparatus (for example, hardware) to produce a machine, such that the resulting computer or other programmable apparatus embodies means for implementing the operations specified in the flowchart.
These computer program instructions may also be stored in a computer-readable storage memory (as opposed to a transmission medium such as a carrier wave or electromagnetic signal) that may direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture, the execution of which implements the operations specified in the flowchart. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operations to be performed on the computer or other programmable apparatus so as to produce a computer-implemented process, such that the instructions, which execute on the computer or other programmable apparatus, provide operations for implementing the operations in the flowchart. The operations of the methods are described with the help of the apparatus 200. However, the operations of the methods can be described and/or practiced by using any other apparatus.
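As one further non-limiting illustration, the overall control flow of the methods 500 and 600 (initialize and store parameters, continuously detect FOV handoffs, transfer parameters, capture) might be sketched as follows; all callables are assumed interfaces, not the patent's implementation:

```python
def run_capture_loop(cameras, compute_3a, detect_handoffs, capture, frames=10):
    """compute_3a(camera_id) -> dict of 3A parameters (steps 605-615).
    detect_handoffs() -> {second_camera: first_camera} for objects whose
    appearance changed from one camera's FOV to another's (step 620).
    capture(params) captures a frame with the given per-camera settings."""
    # Steps 605-615: converge and store per-camera parameters at t1.
    params = {cam: compute_3a(cam) for cam in cameras}
    for _ in range(frames):
        # Step 625: reuse already-computed settings instead of re-computing.
        for second, first in detect_handoffs().items():
            params[second] = dict(params[first])
        # Step 630: capture with the settings set at this time instant.
        capture(dict(params))
    return params
```

A camera never re-runs its 3A convergence for an object that some component camera has already imaged; only genuinely new scene content (per paragraph [0047]) would require a fresh `compute_3a` call.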

[0058] Without in any way limiting the scope, interpretation, or application of the claims appearing below, a technical effect of one or more of the example embodiments disclosed herein is to improve the adaptive control of image capture parameters of component cameras in a virtual reality camera. Various example embodiments make use of the already computed values of the image capture parameters for the component cameras to adaptively set the image capture parameters for the component cameras over time, based on changes in the scene or changes in the orientation or positioning of the virtual reality camera. Hence, instead of re-computing the image capture parameters all the time, various embodiments described herein transfer the image capture parameters from one component camera to another component camera, following objects or regions of interest across the FOVs of the component cameras as the component cameras or the scene move.

[0059] Various embodiments described above may be implemented in software, hardware, application logic or a combination of software, hardware and application logic. The software, application logic and/or hardware may reside on at least one memory, at least one processor, an apparatus or a computer program product. In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a "computer-readable medium" may be any media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, with one example of an apparatus described and depicted in FIGURES 1 and/or 2. A computer-readable medium may comprise a computer-readable storage medium that may be any media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.

[0060] If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined.

[0061] Although various aspects of the embodiments are set out in the independent claims, other aspects comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.

[0062] It is also noted herein that while the above describes example embodiments of the invention, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications which may be made without departing from the scope of the present disclosure as defined in the appended claims.

Claims

What is claimed is:
1. A method comprising:
accessing, at a first time instant, one or more image capture parameters of a plurality of component cameras, wherein one or more image capture parameters for a respective component camera of the plurality of component cameras are determined based on a scene appearing in a field of view of the respective component camera;
determining, at a second time instant, if there is a change in appearance of one or more objects of the scene from a field of view of a first component camera of the plurality of component cameras to a field of view of a second component camera of the plurality of component cameras, wherein the one or more objects appear in the field of view of the first component camera at the first time instant and the one or more objects appear in the field of view of the second component camera at the second time instant; and
upon determining the change in the appearance of the one or more objects at the second time instant, setting one or more image capture parameters of the second component camera based on one or more image capture parameters of the first component camera accessed at the first time instant.
2. The method as claimed in claim 1, further comprising capturing, at the second time instant, an image frame of the scene by the second component camera based on the one or more image capture parameters that are set for the second component camera at the second time instant.
3. The method as claimed in claim 2, wherein the determining the change in appearance of the one or more objects at the second time instant comprises tracking the one or more objects.
4. The method as claimed in claims 1 or 3, wherein the determining the change in appearance of the one or more objects at the second time instant comprises tracking a movement of a virtual reality camera comprising the plurality of component cameras.
5. The method as claimed in claim 4, wherein the tracking the movement of the virtual reality camera is based on an accelerometer reading associated with the virtual reality camera.
6. The method as claimed in claim 1, wherein setting the one or more image capture parameters for the second component camera further comprises computing the one or more image capture parameters for the second component camera if the one or more objects appearing in the field of view of the second component camera at the second time instant have not appeared in each of the plurality of component cameras at the first time instant.
7. The method as claimed in claim 1, wherein the one or more objects are one or more objects of interest.
8. The method as claimed in any of claims 1 to 7 further comprising configuring the plurality of component cameras as a virtual reality camera to capture a 360 degree view of the scene.
9. The method as claimed in any of the claims 1 to 8, wherein the one or more image capture parameters comprise a white balance.
10. The method as claimed in any of the claims 1 to 9, wherein the one or more image capture parameters comprise a focus.
11. The method as claimed in any of the claims 1 to 10, wherein the one or more image capture parameters comprise an exposure.
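The method of claims 1 to 11 can be summarized as: when a tracked object crosses from one component camera's field of view into another's, the second camera reuses the capture parameters (white balance, focus, exposure) already computed by the first camera rather than recomputing them from scratch. Below is a minimal, non-normative sketch of that hand-off logic; the `ComponentCamera` class, the planar yaw/FOV model, and all names are illustrative assumptions, not part of the claims:

```python
from dataclasses import dataclass, field

@dataclass
class ComponentCamera:
    """One camera in a multi-camera (virtual reality) rig."""
    yaw_deg: float   # center of this camera's field of view, in rig coordinates
    fov_deg: float   # horizontal field of view
    params: dict = field(default_factory=dict)  # e.g. {"exposure": ..., "white_balance": ...}

    def sees(self, bearing_deg: float) -> bool:
        """True if a bearing (degrees) falls inside this camera's field of view."""
        delta = (bearing_deg - self.yaw_deg + 180.0) % 360.0 - 180.0
        return abs(delta) <= self.fov_deg / 2.0

def handoff_parameters(cameras, bearing_t1, bearing_t2):
    """If a tracked object moved from one camera's field of view to another's
    between the first and second time instants, copy the first camera's capture
    parameters onto the second camera and return it; otherwise return None."""
    first = next((c for c in cameras if c.sees(bearing_t1)), None)
    second = next((c for c in cameras if c.sees(bearing_t2)), None)
    if first is not None and second is not None and first is not second:
        second.params = dict(first.params)  # reuse, rather than recompute
        return second
    return None
```

In this sketch the second camera inherits the first camera's parameters only when a field-of-view transition is actually detected; the `None` return path (no transition, or an object not previously seen by any camera) roughly corresponds to the fallback in claim 6, where parameters are computed afresh.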
12. An apparatus comprising:
a virtual reality camera comprising a plurality of component cameras to capture image frames of a scene;
at least one processor; and
at least one memory comprising computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to at least perform:
access, at a first time instant, one or more image capture parameters of the plurality of component cameras, wherein the one or more image capture parameters for a respective component camera of the plurality of component cameras are determined based on the scene appearing in a field of view of the respective component camera;
determine, at a second time instant, if there is a change in appearance of one or more objects of the scene from a field of view of a first component camera of the plurality of component cameras to a field of view of a second component camera of the plurality of component cameras, wherein the one or more objects appear in the field of view of the first component camera at the first time instant and the one or more objects appear in the field of view of the second component camera at the second time instant; and upon determining the change in the appearance of the one or more objects at the second time instant, set one or more image capture parameters of the second component camera based on one or more image capture parameters of the first component camera that are accessed at the first time instant.
13. The apparatus as claimed in claim 12, wherein the apparatus is further caused, at least in part to facilitate capturing, at the second time instant, an image frame of the scene by the second component camera based on the one or more image capture parameters that are set for the second component camera at the second time instant.
14. The apparatus as claimed in claim 13, wherein for determining the change in appearance of the one or more objects at the second time instant, the apparatus is further caused, at least in part to track the one or more objects.
15. The apparatus as claimed in claims 12 or 14, wherein for determining the change in appearance of the one or more objects at the second time instant, the apparatus is further caused, at least in part to, track a movement of the virtual reality camera.
16. The apparatus as claimed in claim 15, wherein the apparatus is caused to track the movement of the virtual reality camera based on an accelerometer reading associated with the virtual reality camera.
17. The apparatus as claimed in claim 12, wherein for setting the one or more image capture parameters for the second component camera, the apparatus is further caused, at least in part to compute the one or more image capture parameters for the second component camera if the one or more objects appearing in the field of view of the second component camera at the second time instant have not appeared in each of the plurality of component cameras at the first time instant.
18. The apparatus as claimed in claim 12, wherein the one or more objects are one or more objects of interest.
19. The apparatus as claimed in any of the claims 12 to 18, wherein the plurality of component cameras are configured in the virtual reality camera such that the plurality of component cameras capture a 360 degree view of the scene.
20. The apparatus as claimed in any of the claims 12 to 19, wherein the one or more image capture parameters comprise a white balance.
21. The apparatus as claimed in any of the claims 12 to 20, wherein the one or more image capture parameters comprise a focus.
22. The apparatus as claimed in any of the claims 12 to 21, wherein the one or more image capture parameters comprise an exposure.
23. A computer program product comprising at least one computer-readable storage medium, the computer-readable storage medium comprising a set of instructions, which, when executed by one or more processors, cause an apparatus to at least perform:
access, at a first time instant, one or more image capture parameters of a plurality of component cameras, wherein the one or more image capture parameters for a respective component camera of the plurality of component cameras are determined based on a scene appearing in a field of view of the respective component camera;
determine, at a second time instant, if there is a change in appearance of one or more objects of the scene from a field of view of a first component camera of the plurality of component cameras to a field of view of a second component camera of the plurality of component cameras, wherein the one or more objects appear in the field of view of the first component camera at the first time instant and the one or more objects appear in the field of view of the second component camera at the second time instant; and
upon determining the change in the appearance of the one or more objects at the second time instant, set one or more image capture parameters of the second component camera based on one or more image capture parameters of the first component camera that are accessed at the first time instant.
24. The computer program product as claimed in claim 23, wherein the apparatus is further caused, at least in part to facilitate capturing, at the second time instant, an image frame of the scene by the second component camera based on the one or more image capture parameters that are set for the second component camera at the second time instant.
25. The computer program product as claimed in claim 24, wherein for determining the change in appearance of the one or more objects at the second time instant, the apparatus is further caused, at least in part to track the one or more objects.
26. The computer program product as claimed in claims 23 or 25, wherein for determining the change in appearance of the one or more objects at the second time instant, the apparatus is further caused, at least in part to, track a movement of a virtual reality camera comprising the plurality of component cameras.
26. The computer program product as claimed in claims 23 or 25, wherein for determining the change in appearance of the one or more objects at the second time instant, the apparatus is further caused, at least in part to, track a movement of a virtual reality camera comprising the plurality of component cameras.
28. The computer program product as claimed in claim 23, wherein for setting the one or more image capture parameters for the second component camera, the apparatus is further caused, at least in part to compute the one or more image capture parameters for the second component camera if the one or more objects appearing in the field of view of the second component camera at the second time instant have not appeared in each of the plurality of component cameras at the first time instant.
29. The computer program product as claimed in claim 23, wherein the one or more objects are one or more objects of interest.
30. The computer program product as claimed in any of the claims 23 to 29, wherein the plurality of component cameras are configured as a virtual reality camera to capture a 360 degree view of the scene.
31. The computer program product as claimed in any of the claims 23 to 30, wherein the one or more image capture parameters comprise a white balance.
32. The computer program product as claimed in any of the claims 23 to 31, wherein the one or more image capture parameters comprise a focus.
33. The computer program product as claimed in any of the claims 23 to 32, wherein the one or more image capture parameters comprise an exposure.
34. An apparatus comprising:
means for accessing, at a first time instant, one or more image capture parameters of a plurality of component cameras, wherein the one or more image capture parameters for a respective component camera of the plurality of component cameras are determined based on a scene appearing in a field of view of the respective component camera;
means for determining, at a second time instant, if there is a change in appearance of one or more objects of the scene from a field of view of a first component camera of the plurality of component cameras to a field of view of a second component camera of the plurality of component cameras, wherein the one or more objects appear in the field of view of the first component camera at the first time instant and the one or more objects appear in the field of view of the second component camera at the second time instant; and
upon determining the change in the appearance of the one or more objects at the second time instant, means for setting one or more image capture parameters of the second component camera based on one or more image capture parameters of the first component camera accessed at the first time instant.
35. The apparatus as claimed in claim 34, further comprising means for capturing, at the second time instant, an image frame of the scene by the second component camera based on the one or more image capture parameters that are set for the second component camera at the second time instant.
36. The apparatus as claimed in claim 35, wherein means for determining the change in appearance of the one or more objects at the second time instant comprises means for tracking the one or more objects.
37. The apparatus as claimed in claims 34 or 36, wherein means for determining the change in appearance of the one or more objects at the second time instant comprises means for tracking a movement of a virtual reality camera comprising the plurality of component cameras.
38. The apparatus as claimed in claim 37, wherein means for tracking the movement of the virtual reality camera is based on an accelerometer reading associated with the virtual reality camera.
39. The apparatus as claimed in claim 34, wherein means for setting the one or more image capture parameters for the second component camera comprises means for computing the one or more image capture parameters for the second component camera if the one or more objects appearing in the field of view of the second component camera at the second time instant have not appeared in each of the plurality of component cameras at the first time instant.
40. The apparatus as claimed in claim 34, wherein the one or more objects are one or more objects of interest.
41. The apparatus as claimed in any of claims 34 to 40, wherein the plurality of component cameras are configured as a virtual reality camera to capture a 360 degree view of the scene.
42. The apparatus as claimed in any of the claims 34 to 41, wherein the one or more image capture parameters comprise a white balance.
43. The apparatus as claimed in any of the claims 34 to 42, wherein the one or more image capture parameters comprise a focus.
44. The apparatus as claimed in any of the claims 34 to 43, wherein the one or more image capture parameters comprise an exposure.
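Claims 4 and 5 (and their apparatus, computer-program, and means counterparts) detect the field-of-view transition by tracking movement of the virtual reality camera itself, for example from an accelerometer reading. One way to picture this, under a simplified single-axis rotation model (the function names, the planar geometry, and the assumption that rig rotation is already integrated into a yaw angle are all illustrative, not part of the claims):

```python
def in_fov(bearing_deg, camera_yaw_deg, fov_deg):
    """True if a bearing falls inside a camera's horizontal field of view."""
    delta = (bearing_deg - camera_yaw_deg + 180.0) % 360.0 - 180.0
    return abs(delta) <= fov_deg / 2.0

def predict_fov_transition(camera_yaws_deg, fov_deg, object_bearing_deg, rig_rotation_deg):
    """Index of the component camera expected to see a static object after the
    rig rotates by rig_rotation_deg (e.g. a yaw change integrated from
    accelerometer/IMU samples); rotating the rig shifts the object's bearing
    in rig coordinates by the opposite amount. Returns None if no camera
    covers the new bearing."""
    new_bearing = (object_bearing_deg - rig_rotation_deg) % 360.0
    for i, yaw in enumerate(camera_yaws_deg):
        if in_fov(new_bearing, yaw, fov_deg):
            return i
    return None
```

With the predicted destination camera known before the object actually enters its view, that camera's capture parameters can be pre-set from the source camera's values, which is the effect the tracking-based claims describe.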
PCT/IB2017/051152 2016-02-29 2017-02-28 Adaptive control of image capture parameters in virtual reality cameras WO2017149441A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
IN201641007009 2016-02-29

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP17759348.0A EP3424210A4 (en) 2016-02-29 2017-02-28 Adaptive control of image capture parameters in virtual reality cameras
US16/080,140 US10491810B2 (en) 2016-02-29 2017-02-28 Adaptive control of image capture parameters in virtual reality cameras

Publications (1)

Publication Number Publication Date
WO2017149441A1 true WO2017149441A1 (en) 2017-09-08

Family

ID=59742580

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2017/051152 WO2017149441A1 (en) 2016-02-29 2017-02-28 Adaptive control of image capture parameters in virtual reality cameras

Country Status (3)

Country Link
US (1) US10491810B2 (en)
EP (1) EP3424210A4 (en)
WO (1) WO2017149441A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019061558A (en) * 2017-09-27 2019-04-18 キヤノン株式会社 Image processing device, image processing method and program

Citations (7)

Publication number Priority date Publication date Assignee Title
US20030234866A1 (en) * 2002-06-21 2003-12-25 Ross Cutler System and method for camera color calibration and image stitching
US20090207246A1 (en) 2005-07-29 2009-08-20 Masahiko Inami Interactive image acquisition device
EP2549753A1 (en) 2010-03-15 2013-01-23 Omron Corporation Surveillance camera terminal
US20150145952A1 (en) * 2012-06-11 2015-05-28 Sony Computer Entertainment Inc. Image capturing device and image capturing method
US20150348580A1 (en) * 2014-05-29 2015-12-03 Jaunt Inc. Camera array including camera modules
US20160088280A1 (en) * 2014-09-22 2016-03-24 Samsung Electronics Company, Ltd. Camera system for three-dimensional video
US20170118400A1 (en) * 2015-10-21 2017-04-27 Google Inc. Balancing exposure and gain at an electronic device based on device motion and scene distance

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
JPH10150656A (en) * 1996-09-20 1998-06-02 Hitachi Ltd Image processor and trespasser monitor device
US8587661B2 (en) * 2007-02-21 2013-11-19 Pixel Velocity, Inc. Scalable system for wide area surveillance
WO2009151903A2 (en) 2008-05-20 2009-12-17 Pelican Imaging Corporation Capturing and processing of images using monolithic camera array with hetergeneous imagers
US8180107B2 (en) * 2009-02-13 2012-05-15 Sri International Active coordinated tracking for multi-camera systems
US8885023B2 (en) 2010-09-01 2014-11-11 Disney Enterprises, Inc. System and method for virtual camera control using motion control systems for augmented three dimensional reality
US8498448B2 (en) * 2011-07-15 2013-07-30 International Business Machines Corporation Multi-view object detection using appearance model transfer from similar scenes
US8773542B2 (en) 2012-05-17 2014-07-08 Samsung Electronics Co., Ltd. Apparatus and method for adaptive camera control method based on predicted trajectory
US20140037135A1 (en) 2012-07-31 2014-02-06 Omek Interactive, Ltd. Context-driven adjustment of camera parameters
US9723272B2 (en) 2012-10-05 2017-08-01 Magna Electronics Inc. Multi-camera image stitching calibration system
US9451162B2 (en) 2013-08-21 2016-09-20 Jaunt Inc. Camera array including camera modules
KR102105189B1 (en) 2013-10-31 2020-05-29 한국전자통신연구원 Apparatus and Method for Selecting Multi-Camera Dynamically to Track Interested Object
US9357136B2 (en) 2014-02-04 2016-05-31 Nokia Technologies Oy Using inertial sensors to provide smoothed exposure and white balance adjustments for video and photographic applications

Non-Patent Citations (4)

Title
DIVERDI S ET AL.: "Envisor: Online Environment Map Construction for Mixed Reality", VIRTUAL REALITY CONFERENCE, 8 March 2008 (2008-03-08), Piscataway, NJ, USA, pages 19 - 26, XP031339993, ISBN: 978-1-4244-1971-5 *
HUIMIN LU ET AL.: "A Novel Camera Parameters Auto-adjusting Method Based on Image Entropy", ROBOCUP 2009: ROBOT SOCCER WORLD CUP XIII, 18 February 2010 (2010-02-18), Berlin, Heidelberg, pages 192 - 203, XP019138653, ISBN: 978-3-642-11875-3 *
LUIS GAEMPERLE ET AL.: "An Immersive Telepresence System Using a Real-Time Omnidirectional Camera and a Virtual Reality", IEEE INTERNATIONAL SYMPOSIUM ON MULTIMEDIA, 1 December 2014 (2014-12-01), pages 175 - 178, XP055282584, ISBN: 978-1-4799-4311-1 *
See also references of EP3424210A4 *

Also Published As

Publication number Publication date
US10491810B2 (en) 2019-11-26
EP3424210A4 (en) 2019-12-18
US20190052800A1 (en) 2019-02-14
EP3424210A1 (en) 2019-01-09

Similar Documents

Publication Publication Date Title
US10785401B2 (en) Systems and methods for adjusting focus based on focus target information
US9374529B1 (en) Enabling multiple field of view image capture within a surround image mode for multi-LENS mobile devices
US10404926B2 (en) Warp processing for image capture
US9979889B2 (en) Combined optical and electronic image stabilization
CN105323425B (en) Scene motion correction in blending image system
US9363438B2 (en) Digital image processing
KR101792641B1 (en) Mobile terminal and out-focusing image generating method thereof
AU2013297221B2 (en) Image processing method and apparatus
JP6395810B2 (en) Reference image selection for motion ghost filtering
US8363157B1 (en) Mobile communication device with multiple flashpoints
EP3010226B1 (en) Method and apparatus for obtaining photograph
KR102045957B1 (en) Method and apparatus for photographing of a portable terminal
TWI602152B (en) Image capturing device nd image processing method thereof
EP2933999B1 (en) Method and apparatus for obtaining an image with motion blur
CN104702851B (en) Use the powerful auto-exposure control of embedded data
US8947553B2 (en) Image processing device and image processing method
US9253397B2 (en) Array camera, mobile terminal, and methods for operating the same
US9247133B2 (en) Image registration using sliding registration windows
US20170094195A1 (en) Automatic composition of composite images or videos from frames captured with moving camera
JP4846259B2 (en) Brightness correction
US10484621B2 (en) Systems and methods for compressing video content
WO2019109801A1 (en) Method and device for adjusting photographing parameter, storage medium, and mobile terminal
US10205924B2 (en) Method and system of lens shading color correction using block matching
CN105100615A (en) Image preview method, apparatus and terminal
US9710132B2 (en) Image display apparatus and image display method

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2017759348

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2017759348

Country of ref document: EP

Effective date: 20181001

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17759348

Country of ref document: EP

Kind code of ref document: A1