WO2017125779A1 - System for immersive video enabling segmented capture of a scene - Google Patents

System for immersive video enabling segmented capture of a scene

Info

Publication number
WO2017125779A1
Authority
WO
WIPO (PCT)
Prior art keywords
immersive video
image sensors
acquisition parameter
photometric
individual values
Prior art date
Application number
PCT/IB2016/000605
Other languages
English (en)
Other versions
WO2017125779A8 (fr)
Inventor
Nicolas Burtey
Alex FINK
Stéphane VALENTE
Original Assignee
Videostitch
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Videostitch filed Critical Videostitch
Priority to PCT/IB2016/000605
Publication of WO2017125779A1
Publication of WO2017125779A8

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/71Circuitry for evaluating the brightness variation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Definitions

  • The invention relates to the field of digital cameras and digital images and videos, and in particular to cameras for producing static images or videos with large fields of view through segmented capture of a scene, such as panoramic or full or near-full-sphere images or videos.
  • Full-sphere images or videos are also known as "360°", or omnidirectional, images or videos.
  • In particular, the invention relates to the stitching of images or videos taken from multiple digital image sensors in a segmented capture process.
  • Each camera sensor may have a limited field of view, and the sensors may be kept in a fixed position relative to one another using a central rig.
  • The number of sensors may be two or more.
  • Obtaining a panoramic or a 360° picture or video using those sensors requires stitching together the outputs of the multiple sensors. The stitched result covers a large field of view, such as a panoramic scene, up to a complete 360° by 180° sphere capturing the complete environment of the central point or central axis.
  • The image sensors have various control parameters, such as exposure parameters and white-balance gains.
  • The exposure parameters determine how light or dark the captured image will be.
  • The exposure parameters are in fact a set of three distinct parameters: the aperture, which relates to the area of the lens or diaphragm through which light can enter the camera; the shutter speed, related to the duration of the exposure (in digital cameras, the integration time of the sensor); and the sensitivity of the camera (the ISO speed of the film in film-based photography; in digital cameras, altered by modifying an analog and/or digital gain applied to the sensor output).
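For concreteness, these three parameters are often summarized on a logarithmic scale as a single exposure value (EV). The following minimal sketch uses the standard photographic formula, which is background knowledge rather than part of this disclosure:

```python
import math

def exposure_value(f_number: float, shutter_s: float, iso: float) -> float:
    """Standard exposure value for given settings:
    EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100.0)

# f/2.8, 1/60 s, ISO 400 -> roughly 6.9 EV
print(exposure_value(2.8, 1 / 60, 400))
```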
  • The white balance defines how whites are rendered in a colour picture, captured by film in an analog camera or by an image sensor in a digital camera. It often relates to the "colour temperature" of the light sources. For instance, sunlight, having a high colour temperature, contains more blue wavelengths than incandescent light bulbs, which have a low colour temperature. Such differences can result in bluish colour casts in sunlight and reddish colour casts under light bulbs.
  • Digital cameras use white-balance gains, applied individually to the colour channels of the pictures, most often the red, green and blue channels.
  • The white-balance parameters can be either the colour temperature of the light sources, or directly the channel gains.
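As a hedged illustration of such channel gains (a common baseline estimator, not the method claimed here), the gray-world assumption scales each channel so its mean matches the green channel's:

```python
import numpy as np

def gray_world_gains(image: np.ndarray) -> np.ndarray:
    """White-balance gains under the gray-world assumption: scale the red
    and blue channels so their means match the green-channel mean.
    `image` is an H x W x 3 RGB array in linear light."""
    means = image.reshape(-1, 3).mean(axis=0)
    return means[1] / means  # (R, G, B) gains; the green gain is 1.0

def apply_gains(image: np.ndarray, gains: np.ndarray) -> np.ndarray:
    """Apply per-channel gains and clip back to the valid range."""
    return np.clip(image * gains, 0.0, 1.0)
```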
  • Tone-mapping and gamma correction curves are used to adapt the sensor dynamics (for instance, 12 Exposure Values, or EVs) to the output format dynamics.
  • Typical output video formats can represent 8 EVs of dynamic range.
  • Gamma correction is a non-linear operation historically designed to compensate for the non-linear response of the cathode-ray tubes used in television, modelled by a correction curve applied to the colour channels. It can also map a 10 or 12 EV signal into an 8 EV one.
  • The tone-mapping operation is more general than gamma correction, but it shares the same goal of adapting the sensor or acquired scene dynamics to those of an output format. It can either compress the acquired scene dynamics if they exceed those of the output format, or expand them if the scene lacks contrast compared to the output format.
  • One such mechanism is known as auto-contrast stretch, which is applied to low-dynamics scenes to stretch the scene contrast so as to fully use the dynamics of the output format of the pictures or videos.
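The two operations can be pictured as follows; the power-law curve and the percentile-based stretch are common conventions used here for illustration, not the patent's specific curves:

```python
import numpy as np

def gamma_encode(linear: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Power-law gamma curve mapping linear sensor values in [0, 1]
    into output-format values in [0, 1]."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

def auto_contrast_stretch(img: np.ndarray, lo_pct: float = 1.0,
                          hi_pct: float = 99.0) -> np.ndarray:
    """Stretch the histogram so the chosen percentiles map to 0 and 1,
    fully using the output dynamics on a low-contrast scene."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
```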
  • These parameters are collectively called photometric acquisition parameters, since they act on the photometric response of the camera.
  • The cameras should all work in a temporally synchronized way to record pictures at exactly the same time. This requirement is known from another field of photography and video recording, since it is also required for stereo video using binocular viewing, a technical field in which two cameras acquire pictures for the left and right eyes respectively.
  • In stereo video, the two digital sensors have the same exposure parameters, including aperture, sensor gains and integration times, and also the same white-balance gains, tone-mapping and gamma correction curves, because they are directed at the same subject-matter or scene, with a single illumination and colorimetric dynamics.
  • These parameters are determined and updated on a regular basis using the illumination statistics of the sensor. This allows the image of the scene to be acquired correctly, because a digital image sensor can capture only a limited dynamic range at a time, typically 10 to 12 EVs, so its exposure parameters need to be adapted to the current dynamics of the scene.
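A generic update of this kind can be sketched as follows; the control law and the 18% mid-gray target are illustrative assumptions, not taken from the patent:

```python
import math

def update_gain(current_gain: float, mean_luma: float,
                target_luma: float = 0.18, smoothing: float = 0.5) -> float:
    """One iteration of a basic auto-exposure loop: nudge the sensor gain so
    that the measured mean luminance (a typical illumination statistic)
    converges toward a mid-gray target; `smoothing` damps oscillations
    between successive frames."""
    error_ev = math.log2(target_luma / max(mean_luma, 1e-6))
    return current_gain * 2.0 ** (smoothing * error_ev)
```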
  • The sensors are synchronized parameter-wise to avoid photometric differences that would harm the comfort of the user, since the two images or videos relate to the same subject-matter.
  • In a segmented capture, by contrast, some parts of the environment may be in bright light while others may not be lit at all, or remain in shadows. Some parts may be lit by the sun while others may be lit by artificial lights.
  • The exposure parameters and the white-balance gains for the multiple sensors may therefore need to differ, to adapt individually to different subject-matters.
  • One solution is to determine, for each sensor independently of the others, the exposure parameters and the white-balance gains on the basis of the illumination statistics of the considered sensor. But stitching the images is then unsatisfactory, because the stitched borders of the captured images show boundary effects related to the variations of parameters, and the output video shows large differences in exposures or colours.
  • WO2015127535 discloses a method of generating a panoramic video using image stitching comprising a normalization of the colour profile of the video frames.
  • KR20140034703 discloses a colour correction method for panorama video stitching on the basis of a reference image chosen from the input images.
  • CN104240211 discloses a method for video stitching using brightness and colour differences in regions of similar texture in neighbouring pictures, together with an iterative process.
  • Another solution is to have a single set of exposure parameters and a single set of white-balance gains determined for the whole set of sensors of the camera, on the basis of illumination statistics of a single sensor, of several sensors or of all sensors. But this solution also leads to unsatisfactory results, since the full dynamics of the scene cannot be captured properly. The areas in bright light are saturated to white and the areas in low light are completely or almost completely black. The colours may also be wrongly captured if a single set of white-balance gains is used in the presence of different colour temperatures of the light sources.
  • The invention relates to a system for immersive video, comprising a camera assembly having a set of image sensors positioned to perform together a segmented capture of a scene, the system for immersive video further comprising:
  • a first component to set, for the said set of image sensors, a dispersion tolerance parameter for a digital image photometric acquisition parameter; and a second component to obtain illumination statistics from each image sensor of the set, obtain the dispersion tolerance parameter from the first component, determine individual values of said photometric acquisition parameter for each of the image sensors of the set, using the respective illumination statistics to adapt to the photometric dynamics of a respective segment of the scene and using the said dispersion tolerance parameter to limit the dispersion of the said individual values, and control each of the image sensors with the respective individual values.
  • The invention allows a wider dynamic range for the complete scene while keeping the variations of exposure or white-balance gains, or of other photometric acquisition parameters, within limits that a stitching algorithm can gracefully handle, and also offers variations across the pictures to be assembled that are smoother than with prior-art methods and compatible with a natural colour output.
  • The invention can also have the following advantageous and optional features.
  • The second component can be further configured to determine a general value of the said digital image photometric acquisition parameter for the scene using illumination statistics from each of the image sensors of the set, and to determine the individual values of said photometric acquisition parameter further using the general value as a reference.
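This determination can be pictured with the following sketch, in which the general value is a plain median of the per-sensor primary values and the dispersion limit is a symmetric clip; both choices are illustrative assumptions, since the text leaves the exact formulas open:

```python
import statistics

def modulate_individual_values(primary_ivs: list[float], delta: float) -> list[float]:
    """Compute a general value GV for the whole set (here the median of the
    per-sensor primary values, standing in for values derived from the
    illumination statistics), then clip every individual value into the
    range [GV - delta/2, GV + delta/2] to limit their dispersion."""
    gv = statistics.median(primary_ivs)
    lo, hi = gv - delta / 2.0, gv + delta / 2.0
    return [min(max(iv, lo), hi) for iv in primary_ivs]

# Exposure values in EV with a dispersion tolerance width of 2 EV:
print(modulate_individual_values([6.0, 7.5, 9.8, 11.0], delta=2.0))
```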
  • The first component can further designate one or several image sensors as reference or master sensors. In that case, the second component determines a reference individual value of said photometric acquisition parameter for each of the reference sensors on the basis of illumination statistics from the respective image sensors, and controls the said reference sensors with these reference individual values of said photometric acquisition parameter.
  • The second component also determines individual values of the photometric acquisition parameter for the sensors that are not reference or master sensors, using the reference individual values of the reference sensors as one or several references. The second component then controls each of these image sensors that are not reference or master sensors with the respective individual values.
  • The second component can alternatively be configured to determine individual values further using, as a reference, an individual value determined for a neighbouring sensor of the set.
  • The system for immersive video can further comprise a stitching module to obtain captured images from the image sensors of the set and the associated individual values of the photometric acquisition parameter, determine local correction values to correct the photometric differences across the boundaries of the stitched images, and create a stitched image for rendering the scene with limited boundary effects related to the photometric acquisition parameter, using said captured images and the local correction values for said photometric acquisition parameter in the border zones, as pictured in the sketch below.
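One way to picture such local correction values (a hypothetical sketch; the text does not fix the interpolation) is a per-column gain ramp across the overlap of two neighbouring images:

```python
import numpy as np

def blend_border(left: np.ndarray, right: np.ndarray,
                 ev_left: float, ev_right: float) -> np.ndarray:
    """Smoothly varying local correction across a vertical border zone: the
    correction gain ramps from the left image's exposure to the right one's,
    then the two exposure-matched strips are cross-faded. Inputs are
    linear-light overlapping strips of identical shape (H x W x 3)."""
    _, w, _ = left.shape
    t = np.linspace(0.0, 1.0, w)[None, :, None]         # 0 at left edge, 1 at right
    gain_l = 2.0 ** (t * (ev_right - ev_left))          # pull left toward right
    gain_r = 2.0 ** ((1.0 - t) * (ev_left - ev_right))  # pull right toward left
    return (1.0 - t) * left * gain_l + t * right * gain_r
```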
  • The system for immersive video can comprise a stitching module (the same or another stitching module) to obtain captured images from the image sensors of the set and the associated individual values of the photometric acquisition parameter or parameters, undo on the captured images the respective photometric acquisition parameter variations corresponding to the respective individual values of the photometric acquisition parameter or parameters to create individual intermediate digital representations, create a stitched intermediate digital representation with said individual intermediate digital representations, and tone map the stitched intermediate representation to create a stitched image for the rendering of the scene.
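This second strategy can be sketched as follows; the division by 2^EV and the Reinhard-style global operator are stand-ins for whichever inverse-exposure and tone-mapping steps an implementation would use, and `stitch_fn` is a hypothetical geometric compositing routine:

```python
import numpy as np

def stitch_linear_then_tonemap(images, evs, stitch_fn):
    """Undo each sensor's exposure relative to a common reference so all
    intermediate representations share one radiometric scale, stitch them,
    then tone map the higher-dynamic-range result into an output image."""
    ref_ev = min(evs)
    linear = [img / (2.0 ** (ev - ref_ev)) for img, ev in zip(images, evs)]
    pano = stitch_fn(linear)          # stitched intermediate representation
    return pano / (1.0 + pano)        # Reinhard-style global tone mapping
```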
  • The photometric acquisition parameter or parameters can include an exposure parameter, in particular a sensor gain, or several exposure parameters. They can also include one or several white-balance parameters, such as a white-balance gain.
  • The second component can be configured to determine primary individual values for the photometric acquisition parameter of the respective sensors using the respective illumination statistics, and further determine modulated individual values by clipping the primary individual values within a range encompassing the general value and having a width defined by the dispersion tolerance parameter.
  • The second component can alternatively or optionally be configured to determine modulated individual values by clipping the primary individual values within a range encompassing the value or values of the reference image sensor or sensors and having a width defined by the dispersion tolerance parameter.
  • The set of image sensors can be the whole set of image sensors of the camera assembly.
  • The set of image sensors can alternatively be a subset of the image sensors of the camera assembly.
  • The second component can allow differences in the exposure parameters or the white-balance parameters across the set of image sensors, these differences being limited as described above, while choosing other parameters, for instance the tone-mapping, auto-contrast stretch or gamma correction parameters, to be identical for all image sensors of the set.
  • The second component can choose a unique value of the integration time and/or a unique value of the aperture for all image sensors of the set, to avoid different motion blur or depths of field between cameras, while allowing differences in the sensor gains of the set of image sensors, these differences being limited by the modulation described above.
  • The first component and the second component can be in a single chip, or in different chips communicating by a wireless or wireline connection. They can be implemented directly as dedicated chips or application-specific integrated circuits, or as computer programs running on microcontrollers or a general-purpose computing device.
  • The system for immersive video can comprise a camera rig to keep the sensors in a predefined position relative to one another.
  • The system for immersive video can comprise a synchronization controller to send a synchronization signal to at least a first one of said sensors to enable temporal synchronization of said first sensor with a second one of the sensors.
  • The number of image sensors is at least two, and can be any larger number needed to record a panoramic or full-sphere scene.
  • The capture of the scene can be a video recording.
  • All image sensors of the set can be controlled with a unique value of integration time.
  • All image sensors of the set can be controlled with unique tone-mapping parameters.
  • All image sensors of the set can be controlled with unique contrast stretch parameters.
  • All image sensors of the set can be controlled with unique gamma correction curves.
  • The invention also relates to a method for immersive video using a camera assembly having a set of image sensors positioned to perform together a segmented capture of a scene, wherein the method for immersive video comprises obtaining, for the said set of image sensors, a dispersion tolerance parameter for a digital image photometric acquisition parameter, obtaining illumination statistics from each image sensor of the set, determining an individual value of said photometric acquisition parameter for each of the image sensors of the set, using the respective illumination statistics to adapt to the photometric dynamics of a respective segment of the scene and using the dispersion tolerance parameter to limit the dispersion of the individual values, and controlling each of the image sensors with the respective individual values.
  • The method may further comprise a step of stitching by obtaining captured images from the image sensors of the set and the associated individual values of the photometric acquisition parameter, determining local correction values for said photometric acquisition parameter using the respective individual values of the photometric acquisition parameters of neighbouring captured images, and creating a stitched image for rendering of the scene with limited boundary effects related to the photometric acquisition parameter, using said captured images and the determined local correction values.
  • The method may further comprise a step of stitching by obtaining captured images from the image sensors of the set and the associated individual values of the photometric acquisition parameter, undoing on the captured images the respective photometric acquisition parameter variations corresponding to the respective individual values of the photometric acquisition parameter to create individual intermediate digital representations, and creating a stitched intermediate digital representation with said individual intermediate digital representations.
  • The method may further comprise a step of tone mapping the stitched intermediate representation to create a stitched image (SI) for the rendering of the scene.
  • Figure 1 shows a first embodiment of a device according to the invention.
  • Figure 2 shows a second embodiment of a device according to the invention.
  • Figure 3 shows a third embodiment of a device according to the invention.
  • Figure 4 shows a first embodiment of a method according to the invention.
  • Figure 5 shows a second embodiment of a method according to the invention.
  • As shown in figure 1, a first embodiment of a system for immersive video 1 comprises a camera assembly 2 that can be a mobile or fixed device comprising several digital image sensors 10, 20, ... n0 positioned on a camera rig or casing to capture images of the environment in a panoramic manner, for example with a full sphere covering 360° in yaw and 180° in pitch. It is directed at a scene S. It can be used as a mobile device, or rigidly attached to a fixed or mobile object such as a car.
  • The multiple sensors 10, 20, ... n0 are temporally synchronized using a synchronization signal that is communicated by a pilot controller to each of the sensors, or made available to slave sensors by a master sensor.
  • The camera assembly 2 comprises a first component 100 and a second component 200 that both participate in controlling the image sensors 10, 20, ... n0.
  • The second component 200 and the first component 100 can be in a single chipset or in different chipsets.
  • The first component 100 can be a master component, while the second component 200 can be a slave component.
  • The second component 200 obtains illumination statistics IS1 from the sensor 10, illumination statistics IS2 from the sensor 20, ... and illumination statistics ISn from the sensor n0.
  • The first component 100 allows a user, for example, to set in a predetermined manner a width Δ for an exposure parameter range. This can be done using a manual setting on the camera assembly rig or casing, or in another manner, for example using a remote control or a graphical user interface running on a separate computer.
  • The second component 200 determines a general value GV of the exposure parameter using all the illumination statistics obtained from the sensors 10, 20, ... n0, using a non-weighted formula such as a non-weighted average or non-weighted statistical median, or a weighted formula if appropriate.
  • The second component 200 also determines an individual value IV1, IV2, ... IVn of the exposure parameter for each of the sensors 10, 20, ... n0, referred to as a primary individual value.
  • The primary individual values can also be determined by individual controllers attached to the sensors, but in this case these values must be transmitted to the second component 200.
  • The primary individual values are calculated by any method, using a non-weighted formula such as a non-weighted average or non-weighted statistical median, or a weighted formula if appropriate.
  • The second component 200 then modulates each of the individual values of the exposure parameter, starting from the primary values IV1, IV2, ... IVn, by clipping these values within a range comprising or encompassing the general value GV and having the width Δ; in an embodiment, for example, the range is centered on the general value GV.
  • Any other modulation method may be used, provided that the individual values are modulated to prepare the stitching of the images, i.e. by limiting the dispersion of the values using the width Δ as a limiting factor.
  • The modulated individual values are referred to as MIV1, MIV2, ... MIVn.
  • The sensors 10, 20, ... n0 are controlled using the modulated values MIV1, MIV2, ... MIVn.
  • The sensors 10, ... n0 collect images I1, ... In and send them to an interface means 300 of the camera assembly 2 that either stores the images for future download and use on a device 3, or sends them on the fly to the device 3, which can be for instance a computer or a video display, by wireless or wireline communication.
  • The device 3 comprises a stitching module 400 that collects the images captured by the sensors 10, 20, ... n0 and also, optionally, the modulated individual values MIV1, MIV2, ... MIVn of the exposure parameter, which are sent to it by the second component 200, directly or preferably through the interface means 300.
  • The stitching module 400 thus obtains the captured images I1, ... In of the scene S from the image sensors 10, ... n0 and optionally the associated modulated individual values MIV1, ... MIVn of the exposure parameter.
  • The stitching module 400 determines smoothly varying local correction values for said exposure parameter in the border zones of neighbouring captured images, using the respective modulated individual values MIV1, ... MIVn of the photometric acquisition parameters of the neighbouring captured images, and creates a stitched image SI rendering the scene S with limited boundary effects related to the exposure parameters varying from one sensor to another.
  • To do so, the stitching module 400 uses the captured images I1, ... In and the local correction values for the exposure parameter in the border zones. Alternatively, the stitching module 400 undoes on the captured images I1, ... In the respective exposure parameter variations corresponding to the respective modulated individual values MIV1, ... MIVn to create individual intermediate digital representations, creates a stitched intermediate digital representation with said individual intermediate digital representations sharing the same exposure, and tone maps the stitched intermediate representation to create a final stitched image SI for rendering of the scene S.
  • In figure 2, a second embodiment of a system for immersive video 1 is shown. It is similar to the embodiment of figure 1, except that the first component 100 is not part of the camera assembly 2 but of the stitching device 3, which can be for instance a computer or a video display. It communicates with the second component 200 through the interface means 300.
  • In a third embodiment, the individual values are clipped within a range encompassing the primary individual value IVp associated with one particular predetermined sensor of the set, called a master or reference sensor and identified as p0.
  • The range can be centered on this individual value IVp used as a reference.
  • The reference sensor p0 can be controlled with the associated primary individual value IVp, while the other sensors are controlled with their modulated individual values MIV1, ..., MIVn.
  • The exposure parameter can be replaced by a set of white-balance gains, or any other photometric acquisition parameter.
  • In that case, the invention allows limited white-balance variations across the assembled pictures, thereby adapting the white balance locally and producing more natural colours in the output video.
  • The number of sensors is at least 2, and can be any larger number, typically from 2 to 32.
  • The images can be static photographs, but preferably are video images.
  • The sensors 10, 20, ... n0 can form a subset of the whole set of sensors of the system for immersive video 1.
  • The sensors 10, 20, ... n0 can be only two neighbouring sensors 10 and 20, among other sensors of the camera assembly 2.
  • In that case, an allowed deviation Δ is enforced across the two neighbouring sensors 10 and 20. It is also enforced across other pairs of neighbouring sensors, such as sensors 20 and 30, until it is enforced for all pairs of neighbouring sensors. Multiple components such as the component 200 can be used to enforce the allowed deviation across all pairs of neighbouring sensors, as in the sketch below.
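A simple way to realize such pairwise enforcement (an illustrative iterative scheme, not the circuitry described here) is to repeatedly project each neighbouring pair onto the allowed deviation:

```python
def enforce_pairwise(ivs: list[float], delta: float, iters: int = 10) -> list[float]:
    """Enforce an allowed deviation across neighbouring sensors arranged in a
    ring: repeatedly shrink the gap of any neighbouring pair whose individual
    values differ by more than `delta`, splitting the excess between them."""
    n = len(ivs)
    out = list(ivs)
    for _ in range(iters):
        for i in range(n):
            j = (i + 1) % n                       # neighbouring pair (i, j)
            gap = out[j] - out[i]
            if abs(gap) > delta:
                excess = (abs(gap) - delta) / 2.0
                step = excess if gap > 0 else -excess
                out[i] += step
                out[j] -= step
    return out
```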
  • The allowed deviation can be replaced by any dispersion tolerance parameter, including a complex dispersion tolerance parameter allowing only dispersions with constrained shapes.
  • The camera assembly 2 and the device 3 can reside in a single physical object or in several objects communicating by a wireless or wireline connection.
  • The device 3 can be a distant computer or collection of computers, for example in the Internet cloud, and does not need to operate at the same time as the camera assembly 2. It can for instance be a computer that will stitch the acquired pictures and videos at a later time, in an offline manner, in a post-production context.
  • In figure 4, a method for immersive video according to a first embodiment of the invention is depicted. It uses the camera assembly 2.
  • It comprises a step S1 of obtaining a dispersion tolerance parameter Δ for a digital image photometric acquisition parameter.
  • The tolerance parameter is obtained through a human-machine interface, through a communication receiver, or by calculation.
  • This step S1 is followed by a step S2 of obtaining illumination statistics from each image sensor, and a step S3 of determining an individual value of said photometric acquisition parameter for each of the image sensors of the set, using the respective illumination statistics to adapt to the photometric dynamics of a respective segment of the scene and using the dispersion tolerance parameter Δ to limit the dispersion of the individual values.
  • The method further comprises a step S4 of controlling each of the image sensors with the respective individual values.
  • The step S1 can be performed only once while the steps S2 to S4 are repeated several times, but the step S1 can also be performed several times during a single shooting.
  • The method further comprises a step of stitching S51 by obtaining captured images and the associated individual values of the photometric acquisition parameter, determining local correction values for said photometric acquisition parameter using the respective individual values of the photometric acquisition parameters of neighbouring captured images, and creating a stitched image for rendering of the scene with limited boundary effects related to the photometric acquisition parameter, using said captured images and the determined local correction values.
  • In figure 5, a method for immersive video according to a second embodiment of the invention is depicted. It uses the camera assembly 2. Steps S1 to S4 are identical to those of the embodiment depicted in figure 4.
  • The method further comprises a step of stitching S52 by obtaining captured images from the image sensors and the associated individual values of the photometric acquisition parameter, undoing on the captured images the respective photometric acquisition parameter variations corresponding to the respective individual values of the photometric acquisition parameter to create individual intermediate digital representations, and creating a stitched intermediate digital representation with said individual intermediate digital representations.
  • The method further comprises a step of tone mapping S53 the stitched intermediate representation to create a stitched image for the rendering of the scene.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention concerns a system for immersive video (1), comprising a set of image sensors (10, …, n0) positioned to capture together images of different parts of a scene (S), providing a segmented capture of the scene, the system for immersive video further comprising: a first component (100) to set, for said set of image sensors (10, …, n0), a dispersion tolerance parameter (Δ) for a digital image photometric acquisition parameter; and a second component (200) to control each of the image sensors (10, …, n0) with respective individual values (MIV1, …, MIVn) of said photometric acquisition parameter.
PCT/IB2016/000605 2016-01-22 2016-01-22 System for immersive video enabling segmented capture of a scene WO2017125779A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2016/000605 WO2017125779A1 (fr) 2016-01-22 2016-01-22 System for immersive video enabling segmented capture of a scene


Publications (2)

Publication Number Publication Date
WO2017125779A1 (fr) 2017-07-27
WO2017125779A8 WO2017125779A8 (fr) 2017-10-05

Family

ID=56101758

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2016/000605 WO2017125779A1 (fr) 2016-01-22 2016-01-22 System for immersive video enabling segmented capture of a scene

Country Status (1)

Country Link
WO (1) WO2017125779A1 (fr)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040201708A1 (en) * 2001-02-23 2004-10-14 Takaaki Endo Imaging apparatus controller and control method thereof, image processing apparatus and method thereof, and program code and storage medium
US20090290033A1 (en) * 2007-11-16 2009-11-26 Tenebraex Corporation Systems and methods of creating a virtual window
KR20140034703 (ko) Colour correction apparatus for panorama generation and method of selecting a reference image using the same
WO2015127535A1 (fr) Image stitching and automatic color correction
CN104240211 (zh) Image brightness and colour balancing method and system for video stitching

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3729803A4 (fr) * 2017-12-20 2020-11-04 Texas Instruments Incorporated Multi camera image processing
US11057573B2 (en) 2017-12-20 2021-07-06 Texas Instruments Incorporated Multi camera image processing
EP4161087A3 (fr) * 2021-09-30 2023-06-28 Texas Instruments Incorporated Method and apparatus for automatic exposure
US11943540B2 (en) 2021-09-30 2024-03-26 Texas Instruments Incorporated Method and apparatus for automatic exposure

Also Published As

Publication number Publication date
WO2017125779A8 (fr) 2017-10-05

Similar Documents

Publication Publication Date Title
CN102104785B (zh) Image pickup apparatus and control method thereof
US8643768B2 (en) Multiple lens imaging apparatuses, and methods and programs for setting exposure of multiple lens imaging apparatuses
US9148638B2 (en) Digital photographing apparatus
EP2426927B1 (fr) Image processing apparatus, image processing method, and computer program
JP5743696B2 (ja) Image processing apparatus, image processing method, and program
JPWO2006059573A1 (ja) Color adjustment apparatus and method
JP2016019080A (ja) Image processing apparatus, control method therefor, and control program
JP2014123914A (ja) Imaging apparatus, control method therefor, program, and storage medium
CN106412534B (zh) Image brightness adjustment method and device
CN100550990C (zh) Image correction device and image correction method
JP7156302B2 (ja) Image processing apparatus, image processing method, and program
WO2017125779A1 (fr) System for immersive video enabling segmented capture of a scene
JP2015005927A (ja) Image processing apparatus and control method thereof
JP5854716B2 (ja) Image processing apparatus, image processing method, and program
JP6196882B2 (ja) Multi-area white balance control apparatus, method and program, multi-area white balance image processing apparatus, method and program, and imaging apparatus provided with such a multi-area white balance image processing apparatus
US11451719B2 (en) Image processing apparatus, image capture apparatus, and image processing method
JP6552165B2 (ja) Image processing apparatus, control method therefor, and control program
JP2022135677A (ja) Image processing apparatus, control method therefor, and program
EP3442218A1 (fr) Image processing apparatus for generating images with different input/output characteristics for different image regions together with region information, and client apparatus for receiving such images and region information and displaying the regions in a recognizable manner
JP6725105B2 (ja) Imaging apparatus and image processing method
JP2014220701A (ja) Imaging apparatus and imaging method
JP2005151495A (ja) Image processing method
JP5166859B2 (ja) White balance control device, imaging apparatus using the same, and white balance control method
US11470258B2 (en) Image processing apparatus and image processing method to perform image processing on divided areas
JP2019033470A (ja) Image processing system, imaging apparatus, image processing apparatus, control method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16727222

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24/10/2018)

122 Ep: pct application non-entry in european phase

Ref document number: 16727222

Country of ref document: EP

Kind code of ref document: A1