EP2987319A1 - Method for generating an output video stream from a wide-field video stream - Google Patents
- Publication number
- EP2987319A1 (application EP14719707.3A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- video stream
- output video
- wide field
- output
- field
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G06T3/06—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3179—Video signal processing therefor
Definitions
- the invention relates to the field of video processing.
- the invention more particularly relates to a method of generating an output video stream for display in a two-dimensional display space according to a particular rendition.
- a video may result from an assembly of a plurality of viewpoints.
- the displayed images of the video come from a polygonal computation from a three-dimensional model onto which a texture is mapped. This rendering technique is known to those skilled in the art under the English name "UV mapping".
- the result of the use of polygons is a degradation, also called distortion, of the rendering in the sense that the polygons geometrically modify the source video by geometric approximation of the latter.
- the visual rendering is approximate and of inferior quality to that of the source video.
- the object of the present invention is to propose a solution making it possible to obtain a fast rendering while avoiding the distortion in the sense of the prior art.
- This goal is achieved by a method of generating an output video stream for display in a two-dimensional display space, said method comprising the steps of:
- - selecting a wide field video stream, - defining a desired projection and a desired field of view with a view to obtaining a particular, notably non-distorted, reproduction of at least part of the wide field video stream in the display space, - executing a first geometric function enabling, for each point of the display space, the generation of an intermediate point located in a three-dimensional reference frame, said first function taking as parameters the desired projection and the desired field of view, - forming the output video stream from the selected wide field video stream taking into account the intermediate points.
- the step of forming the output video stream is implemented so that each pixel of an image of the output video stream, intended to be displayed by a point of the display space, is a function, according to the corresponding intermediate point, of at least one pixel of a corresponding image of the wide field video stream.
- the step of forming the output video stream comprises, for each intermediate point, a step of determining a reference point in the associated image of the wide field video stream in order to restore a pixel of the image of the output video stream according to said reference point.
- the step of determining the reference point comprises the execution of a second geometric function taking as input the corresponding intermediate point and configured so as to transform the coordinates of the intermediate point, according to the desired projection and to a determined projection associated with the wide-field video stream, into reference point coordinates within the corresponding image of the wide-field video stream.
- the method comprises, for each reference point, a step of determining a pixel of the image of the output video stream to be displayed at the corresponding point of the display space, either from a pixel of the image of the wide field video stream at said reference point, or from an interpolation between multiple pixels of the image of the wide field video stream at the reference point.
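By way of illustration only (not part of the original disclosure), the first geometric function can be pictured as mapping each display-space point to a direction on the unit sphere. The sketch below assumes a rectilinear (gnomonic) desired projection, a virtual camera looking along the +z axis, and a horizontal field-of-view convention; the function name is also an assumption.

```python
import math

def f1_rectilinear(u, v, width, height, fov_deg):
    # Half-extent of the image plane at unit focal distance
    # (horizontal field-of-view convention, an assumption here).
    half = math.tan(math.radians(fov_deg) / 2.0)
    x = (2.0 * u / (width - 1) - 1.0) * half
    y = (2.0 * v / (height - 1) - 1.0) * half * height / width
    # Normalize to obtain the intermediate point on the unit sphere.
    n = math.sqrt(x * x + y * y + 1.0)
    return (x / n, y / n, 1.0 / n)
```

Under these conventions, the centre of the display space maps to the camera axis (0, 0, 1); the intermediate points need only be recomputed when the projection or the field of view changes, not for every frame.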
- the method comprises a step of modifying the desired field of view and includes a step of updating the coordinates of the intermediate points.
- this step of modifying the field of view is assimilated to a modification of the orientation of a virtual camera located in a spherical coordinate system and / or a modification of the angle of view of the virtual camera, and the updating step comprises, for each intermediate point, the following steps:
- the modification of the orientation of the virtual camera takes into account at least one of the following parameters: the latitude angle of the virtual camera in the spherical coordinate system, the longitude angle of the virtual camera in the spherical coordinate system, the field of view angle value of the virtual camera.
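For concreteness, the latitude and longitude parameters above can be reduced to a single viewing direction of the virtual camera. A minimal sketch, assuming a y-up axis convention that the text does not fix:

```python
import math

def camera_direction(lat_deg, lon_deg):
    # Viewing direction of the virtual camera from its latitude and
    # longitude angles in the spherical coordinate system
    # (y-up convention, an assumption).
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.sin(lon),
            math.sin(lat),
            math.cos(lat) * math.cos(lon))
```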
- At least the step of executing the first geometric function is performed by a graphics processor, in particular in which each image of the wide field video stream forms a texture of which part of the pixels is used by the graphics processor to form a corresponding image of the output video stream.
- the method comprises a step of defining an effect filter to be applied to the images of the output video stream during their formation in order to restore a particular effect, in particular an effect filter chosen from: a contrast filter, a brightness filter, a saturation filter, a hue filter, an image distortion filter.
- the invention also relates to a method for displaying at least one output video stream in a display space of at least one screen, said method comprising at least one step of implementing the method of generating the output video stream as described, and a step of displaying said output video stream generated in the display space of said at least one screen.
- each screen has a plurality of display spaces each displaying a corresponding output video stream, said output video streams being formed from the same wide field video stream according to a different field of view and / or a different projection and / or a different effect.
- the display method is configured to allow a step of displaying a plurality of separate output videos, at least two output videos of the plurality coming from different wide field video streams; in particular, the output videos of the plurality are displayed in display spaces of the same screen, or in display spaces of different screens.
- the invention also relates to a device for managing at least one output video stream, said device comprising at least one element configured so as to implement the method of generating an output video stream as described, and a human-machine interface comprising at least one display screen for displaying the output video stream in at least one display space of said at least one screen.
- the human-machine interface comprises at least one of the following elements: a selection element of a wide field video stream in a storage space, a definition element of the desired field of view, a definition element of a desired projection, in particular from a predefined list of projections, a selection element of an effect to be applied, an element making it possible to move temporally within the wide-field video stream in order to restore the output video stream at a precise moment, a control element of the soundtrack of the output video stream.
- the human-machine interface comprises at least one element for modifying the field of view, in particular chosen from: a touch screen, an accelerometer, a gyroscope, a magnetometer, a display space of the wide field video stream from which it is possible to choose the desired field of view, a pointing device, a gesture interface with motion detection.
- each screen comprises a plurality of display spaces, a separate output video stream being played in each of the display spaces, the output video streams played each resulting from an implementation of the method of generating the output video stream as described, in particular from the same wide field video stream or from different wide field video streams.
- the invention also relates to a system comprising a compiler and a memory provided with a software development kit for creating an application for an operating system.
- the development kit includes at least one element configured to implement the method of generating the video stream as described within the application when it is executed after being compiled by the compiler.
- FIG. 1 illustrates a particular implementation of steps of a method for generating an output video stream
- FIG. 2 is a view showing the implementation kinematics of the method of FIG. 1 on a particular device comprising a central processing unit and a graphics processor,
- FIGS. 3 to 5 illustrate different representations of the desired projections of the method according to FIG. 1,
- FIG. 6 illustrates in more detail one of the steps of the method according to FIG. 1,
- FIG. 7 illustrates a method for displaying at least one video stream obtained from the method of FIG. 1,
- FIG. 8 illustrates a device for implementing the method of generating the output video stream and allowing its display at a screen.
- the method for generating a video stream described below differs from the existing ones in particular by an optimized management of the computing resources while allowing a rendering, at least in part, of a wide field video stream according to geometric transformations.
- the method of generating an output video stream 1, in order to display it in a two-dimensional display space 2 preferably comprising a plurality of display points, comprises a step E1 in which a wide field video stream 3 is selected.
- the selected wide field video stream 3 may be selected by an operator making a choice among several predefined wide field video streams, or making a choice from a file management system, especially local or remote, using a human machine interface.
- the two-dimensional display space 2 can correspond to a display space of a screen 4. This display space 2 then comprises display points intended to restore pixels of the output video stream. Typically, each of the points of the display space 2 is associated, preferably uniquely, with a pixel of the screen 4 and corresponds to pixel coordinates in a two-dimensional space. In other words, each point of the display space will display a corresponding pixel of an image of the output video stream 1.
- the method of generating the output video stream 1 can then also include a step E2 of defining a display space.
- This display space can be fixed, defined manually by an operator via a human machine interface, or other.
- these steps E1 and E2 can be implemented using instructions executed by a main processor, also called central processing unit or CPU (Central Processing Unit).
- a wide field video stream 3 may correspond to a video stream derived from an assembly from a plurality of video streams according to different fields of view within the same environment.
- This plurality of video streams can be obtained by using several cameras oriented in different directions to generate several complementary films of the environment at the same time.
- this wide field video stream covers a field of view angle exceeding the human field of view angle.
- the wide field video stream is a 360 degree video by 180 degrees.
- the wide-field video stream can also be a computer-generated video, a video taken from a single-lens system, or a high-definition video.
- a video is considered to be of high definition when its definition, in particular in terms of number of pixels, is greater than that of the display space.
- the wide field video stream has a resolution in pixels greater than the resolution of the output video stream.
- the number of pixels of the wide field video stream is in this case greater than the number of points of the display space.
- the method may comprise a step E3 of defining a desired projection (P in FIG. 2) and a desired field of view (CAM in FIG. 2) in order to obtain a particular reproduction, in particular without distortion, of at least part of the wide field video stream 3 in the display space 2.
- this step E3 can be carried out using instructions executed by the main processor CPU.
- by "without distortion" is meant that there is no distortion in the geometric sense.
- the generation of the output video stream 1 implements precise geometric projections, thus avoiding the loss of information or the reduction of the information to be processed due to the use of the polygons as is often the case in the three-dimensional image display.
- by "desired projection" is meant a desired representation, preferably modifying the current representation of the wide field video stream. Typically, it is a matter of determining, by calculation, one or more changes of reference frame.
- Figure 3 illustrates a flat projection of a sphere, also called equirectangular projection in which the horizontal coordinates are representative of a longitude and the vertical coordinates are representative of a latitude.
- Figure 4 illustrates a rectilinear projection (also called gnomonic projection) of a sphere.
- Figure 5 represents a stereographic projection, of which the small planet projection is a special case. The realization of a projection, or the transformation from one projection to another, will not be described here; it amounts in fact to applying geometric functions well known to those skilled in the art.
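As an illustration of the "well known geometric functions" mentioned above, the three projections of figures 3 to 5 can be sketched as forward mappings from longitude / latitude (in radians) to plane coordinates. The conventions here are assumptions: the gnomonic map is centred on (0, 0) and the stereographic map projects from the north pole onto the plane tangent at the south pole.

```python
import math

def equirectangular(lon, lat):
    # Flat projection of the sphere: abscissa = longitude, ordinate = latitude.
    return (lon, lat)

def gnomonic(lon, lat):
    # Rectilinear projection centred on (0, 0): projection from the sphere
    # centre onto the tangent plane; great circles map to straight lines.
    c = math.cos(lat) * math.cos(lon)
    return (math.cos(lat) * math.sin(lon) / c, math.sin(lat) / c)

def stereographic(lon, lat):
    # Projection from the north pole onto the plane tangent at the south
    # pole; the "small planet" rendering is a special case of it.
    theta = math.pi / 2.0 + lat  # angular distance from the south pole
    r = 2.0 * math.tan(theta / 2.0)
    return (r * math.sin(lon), r * math.cos(lon))
```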
- the desired field of view makes it possible to limit the restitution to a part of the desired projection.
- This desired field of view can be defined by a field of view angle of a virtual camera looking in the desired projection (the angle being representative of a zoom in the desired projection) and / or by a direction of the virtual camera within the desired projection.
- the field of view angle and orientation can be changed.
- the field of view may also include additional parameters such as typical distortion parameters (for example rendering according to a particular type of lens).
- the desired field of view is limited depending on the type of projection. For example, in an equirectangular projection from the wide field video stream, one can have a horizontal field of 720°; the complete scene will then appear twice. In a small planet projection, the "inner" face of the cube will appear very small and the "outer" face greatly stretched / interpolated.
- the method comprises an execution step E4 of a first geometric function f1(P, CAM) allowing, for each point of the display space 2, the generation of an intermediate point situated in a three-dimensional reference frame, said first function taking as parameters the desired projection and the desired field of view.
- this execution step E4 can use the data from steps E2 and E3.
- This approach is unconventional in the sense that the desired projection is not applied to an image from the wide field video stream but to the output space (i.e. the display space).
- the execution of the first function on the points of the display space makes it possible to greatly limit the computing resources thereafter irrespective of the type of projection, whether complex or not.
- This approach therefore makes it possible to limit, particularly in the context of complex projections, the usual calculations of the technique for determining hidden surfaces (known under the terminology "frustum culling").
- the method comprises a forming step E5 of the output video stream 1 from the selected wide field video stream 3 taking into account the intermediate points.
- each of the intermediate points will be used as part of a determination of a pixel to be displayed at the corresponding point of the display space 2. It is then clear that this execution step E5 uses at least the data from steps E4 and E1.
- the formation step E5 of the output video stream 1 is implemented in such a way that each pixel of an image of the output video stream, intended to be displayed by a point of the display space 2, is a function, according to the corresponding intermediate point, of at least one pixel of a corresponding image of the wide field video stream 3.
- the wide field video stream 3 will be broken down into a plurality of images, each representative of a distinct time instant within the wide field video stream. Each of these images will be used in combination with the intermediate points to form a plurality of corresponding images of the output video stream.
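The per-image combination described above can be sketched as follows (illustrative only): the reference points derived from the intermediate points are computed once, and every image of the wide field stream is then converted by a plain per-pixel lookup. Nearest-neighbour lookup is used here for brevity; interpolation is discussed later in the text.

```python
def form_output_frame(wide_frame, ref_points):
    # wide_frame: 2D list of pixels from one image of the wide field stream.
    # ref_points: per-output-pixel (x, y) reference coordinates, precomputed
    # from the intermediate points.
    out = []
    for row in ref_points:
        out.append([wide_frame[int(round(ry))][int(round(rx))]
                    for (rx, ry) in row])
    return out
```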
- the CPU will make it possible to transmit to a graphics processor the first geometric function f1 with a view to its execution E4 within the graphics processor, from data coming from the display space 2, to generate all the intermediate points. The image from the wide field video stream is then considered as a texture within the graphics processor.
- at least the execution step E4 of the first geometric function f1 is performed by a graphics processor, in particular in which each image of the wide-field video stream 3 forms a texture of which part of the pixels is used. by the graphics processor to form a corresponding image of the output video stream.
- a graphics processor (also called GPU in the field, for "Graphics Processing Unit") is an integrated circuit providing calculation functions relating to the display.
- the forming step E5 of the output video stream 1 may comprise, for each intermediate point, a determination step E5-2 of a reference point from the corresponding intermediate point.
- This reference point corresponds in fact to coordinates, in particular in a three-dimensional space associated with a determined projection of the wide-field video stream.
- This reference point then corresponds to an area, or a pixel of an image of the wide field video stream 3 to be displayed at the point of the display space used to calculate the intermediate point associated with said reference point.
- all the intermediate points are treated in a similar manner.
- the formation step E5 of the output video stream 1 may comprise, for each intermediate point, a step E5-2 of determining a reference point in the associated image of the wide field video stream 3, in order to render a pixel of the image of the output video stream according to said reference point.
- step E5 is also implemented by the graphics processor.
- the calculation function used may be of the shader type.
- a shader is a sequence of instructions executed by the graphics processor and performing part of a process.
- the reference point makes it possible to point a particular zone in the image resulting from the wide field video stream 3.
- this reference point can have coordinates in a three-dimensional space associated with a predetermined projection of the wide field video stream.
- the predetermined projection of the wide field video stream 3 is a known parameter resulting from the generation of the wide field video stream 3.
- the reference point will make it possible, when it corresponds exactly to one pixel of the image resulting from the wide field video stream 3, to select this pixel for rendering at the corresponding point of the display space 2. However, most of the time, the reference point will not correspond exactly to a pixel of the image resulting from the wide field video stream 3. In the latter case, it will be possible to generate a new pixel by interpolation / weighting of several pixels of the image from the wide field video stream 3 associated with the area pointed to by the reference point (that is to say the area including, for example, points of the image from the wide field video stream 3 in the vicinity of the reference point). It is this new pixel that will then be rendered at the corresponding point of the display space.
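The interpolation / weighting of neighbouring pixels mentioned above is typically bilinear. A minimal sketch on a single-channel image (the function name is an assumption):

```python
def bilinear_sample(img, x, y):
    # Weight the four pixels surrounding (x, y) by the fractional distances.
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```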
- the wide field video stream 3, and therefore the image of the wide field video stream 3, can be associated with a representation according to a predetermined projection.
- this wide-field video stream 3 can be represented according to a reference frame X, Y whose abscissas are representative of an angle varying between 0 degrees and 360 degrees and whose ordinates are representative of an angle varying between 0 degrees and 180 degrees, these angles defining the predetermined projection.
- This predetermined projection can be the desired one; in this case the coordinates of the reference point are simply a function of the coordinates of the intermediate point and of the field of view, without projection transformation.
- the desired projection may be different from the predetermined one; in this case it will be necessary to transform the coordinates of the intermediate point in the reference frame of the desired projection into coordinates in the reference frame of the predetermined projection representative of the reference point.
- This coordinate transformation can be implemented from a suitable geometric function that the skilled person can implement from simple conversion calculations.
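Such a conversion can be sketched as follows, assuming the predetermined projection is equirectangular as in the X, Y reference frame described above; the function name and axis conventions are assumptions.

```python
import math

def f2_to_equirectangular(p, src_w, src_h):
    # Convert an intermediate point on the unit sphere into reference-point
    # coordinates within an image of the wide field video stream whose
    # abscissas span 0-360 degrees and ordinates 0-180 degrees.
    x, y, z = p
    lon = math.atan2(x, z)                    # -pi .. pi
    lat = math.asin(max(-1.0, min(1.0, y)))   # -pi/2 .. pi/2
    rx = (lon / (2.0 * math.pi) + 0.5) * (src_w - 1)
    ry = (lat / math.pi + 0.5) * (src_h - 1)
    return (rx, ry)
```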
- the determination step E5-2 of the reference point may comprise the execution of a second geometric function f2 (FIG. 2) taking as input the corresponding intermediate point and configured so as to transform the coordinates of the intermediate point, according to the desired projection and a predetermined projection associated with the wide-field video stream (in particular the predetermined projection being different from the desired projection), into coordinates of the reference point within the corresponding image of the wide-field video stream 3.
- the method may comprise, for each reference point, a determination step E5-3 of a pixel of the image of the output video stream to be displayed at the corresponding point of the display space, either from a pixel of the image of the wide field video stream 3 at said reference point, or from an interpolation between several pixels of the image of the wide field video stream 3 at the reference point.
- the pixel of the image of the output video stream 1 corresponds to a pixel of the image of the wide field video stream 3 at said reference point, or to a pixel generated from an interpolation between several pixels of the image of the wide-field video stream at the reference point.
- the generated pixel will be a function of an interpolation of pixels of the image from the corresponding wide field video located in the vicinity of the corresponding reference point. This makes it possible to limit the artifacts due to the approximations induced by the normalization of the pixels of the image resulting from the wide field video 3.
- primitives of the latter may directly be used.
- a determination step E5-3 of a pixel to be restored at the point of the display space is implemented ( Figure 6).
- the field of view and / or the projection is changed.
- a new execution of the first geometric function f1 specifying the new projection will be necessary.
- the image of the output video stream being generated is produced according to the former first geometric function, and the next image will be produced according to the new one.
- the method comprises a modification step E6 of the desired field of view
- said method comprises an updating step E7 of the correspondences (that is to say the coordinates) of the intermediate points (FIG. 1). This updating step then impacts the execution of step E5.
- the modification step (E6) of the field of view is assimilated to a modification of the orientation of a virtual camera located in a spherical coordinate system and / or a change in the field of view angle of the virtual camera. Therefore, the update step E7 comprises, for each intermediate point:
- the converted coordinates of the intermediate point of the spherical coordinate system are normalized so as to determine the new correspondences of the intermediate point in the three-dimensional coordinate system (that associated with the desired projection).
- the modification of the orientation of the virtual camera takes into account at least one of the following parameters: the latitude angle of the virtual camera in the spherical coordinate system (in particular a cartographic reference frame), the longitude angle of the virtual camera in the spherical coordinate system and / or the value of the field of view angle of the virtual camera (corresponding to a decrease or increase of the zoom).
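As an illustration, the update of the intermediate points on a change of orientation can be sketched as a rotation of each point by the new latitude (pitch) and longitude (yaw) angles followed by a re-normalization onto the unit sphere; the axis conventions and rotation order are assumptions.

```python
import math

def update_intermediate(points, lat_deg, lon_deg):
    # Rotate each intermediate point by the new camera orientation and
    # re-normalize it onto the unit sphere.
    a, b = math.radians(lat_deg), math.radians(lon_deg)
    out = []
    for x, y, z in points:
        # Pitch about the x axis, then yaw about the y axis.
        y, z = y * math.cos(a) - z * math.sin(a), y * math.sin(a) + z * math.cos(a)
        x, z = x * math.cos(b) + z * math.sin(b), -x * math.sin(b) + z * math.cos(b)
        n = math.sqrt(x * x + y * y + z * z)
        out.append((x / n, y / n, z / n))
    return out
```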
- the method comprises a step of defining an effect filter E8 to be applied to the images of the output video stream 1 during their formation in order to restore a particular effect, in particular an effect filter chosen from: a contrast filter, a brightness filter, a saturation filter, a hue filter, an image distortion filter (wave effect, television effect).
- these effect filters are intended to change the rendering of the output video stream.
- These effect filters will preferably be applied to the pixels of the images of the output video stream 1, and in particular by the graphics processor.
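A brightness filter of this kind, applied per pixel to an output image during its formation, can be sketched as follows (illustrative only; single-channel 8-bit pixels assumed):

```python
def brightness_filter(pixels, gain):
    # Scale each pixel value and clamp to the 0..255 range.
    return [max(0, min(255, int(round(p * gain)))) for p in pixels]
```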
- the method of managing the output video stream may also include a synchronization step (not shown in the figures) of a sound track associated with the wide field video stream 3 with the output video stream 1.
- the output video stream comprises a sound track which will be restored at the same time as the images of said output video stream 1.
- the sound track associated with the wide field video stream 3 can be spatialized so that its reproduction with the output video stream 1 comprises an adjustment of the spatialization according to the field of view.
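A minimal sketch of such an adjustment, as a hypothetical constant-power stereo panning driven by the longitude of a sound source relative to the current camera orientation (the patent does not specify a panning law):

```python
import math

def spatialize_gains(source_lon_deg, camera_lon_deg):
    # Constant-power panning: the relative longitude of the source sets the
    # pan position, and the left/right gains keep total power constant.
    rel = math.radians(source_lon_deg - camera_lon_deg)
    pan = math.sin(rel)                        # -1 (left) .. +1 (right)
    angle = (pan + 1.0) * math.pi / 4.0        # 0 .. pi/2
    return (math.cos(angle), math.sin(angle))  # (left gain, right gain)
```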
- the invention also relates to a method of displaying at least one output video stream in a display space of at least one screen.
- a display method illustrated in FIG. 7 comprises at least one implementation step E101 of the method of generating the output video stream as described previously.
- the display method comprises a display step E102 of said output video stream generated in the display space of said at least one screen.
- each screen has a plurality of display spaces, each displaying (E102) a corresponding output video stream.
- the output video streams are formed from the same wide field video stream according to a different field of view and / or a different projection and / or a different effect. This makes it possible to propose different possible immersions to a user.
- different effect is meant the application of a filter of different effect.
- the screen comprises two display spaces each respectively for a left eye and a right eye of an observer.
- This allows in particular to offer immersive content with three-dimensional display or not.
- each eye perceives an output video stream associated with a viewing space that is adapted to it.
- the screen may then comprise a divided display matrix so as to delimit the two display spaces.
- the display method makes it possible to display a first output video stream in a display space of a first screen and a second output video stream in a display space of a second screen; the first stream is then intended to be perceived, for example, by a left eye of the observer while the second stream is intended to be perceived by a right eye of the observer.
- the screen(s) can be included in an immersive headset (or virtual reality headset), also known in the field as a "VR headset".
- the immersive helmet may comprise a support configured to be worn by an observer.
- This support also comprises means for receiving a portable multimedia device, in particular a multimedia tablet or a multimedia telephone, also known in the field by the term smartphone.
- the screen of the portable multimedia device, when mounted on the support, allows the simultaneous display of two output video streams, each intended to be perceived by a corresponding eye of the observer.
- the portable multimedia device may comprise a dedicated application for implementing the method of generating the output video stream.
- the display method may allow a plurality of separate output videos to be displayed (in particular simultaneously), at least two output videos of the plurality being derived from different wide field video streams.
- the output videos of a plurality of output videos may be displayed in display spaces on the same screen, or in different screen display spaces.
- Each of the output videos of the plurality of output videos can be generated by an implementation of the generation method.
- the invention also relates to a device for managing at least one output video stream.
- this device 1000 comprises at least one element 1001 configured so as to implement the method of generating an output video stream as described.
- this element 1001 comprises a CPU, a GPU and a memory containing instructions for implementing the method of generating the video stream by the CPU and the GPU.
- the device comprises a man-machine interface 1002 comprising at least one display screen 1003 intended to display the output video stream in at least one display space 1004 of said at least one screen 1003.
- a human-machine interface is configured to allow an operator to interact with the device as part of an immersive experience rendering at least a portion of a wide field video stream.
- the human-machine interface 1002 comprises at least one of the following elements: an element 1005 for selecting a wide field video stream in a storage space (in particular from a list of available wide field video streams), an element 1006 for defining the desired field of view (including a display space for viewing the entire wide field video stream), an element 1007 for defining a desired projection (in particular from a predefined list of projections), an element 1008 for selecting an effect to be applied (in particular from a predefined list of effects), an element 1009 making it possible to move temporally within the wide field video stream so as to render the output video stream at a specific time, and an element 1010 for controlling the soundtrack of the output video stream.
- the storage space may be local to the device (i.e., in a device memory), or remote (e.g. a remote server accessible via a communication network such as the Internet).
- the wide field video stream can be downloaded or broadcast, especially in real time, from the remote storage space.
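As an illustrative sketch (not part of the patent), the interface elements 1005 to 1010 described above could be grouped into a single state object; all field names and default values here are assumptions of this sketch:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InterfaceState:
    """Illustrative grouping of the interface elements 1005-1010
    described above; names and defaults are assumptions."""
    stream_id: str = ""              # element 1005: selected wide field video stream
    fov_degrees: float = 90.0        # element 1006: desired field of view
    projection: str = "rectilinear"  # element 1007: desired projection from a predefined list
    effect: Optional[str] = None     # element 1008: effect to be applied, if any
    playhead_s: float = 0.0          # element 1009: temporal position in the stream (seconds)
    volume: float = 1.0              # element 1010: soundtrack level

state = InterfaceState(stream_id="demo_stream", fov_degrees=120.0)
```

Such a state object would be updated by the interface elements and consumed by the generation method each time an output frame is produced.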
- the human-machine interface comprises at least one element for modifying the field of view, in particular chosen from: a touch screen, an accelerometer, a gyroscope, a magnetometer, a display space of the wide field video stream from which it is possible to choose the desired field of view, a pointing device (for example a computer mouse), or a gesture interface with motion detection.
- the gesture interface may correspond to sensors placed on a person, or to one or more cameras filming a person and analyzing their movements.
- the man-machine interface may comprise an immersive headset comprising, for example, a gyroscope, so as to allow an observer wearing said headset to move at least his gaze within a virtual reality by interpretation of the gyroscope signals.
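The gyroscope-driven modification of the field of view mentioned above can be sketched as a simple integration of angular rates into a gaze direction; the function and its conventions (yaw wrapping around the full circle, pitch clamped at the poles) are illustrative assumptions, not the patent's method:

```python
import math

def update_view_direction(yaw, pitch, rate_yaw, rate_pitch, dt):
    """Integrate gyroscope angular rates (rad/s) over dt seconds to move
    the observer's gaze within the wide field video (illustrative sketch)."""
    yaw = (yaw + rate_yaw * dt) % (2 * math.pi)  # yaw wraps around the full circle
    pitch = max(-math.pi / 2,
                min(math.pi / 2, pitch + rate_pitch * dt))  # clamp pitch at the poles
    return yaw, pitch
```

The resulting (yaw, pitch) pair would define the desired field of view used to extract the output video stream from the wide field stream.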
- each screen comprises a plurality of display spaces, a separate output video stream being played in each display space, each played output video stream resulting from a separate implementation of the method of generating the output video stream, in particular from the same wide field video stream or from different wide field video streams.
- a screen of the device may be associated with a device housing that includes the elements necessary for the generation of each output video stream.
- the housing can then form a touch tablet.
- another screen of the device (for example a television) can be remote from the housing so as to display the output video stream transmitted to it by the housing via a communication means, in particular wireless.
- the invention also relates to a system comprising a compiler and a memory provided with a software development kit for creating an application for an operating system.
- the development kit includes at least one element configured to implement the method of generating the video stream within the application when it is executed after being compiled by the compiler.
- the development kit may include an element configured to allow the interaction of the application with all or part of the elements of the management device described above.
- the method of generating an output video stream may be implemented a plurality of times so as to allow the display of a video in an immersive dome having a plurality of display panels.
- each output video stream is displayed on a single panel of the dome associated with it.
- the dome is such that the entire wide field video stream is reproduced when all the dome display panels are taken together.
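The partitioning of the wide field among the dome's display panels can be sketched as contiguous yaw intervals whose concatenation covers the full horizontal field; this sketch and its names are assumptions, not part of the patent:

```python
def dome_panel_fields(num_panels, total_yaw=360.0):
    """Split the wide field's horizontal coverage into contiguous yaw
    intervals in degrees, one per dome display panel (illustrative sketch)."""
    step = total_yaw / num_panels
    return [(i * step, (i + 1) * step) for i in range(num_panels)]

# four panels, each covering 90 degrees of the wide field
panels = dome_panel_fields(4)
```

One implementation of the generation method would then produce the output stream for each interval, so that the concatenated panels reproduce the entire wide field video stream.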
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1353565A FR3004881B1 (en) | 2013-04-19 | 2013-04-19 | METHOD FOR GENERATING AN OUTPUT VIDEO STREAM FROM A WIDE FIELD VIDEO STREAM |
PCT/EP2014/058008 WO2014170482A1 (en) | 2013-04-19 | 2014-04-18 | Method for generating an output video stream from a wide-field video stream |
Publications (1)
Publication Number | Publication Date |
---|---|
EP2987319A1 true EP2987319A1 (en) | 2016-02-24 |
Family
ID=48782410
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP14719707.3A Ceased EP2987319A1 (en) | 2013-04-19 | 2014-04-18 | Method for generating an output video stream from a wide-field video stream |
Country Status (4)
Country | Link |
---|---|
US (1) | US10129470B2 (en) |
EP (1) | EP2987319A1 (en) |
FR (1) | FR3004881B1 (en) |
WO (1) | WO2014170482A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3249928A1 (en) * | 2016-05-23 | 2017-11-29 | Thomson Licensing | Method, apparatus and stream of formatting an immersive video for legacy and immersive rendering devices |
GB2550587B (en) * | 2016-05-23 | 2020-05-20 | Canon Kk | Method, device, and computer program for adaptive streaming of virtual reality media content |
US20180307352A1 (en) * | 2017-04-25 | 2018-10-25 | Gopro, Inc. | Systems and methods for generating custom views of videos |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5260779A (en) | 1992-02-21 | 1993-11-09 | Control Automation, Inc. | Method and apparatus for inspecting a printed circuit board |
KR940017747A (en) * | 1992-12-29 | 1994-07-27 | 에프. 제이. 스미트 | Image processing device |
FR2700938B1 (en) | 1993-01-29 | 1995-04-28 | Centre Nat Rech Scient | Method and device for analyzing the movement of the eye. |
EP0650299B1 (en) * | 1993-10-20 | 1998-07-22 | Laboratoires D'electronique Philips S.A.S. | Method of processing luminance levels in a composite image and image processing system applying this method |
FR2714503A1 (en) * | 1993-12-29 | 1995-06-30 | Philips Laboratoire Electroniq | Image processing method and device for constructing from a source image a target image with change of perspective. |
US6578962B1 (en) | 2001-04-27 | 2003-06-17 | International Business Machines Corporation | Calibration-free eye gaze tracking |
US7057663B1 (en) | 2001-05-17 | 2006-06-06 | Be Here Corporation | Audio synchronization pulse for multi-camera capture systems |
US7336299B2 (en) * | 2003-07-03 | 2008-02-26 | Physical Optics Corporation | Panoramic video system with real-time distortion-free imaging |
CN101833968B (en) | 2003-10-10 | 2012-06-27 | 夏普株式会社 | Content reproduction device and method |
US8723951B2 (en) * | 2005-11-23 | 2014-05-13 | Grandeye, Ltd. | Interactive wide-angle video server |
US8572382B2 (en) | 2006-05-15 | 2013-10-29 | Telecom Italia S.P.A. | Out-of band authentication method and system for communication over a data network |
US7661121B2 (en) | 2006-06-22 | 2010-02-09 | Tivo, Inc. | In-band data recognition and synchronization system |
KR101574339B1 (en) | 2008-04-28 | 2015-12-03 | 엘지전자 주식회사 | Method and apparatus for synchronizing a data between a mobile communication terminal and a TV |
JP5927795B2 (en) | 2011-07-22 | 2016-06-01 | ソニー株式会社 | Stereoscopic imaging system, recording control method, stereoscopic video playback system, and playback control method |
US20130127984A1 (en) * | 2011-11-11 | 2013-05-23 | Tudor Alexandru GRECU | System and Method for Fast Tracking and Visualisation of Video and Augmenting Content for Mobile Devices |
KR102141114B1 (en) | 2013-07-31 | 2020-08-04 | 삼성전자주식회사 | Method and appratus of time synchornization for device-to-device communications |
US20150142765A1 (en) | 2013-11-17 | 2015-05-21 | Zhen-Chao HONG | System and method for enabling remote file access via a reference file stored at a local device that references the content of the file |
US9473576B2 (en) | 2014-04-07 | 2016-10-18 | Palo Alto Research Center Incorporated | Service discovery using collection synchronization with exact names |
US9754002B2 (en) | 2014-10-07 | 2017-09-05 | Excalibur Ip, Llc | Method and system for providing a synchronization service |
- 2013-04-19 FR FR1353565A patent/FR3004881B1/en active Active
- 2014-04-18 EP EP14719707.3A patent/EP2987319A1/en not_active Ceased
- 2014-04-18 WO PCT/EP2014/058008 patent/WO2014170482A1/en active Application Filing
- 2015-10-19 US US14/887,122 patent/US10129470B2/en active Active
Non-Patent Citations (2)
Title |
---|
None * |
See also references of WO2014170482A1 * |
Also Published As
Publication number | Publication date |
---|---|
WO2014170482A1 (en) | 2014-10-23 |
FR3004881B1 (en) | 2015-04-17 |
US20160112635A1 (en) | 2016-04-21 |
FR3004881A1 (en) | 2014-10-24 |
US20180255241A9 (en) | 2018-09-06 |
US10129470B2 (en) | 2018-11-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10846888B2 (en) | Systems and methods for generating and transmitting image sequences based on sampled color information | |
US10127632B1 (en) | Display and update of panoramic image montages | |
US10659759B2 (en) | Selective culling of multi-dimensional data sets | |
EP3337158A1 (en) | Method and device for determining points of interest in an immersive content | |
EP1227442B1 (en) | 2D image processing applied to 3D objects | |
US10049490B2 (en) | Generating virtual shadows for displayable elements | |
EP2556660A1 (en) | A method of real-time cropping of a real entity recorded in a video sequence | |
KR20220051376A (en) | 3D Data Generation in Messaging Systems | |
CN112868224B (en) | Method, apparatus and storage medium for capturing and editing dynamic depth image | |
US9594488B2 (en) | Interactive display of high dynamic range images | |
KR102612529B1 (en) | Neural blending for new view synthesis | |
JP7101269B2 (en) | Pose correction | |
CN114175097A (en) | Generating potential texture proxies for object class modeling | |
CN112105983B (en) | Enhanced visual ability | |
CN114631127A (en) | Synthesis of small samples of speaking heads | |
EP3571670A1 (en) | Mixed reality object rendering | |
EP2987319A1 (en) | Method for generating an output video stream from a wide-field video stream | |
US11544894B2 (en) | Latency-resilient cloud rendering | |
US20230106679A1 (en) | Image Processing Systems and Methods | |
US20220139026A1 (en) | Latency-Resilient Cloud Rendering | |
WO2020128206A1 (en) | Method for interaction of a user with a virtual reality environment | |
US20230077410A1 (en) | Multi-View Video Codec | |
Thatte | Cinematic virtual reality with head-motion parallax | |
Alain et al. | Introduction to immersive video technologies | |
FR3013492A1 (en) | METHOD USING 3D GEOMETRY DATA FOR PRESENTATION AND CONTROL OF VIRTUAL REALITY IMAGE IN 3D SPACE |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
17P | Request for examination filed |
Effective date: 20151102 |
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
AX | Request for extension of the european patent |
Extension state: BA ME |
DAX | Request for extension of the european patent (deleted) | ||
17Q | First examination report despatched |
Effective date: 20160803 |
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: GOPRO, INC. |
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R003 |
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
18R | Application refused |
Effective date: 20191108 |