EP3834421A2 - Méthode et dispositif de diffusion de vidéo à 360 degrés (Method and device for streaming 360-degree video) - Google Patents
- Publication number
- EP3834421A2 (application EP19835685.9A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- display
- tiles
- video
- head
- display window
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/154—Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/36—Scalability techniques involving formatting the layers as a function of picture distortion after decoding, e.g. signal-to-noise [SNR] scalability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/37—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability with arrangements for assigning different transmission priorities to video input data or to video coded data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
Definitions
- The invention is in the field of virtual reality, and more particularly that of streaming systems for 360-degree video.
- At any given time, the user watches only part of the entire spherical video, called the video sphere.
- This part, called the display window, depends on the orientation of the user's head and on the size of the screen of his terminal, called a head-mounted display, or HMD (Head-Mounted Device).
- The user's terminal must therefore adapt, at all times, the audiovisual content in the display window to the head movements of the user.
- A first 360-degree video streaming technique consists in transmitting the entire content of the video sphere to the user's terminal.
- The user's terminal then locally extracts from the video sphere the part to be inserted in the display window of the user's headset.
- This technique has the disadvantage of transporting far more data than is actually used by the user's terminal: the display window represents only about 15% of the entire video sphere.
- This consumption of network resources is a major problem for 360-degree video, since streaming a complete video sphere can require between several tens and several hundreds of megabits per second.
- A second streaming technique aims to overcome the disadvantage of the first, namely to reduce the amount of data transported. This second technique comprises several operations.
- The first operations take place during the preparation of the video, before it is transported through the telecommunications network.
- The video sphere is first projected onto a two-dimensional (2D) plane. It is then spatially cut into several parts called tiles, for example forming a grid of the plane. Each tile is then encoded independently of the other tiles making up the video, so that each tile can be decoded independently of the others on the user's terminal. More precisely, each tile is encoded in several versions corresponding to different quality levels or different resolutions: for example, at least one high quality version and at least one low quality version.
- The video is also cut temporally into segments (or "chunks"), one per time interval.
- The duration of the time intervals (and therefore of the segments) is fixed for the entire video; its order of magnitude is typically one or more seconds.
- Each segment is itself composed of successive images, whose number depends on the frame rate of the video, for example 60 frames per second.
- A tile therefore designates a spatial and temporal subdivision of the video sphere.
- A tile represents a few moments (1 second, for example) of the video on a partial surface of the sphere.
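The spatio-temporal subdivision just described can be captured in a small data model. The sketch below is illustrative only: the class and function names are invented, while the 24-tile grid, 1-second segments and 60 fps figure come from the example values given in the text.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TileRef:
    """One spatio-temporal subdivision of the video sphere."""
    segment: int   # temporal index: which 1-second chunk of the video
    tile: int      # spatial index on the projected 2D grid, 1..24
    quality: str   # "high" or "low"

def segment_index(t_seconds: float, segment_duration: float = 1.0) -> int:
    """Temporal subdivision: which segment covers instant t."""
    return int(t_seconds // segment_duration)

def frames_per_segment(frame_rate: int = 60, segment_duration: float = 1.0) -> int:
    """Number of successive images composing one segment."""
    return int(frame_rate * segment_duration)
```

With 1-second segments at 60 fps, `segment_index(2.4)` is `2` and each segment holds 60 images.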
- The following operations are performed at the user's terminal. They must be carried out for each segment of the video sphere, during the time interval preceding the display of that segment.
- A first operation consists in predicting the orientation that the user's head will give to the head-mounted display during the next time interval, that is to say, predicting the correct display window for the segment.
- The second operation consists in requesting and receiving video content for this segment and this display window.
- The display window in the head-mounted display is generally larger than a tile; displaying the display window therefore requires assembling, at the user terminal, a set of adjacent tiles.
- The terminal must therefore request and receive, in high quality, the tiles which cover the predicted display window for the next time interval.
- The terminal must also request and receive, in low quality, the other tiles (those which are outside the predicted display window but which are likely to end up in it if the prediction is not correct). These low-quality tiles make it possible to maintain the display of the video, if necessary in low quality, when the user makes very marked and unforeseen head movements (i.e. outside the predicted display window).
- Displaying low quality in all or part of the display window certainly degrades the quality of experience for the user, but it is preferable to displaying a still image or a black screen.
- These low-quality tiles also allow this second technique to transport less data through the network than the first technique described above.
- The disadvantage of this second technique is a degradation of the quality of experience felt by the user when the prediction is not perfect, as is often the case, for example when he makes a very marked head movement outside the predicted display window.
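To fix orders of magnitude, the bandwidth saved by the second technique can be estimated from the figures above (a display window covering about 15% of the sphere). The `low_q_ratio` parameter, i.e. the fraction of the high-quality bitrate that a low-quality tile consumes, is an assumed illustrative value, not a figure from the text.

```python
def tiled_bitrate_estimate(full_sphere_mbps: float,
                           window_fraction: float = 0.15,
                           low_q_ratio: float = 0.1) -> float:
    """Rough bitrate needed by the tiled technique, in Mbps:
    high quality inside the predicted window, low quality elsewhere."""
    high = full_sphere_mbps * window_fraction
    low = full_sphere_mbps * (1.0 - window_fraction) * low_q_ratio
    return high + low

# A 100 Mbps sphere would need roughly 23.5 Mbps under these assumptions,
# versus 100 Mbps for the first technique.
estimate = tiled_bitrate_estimate(100)
```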
- One of the aims of the invention is to remedy these drawbacks of the state of the art.
- The invention improves the situation by means of a method of obtaining video segments of a video sphere for display in a head-mounted display connected to a video server, the video segments being spatially divided into a plurality of tiles encodable at at least two distinct quality levels, including a high quality level and a low quality level, the part of the video sphere intended to be displayed at a display instant being called the display window, the method comprising, before the display instant, at least two iterations of the following sequence of steps:
- the method further comprising the following steps:
- The proposed method makes this compromise unnecessary. Compared with the prior art, where a single estimate is made as early as possible for the next display instant, the proposed method improves the estimate by using at least a second estimate, while guaranteeing the reception of all the necessary tiles. Indeed, if after the second estimate there is not enough time left to request all the necessary tiles again, the first burst of requests guarantees the reception of the missing tiles, even if they are not necessarily all at the quality level corresponding to the second estimate.
- The method allows more than two iterations of the estimation phase, within the limit set by the time remaining before the next display instant and by other parameters such as the bandwidth of the connection between the head-mounted display and the video server, the computing power of the head-mounted display, etc.
- The duration between two display instants is that of a segment. It is understood that the expression "display instant" designates the instant at which viewing of a segment starts.
- the request further comprises an indication of a priority level associated with the tile.
- Thanks to this aspect, it is possible, for example, to prioritize requests for high quality tiles over requests for low quality tiles, or to prioritize requests for which no response has yet been received over requests for which at least one response has already been received.
- Thus, the probability is increased that all the necessary tiles will have been received at the display instant. In other words, if some tiles are still missing at the display instant, they will not be the most important ones for the user, and bandwidth between the head-mounted display and the video server will have been saved.
- the request is a request for delivery of the encoded tile corresponding to the identified tile.
- The request is a request for cancellation of delivery of the encoded tile corresponding to the identified tile, followed by a new request for delivery of the encoded tile, comprising the new quality level or the new priority level associated with the identified tile.
- If the iteration is not the first and the quality level associated with the identified tile has decreased compared to the previous iteration, no new request is issued if the tile has already been received.
- The connection between the head-mounted display and the video server includes a separate stream per identified tile.
- The connection between the head-mounted display and the video server uses the HTTP/2 protocol.
- HTTP/2 ("Hypertext Transfer Protocol", version 2 of the hypertext transfer protocol, described in the normative document RFC 7540) is a protocol that manages several streams within the same connection, and in particular allows the cancellation of streams (tiles) during delivery, for example in order to correct a characteristic or the prioritization of the stream (of the tile), without interrupting the connection. It is therefore particularly suitable for implementing the proposed method.
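The one-stream-per-tile bookkeeping that HTTP/2 makes possible can be sketched as follows. This is a toy model, not a real HTTP/2 implementation: the class and method names are invented, and `cancel` merely records what an RST_STREAM frame would achieve on a real connection.

```python
class TileStreamBook:
    """Toy bookkeeping for one HTTP/2 connection carrying one stream per tile."""

    def __init__(self):
        self._next_id = 1   # client-initiated HTTP/2 streams use odd identifiers
        self.streams = {}   # stream id -> request state

    def request(self, tile, quality, weight):
        """Open a new stream carrying one tile delivery request."""
        sid = self._next_id
        self._next_id += 2  # next odd client stream id
        self.streams[sid] = {"tile": tile, "quality": quality,
                             "weight": weight, "open": True}
        return sid

    def cancel(self, sid):
        """Abort one delivery without closing the connection (~ RST_STREAM)."""
        self.streams[sid]["open"] = False

    def reissue(self, sid, quality, weight):
        """Cancel a pending delivery and re-request the same tile with a new
        quality or priority, as in the later estimation iterations."""
        self.cancel(sid)
        tile = self.streams[sid]["tile"]
        return self.request(tile, quality, weight)
```

For example, a high-quality request for a tile can be cancelled and reissued at low quality while the other streams of the connection continue undisturbed.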
- the invention also relates to a device for obtaining video segments from a video sphere for display in a head-mounted display connected to a video server, the video segments being spatially divided into a plurality of tiles encodable in at least two distinct quality levels, a high quality level and a low quality level, a part of the video sphere intended to be displayed at a display instant being called display window, the device comprising a receiver, a transmitter, a decoder, a processor and a memory coupled to the processor with instructions intended to be executed by the processor for:
- This device, capable of implementing the obtaining method just described in all of its embodiments, is intended to be implemented in a user terminal such as, for example, a head-mounted display.
- the invention also relates to a head-mounted display comprising a device according to that just described, a position and movement sensor, and a screen.
- By "head-mounted display" is meant any user terminal allowing a user to at least partially view a video sphere.
- the invention also relates to a computer program comprising instructions for implementing the steps of the method for obtaining data from a video sphere for display in a head-mounted display connected to a video server, which has just been described, when this program is executed by a processor.
- the invention also relates to an information medium readable by a device included in a head-mounted display, and comprising instructions of a computer program as mentioned above.
- The program mentioned above can use any programming language, and be in the form of source code, object code, or intermediate code between source code and object code, such as a partially compiled form, or in any other desirable form.
- Such a medium may include a storage means, such as a ROM, for example a CD-ROM or a microelectronic-circuit ROM, or a magnetic recording means.
- Such storage means can for example be a hard disk, a flash memory, etc.
- an information medium can be a transmissible medium such as an electrical or optical signal, which can be routed via an electrical or optical cable, by radio or by other means.
- a program according to the invention can in particular be downloaded from a network of the Internet type.
- an information medium can be an integrated circuit in which a program is incorporated, the circuit being adapted to execute or to be used in the execution of the process in question.
- FIG. 1 shows an example of cutting a video sphere into tiles, according to a particular embodiment of the invention.
- FIG. 2 schematically presents an example of sequencing of the steps of the method for obtaining video segments, according to a particular embodiment of the invention.
- FIG. 3 shows an example of the structure of a device for obtaining video segments, according to a particular aspect of the invention.
- The embodiment presented below uses a subdivision of the video sphere into 24 tiles, video segments lasting 1 second, two iterations of display-window prediction of 500 ms each for each interval between segments, and the HTTP/2 protocol for the connection between the head-mounted display and the video server; these choices are only an indicative, non-limiting example of an embodiment of the invention.
- The term "video sphere" is not limited to a sphere, but designates any video of which only a part can be displayed at any time, the displayed part depending on the real or virtual position of the display terminal, or on its orientation, that is to say the direction it points in, relative to the complete video.
- The examples developed below involve a head-mounted display, but the invention works with any terminal allowing a user to view a "video sphere".
- Figure 1 shows an example of cutting a video sphere into tiles, according to a particular embodiment of the invention.
- the video sphere is spatially divided into 24 rectangles.
- Each of the rectangles, at a given display instant corresponds to a spatial subdivision of a video segment, also called a tile.
- the rectangles are called tiles in the following.
- the tiles are numbered T1 to T24. For the sake of clarity, only the tiles T1, T2, T23 and T24 are indicated, the locations of the other tiles can easily be deduced.
- The tiles can be encoded (compressed) independently of each other, at different quality levels, for example using an HEVC ("High Efficiency Video Coding") encoder on the video server side and a corresponding decoder on the client side, i.e. on the head-mounted display side.
- At any time, only part of the video sphere, called the display window, can be viewed by the user of the head-mounted display, which makes it unnecessary to transmit the complete set of tiles forming the sphere.
- The exact determination of the display window is a prediction problem, for which several solutions are known. These solutions rely on dividing the video sphere into different regions, according to their probability of being in the display window during the next display period of a video segment in the head-mounted display.
- FIG. 1 uses regions numbered 1 to 4, and represents a prediction of the display window before a given display instant:
- Region 1 represents an estimate of the display window; parts of this area have a very high probability of being included in the display window,
- Region 2 represents the extension zone of the display window, corresponding to slight natural head movements of the user; parts of this area have a high probability of being included in the display window,
- Region 3 represents the area of the immediate background, corresponding to larger movements when the user turns his head; parts of this area have a medium probability of being included in the display window,
- Region 4 represents the area of the distant background, corresponding approximately to the half of the sphere opposite the display window; parts of this area have a low probability of being included in the display window.
- Region 1 touches 6 tiles: T8 to T10, and T14 to T16.
- Region 2, although slightly larger in surface, touches the same 6 tiles; no tiles are to be added compared to region 1.
- For region 3, 10 tiles are to be added: tiles T2 to T5, T11, T17, and T20 to T23.
- For region 4, tiles T1, T6, T7, T12, T13, T18, T19 and T24 are to be added.
- For example, region 2 is configured to be 10% larger than region 1 along the horizontal axis, and 5% larger along the vertical axis.
- Region 4 has no external limits.
- In the following, region 2 is the smallest region used; it also includes the tiles of region 1.
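Assuming the 6×4 grid layout implied by the T1..T24 numbering of FIG. 1 (T1-T6 on the first row, T7-T12 on the second, and so on), the tiles touched by each region can be computed mechanically; the sketch below reproduces the tile sets listed above, with region bounds chosen to match them.

```python
COLS, ROWS = 6, 4  # assumed grid layout matching the T1..T24 numbering

def tiles_in(col_lo, col_hi, row_lo, row_hi):
    """Tile numbers touched by a rectangle of grid cells (0-based, inclusive)."""
    return {r * COLS + c + 1
            for r in range(row_lo, row_hi + 1)
            for c in range(col_lo, col_hi + 1)}

region1 = tiles_in(1, 3, 1, 2)   # estimated display window
region3 = tiles_in(1, 4, 0, 3)   # up to the immediate background
region4 = tiles_in(0, 5, 0, 3)   # the whole sphere
```

Region 1 touches T8-T10 and T14-T16; region 3 adds T2-T5, T11, T17 and T20-T23; region 4 adds the remaining border tiles.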
- FIG. 2 schematically shows an example of sequencing the steps of the process for obtaining video segments, according to a particular embodiment of the invention.
- The client requests the tiles of region 1 at a high quality level (greater amount of data per tile), and the tiles of region 3 at a low quality level (smaller amount of data per tile).
- The client can additionally request the tiles of region 1 with a higher priority than those of region 3. If the bandwidth is insufficient for all the tiles, those of region 1 will thus be received in priority.
- Viewing a 360-degree video is done segment by segment, the time interval between two segment displays being fixed, for example 1 second.
- The method is described below in detail for displaying the tiles of the display window at a display instant and, in parallel with the display, for obtaining the tiles for the next display instant, which is 1 second later. Viewing and obtaining must therefore be repeated as many times as there are time intervals (i.e. seconds) in the full video.
- Beforehand, the client must obtain from a server information describing the structure of the content to be retrieved, during a step G1.
- This can be, for example, an MPD file ("Media Presentation Description").
- This file tells the client how the video sphere is spatially subdivided (number of tiles, position in the video sphere), what levels of encoding quality are available for a tile, etc.
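A real MPD is an XML document defined by MPEG-DASH; for the purposes of a sketch, a simplified dictionary can stand in for the same information. The URL template and field names below are invented for illustration, not taken from the MPD schema.

```python
# Simplified stand-in for the content-description file obtained at step G1.
manifest = {
    "tile_grid": {"cols": 6, "rows": 4},      # spatial subdivision: 24 tiles
    "segment_duration_s": 1.0,                # temporal subdivision
    "qualities": ["low", "high"],             # available encoding quality levels
    "url_template": "/sphere/{segment}/{tile}/{quality}.mp4",  # hypothetical scheme
}

def tile_count(m):
    return m["tile_grid"]["cols"] * m["tile_grid"]["rows"]

def tile_url(m, segment, tile, quality):
    """Address of one encoded tile version, per the (invented) template."""
    if quality not in m["qualities"]:
        raise ValueError("unknown quality level: " + quality)
    return m["url_template"].format(segment=segment, tile=tile, quality=quality)
```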
- During a step G2, the client processes the information extracted from the file and prepares the display of the very first display window, called the current window. For example, the client issues requests in separate HTTP/2 streams for each of the tiles it needs for this display window.
- The next step G3 comprises steps E1 to E5, and is repeated for each display instant, i.e. every second if the time interval between two display windows is 1 second, as in our example.
- During step E1, the client displays the current window, that is to say the tiles touching the current display window are "played" for the user of the head-mounted display (or "viewed").
- During step E2, the client estimates the next display window, and issues requests for the tiles making up this next display window.
- Step E2 comprises steps F1 to F3, repeated several times. For example, a first iteration of steps F1 to F3 is executed at the start of the current time interval, then a second iteration is executed 500 ms later, halfway through the interval. For the sake of simplicity, the number of iterations is here limited to 2, but a larger number is possible.
- The duration of each iteration is limited in our example to 500 ms, but any other division of the time interval is possible, provided the minimum duration necessary for an iteration is respected; this minimum depends on factors such as the client's computing power, the volume of video data it must receive, the effective bandwidth between the client and the video server, etc.
- During step F1, the client estimates the position of the display window most likely to be observed at the end of the current interval. Any prediction technique can be used, based for example on the instantaneous position of the head-mounted display, and/or on its trajectory, and/or on information relating to elements of content of particular interest located in certain places of the video sphere in the segments played or still to be played, and/or on other types of information.
- When estimating the position of the display window, the limits of each of the regions taken into account (regions 2 and 3) are also estimated.
- During step F2, the client identifies the tiles of each of the regions taken into account, and associates an adequate quality level with each of the tiles. For example, the high quality level is associated with the tiles touching region 2, and the low quality level with the tiles touching region 3. As this is the first iteration, no tile for the next display instant has yet been requested by the client.
- During step F3, the client then sends the video server as many tile delivery requests as there are identified tiles.
- The client can include a weight in each of its requests, proportional to the priority that the client wishes the server to give to the delivery of the tile requested in the request. For a tile touching region 2, a high weight is included in the request; on the contrary, for a tile touching region 3, a low weight is included.
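The first-iteration bundle of requests, with quality levels and weights assigned by region, might be built as follows. This is a sketch: the weight values 200 and 50 are illustrative choices within the HTTP/2 weight range (1..256), not values prescribed by the method.

```python
def build_requests(region2_tiles, region3_tiles, hi_weight=200, lo_weight=50):
    """First-iteration requests: one per identified tile, with high quality and
    a high weight for the extended window (region 2), and low quality with a
    low weight for the immediate background (tiles only in region 3)."""
    reqs = [{"tile": t, "quality": "high", "weight": hi_weight}
            for t in sorted(region2_tiles)]
    reqs += [{"tile": t, "quality": "low", "weight": lo_weight}
             for t in sorted(region3_tiles - region2_tiles)]
    return reqs
```

If the bandwidth is insufficient, a server honouring these weights delivers the region-2 tiles first.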
- During the second iteration, step F1 is repeated identically, 500 ms later than the first time, in our exemplary embodiment with 2 iterations and 1 second per time interval.
- The new estimate of the display window is likely to be better because it is made closer to the end of the interval, that is to say less time before the head-mounted display reaches the position it will have at the next display instant.
- step F2 is repeated identically, with a potentially different result.
- The client identifies the tiles for each of the regions, which are this time determined based on the new estimate.
- During step F3 of the second iteration, requests are also sent to the video server, but differently from the first iteration. Indeed, all the necessary tiles have already been requested once. However, the new estimate of the display window can make some of the quality levels associated with the already-requested tiles unsuitable.
- If the quality level associated with a tile has decreased, the request for delivery of this tile is cancelled by issuing a request to cancel its delivery, then a new request for delivery of this tile is issued with the low quality level. If the response to the request from the previous iteration, for the tile at the high quality level, has already been received, the client nevertheless keeps this tile rather than requesting the delivery of the same tile at a lower quality, in order to preserve the bandwidth between the head-mounted display and the video server.
- Conversely, if the quality level associated with a tile has increased, the request for delivery of this tile is cancelled by issuing a request to cancel its delivery, then a new request for delivery of this tile is issued with the high quality level.
- If the low-quality tile has already been received, the client can however decide to be content with it, in order to preserve the bandwidth between the head-mounted display and the video server.
- A tile that has not changed region compared with the previous iteration does not give rise to a new request, unless the client observes a delay in the delivery of certain important tiles, that is to say, typically, tiles of region 2.
- In that case, the client can decide to revise the weight associated with a tile, in order to speed up or slow down its delivery by the server, relative to the others.
- A request for cancellation of delivery of the tile is then issued, followed by a request for delivery of this tile with the revised weight.
- the steps F1 to F3 of the following iterations are identical to those of the second iteration described above.
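The per-tile decision of the second and later iterations (do nothing, keep what was received, or cancel and reissue) can be summarized in a small function. The function name and return labels are illustrative; the optional bandwidth-saving choice of keeping an already-received low-quality tile despite an increased level is deliberately not modelled here.

```python
def second_iteration_action(old_quality, new_quality, already_received):
    """Decide what to do for one already-requested tile once the display-window
    estimate has been refined (two quality levels assumed: "high" and "low")."""
    if new_quality == old_quality:
        return "no_new_request"       # tile did not change region
    if new_quality == "low" and already_received:
        return "keep_received"        # keep the high-quality tile already delivered
    return "cancel_and_reissue"       # cancel the pending stream, re-request at new_quality
```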
- HTTP/2 allows the management of one stream per tile within the same connection between the head-mounted display and the video server. HTTP/2 also allows the cancellation of an ongoing request, as well as the indication, in a request, of the required quality level and of the desired priority level (using weights).
- During step E3, the client receives tiles from the video server, in response to the requests made during steps F3 of step E2. It should be noted that some of these responses can be received while step E2 is not yet complete.
- This step E3 is in fact composed of multiple sub-steps for receiving a tile.
- During step E4, the client determines the display window observed at the end of the current time interval. This window is determined by the actual instantaneous position of the head-mounted display, that is to say the position of the user's head, at the end of the time interval.
- During step E5, the client decodes the received tiles covering the observed display window, then combines these tiles to build a single video segment. Some tiles at the edge of the display window may be only partially included. Alternatively, the client can decode all the received tiles in order to build as much of the 360-degree video as possible, then extract the part necessary for the observed display window. To be able to build the complete 360-degree video, all the tiles of the video sphere must be received. For this, it suffices to replace region 3 of this example of implementation of the method with region 4 of FIG. 1, or to add region 4 as a third region, with, for example, an even lower quality level for region 4 than for region 3.
- The observed display window then becomes the current display window and the method returns to step E1, in order to process the next time interval. The set of steps E1 to E5 (that is to say step G3 in FIG. 2) is repeated until the last time interval of the 360-degree video.
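The overall per-segment loop (step G3) can be sketched with hypothetical callbacks standing in for the individual steps; the callback names are invented for the sketch.

```python
def play_video(n_segments, estimate, request, receive, display, iterations=2):
    """Skeleton of step G3: display the current segment while preparing the
    next one with several estimation iterations. `display` stands in for E1,
    `estimate` for F1/F2, `request` for F3 and `receive` for E3; E4/E5 happen
    implicitly before the next display call."""
    for seg in range(n_segments):
        display(seg)                         # E1: play the current window
        for it in range(iterations):         # E2: e.g. at 0 ms and 500 ms
            window = estimate(seg + 1, it)   # F1/F2: refine the predicted window
            request(seg + 1, window, it)     # F3: (re)issue tile requests
        receive(seg + 1)                     # E3: responses may overlap E2
```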
- Referring to FIG. 3, an example of the structure of a device for obtaining video segments is now presented, according to a particular aspect of the invention.
- The obtaining device 100 implements the method for obtaining video segments, various embodiments of which have just been described.
- Such a device 100 can be implemented in a HMD1 head-mounted display comprising a screen Scr and a position and movement sensor Pos.
- the device 100 comprises a transmitter 101, a receiver 102, a processing unit 130, equipped for example with a microprocessor mR, and controlled by a computer program 110, stored in a memory 120 and implementing the process for obtaining according to the invention.
- The transmitter and receiver can be wireless and use a protocol such as, for example, Wi-Fi, Bluetooth, 4G, etc.
- the device also includes a decoder 103 of audiovisual encoding format such as for example HEVC.
- the code instructions of the computer program 110 are for example loaded into a RAM memory, before being executed by the processor of the processing unit 130.
- Such a processing unit 130 is capable of, and configured for:
- Estimate the display window as a function of a prediction of the orientation that the head-mounted display is likely to have at the display instant, for example according to data relating to the head-mounted display transmitted by the sensor (Pos),
- processing unit 130 is also able to, and configured for:
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Controls And Circuits For Display Device (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1857459A FR3084980A1 (fr) | 2018-08-10 | 2018-08-10 | Methode et dispositif de diffusion de video a 360 degres |
PCT/FR2019/051918 WO2020030882A2 (fr) | 2018-08-10 | 2019-08-12 | Méthode et dispositif de diffusion de vidéo à 360 degrés |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3834421A2 true EP3834421A2 (fr) | 2021-06-16 |
Family
ID=67262344
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19835685.9A Pending EP3834421A2 (fr) | 2018-08-10 | 2019-08-12 | Méthode et dispositif de diffusion de vidéo à 360 degrés |
Country Status (4)
Country | Link |
---|---|
US (1) | US11490094B2 (fr) |
EP (1) | EP3834421A2 (fr) |
FR (1) | FR3084980A1 (fr) |
WO (1) | WO2020030882A2 (fr) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10419737B2 (en) * | 2015-04-15 | 2019-09-17 | Google Llc | Data structures and delivery methods for expediting virtual reality playback |
KR20190021229A (ko) * | 2016-05-26 | 2019-03-05 | 브이아이디 스케일, 인크. | 뷰포트 적응형 360도 비디오 전달의 방법 및 장치 |
WO2018083211A1 (fr) * | 2016-11-04 | 2018-05-11 | Koninklijke Kpn N.V. | Diffusion en continu d'une vidéo de réalité virtuelle |
EP3721417A1 (fr) * | 2017-12-22 | 2020-10-14 | Huawei Technologies Co., Ltd. | Vidéo de réalité virtuelle (vr) à 360° pour utilisateurs finaux distants |
EP4180396A2 (fr) * | 2018-08-27 | 2023-05-17 | ExxonMobil Technology and Engineering Company | Tamis moleculaires et procede de fabrication de tamis moleculaires |
US10826964B2 (en) * | 2018-09-05 | 2020-11-03 | At&T Intellectual Property I, L.P. | Priority-based tile transmission system and method for panoramic video streaming |
- 2018-08-10: FR application FR1857459A, patent FR3084980A1/fr, not active (withdrawn)
- 2019-08-12: WO application PCT/FR2019/051918, patent WO2020030882A2/fr, status unknown
- 2019-08-12: EP application EP19835685.9A, patent EP3834421A2/fr, active (pending)
- 2019-08-12: US application US17/267,415, patent US11490094B2/en, active
Also Published As
Publication number | Publication date |
---|---|
WO2020030882A3 (fr) | 2020-04-02 |
WO2020030882A2 (fr) | 2020-02-13 |
US20210297676A1 (en) | 2021-09-23 |
FR3084980A1 (fr) | 2020-02-14 |
US11490094B2 (en) | 2022-11-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3225027B1 (fr) | | Method for composing an intermediate video representation |
EP3449634B1 (fr) | | Method for contextual composition of an intermediate video representation |
EP2947888B1 (fr) | | Method for adaptive downloading of digital content for multiple screens |
FR2959636A1 (fr) | | Method for accessing a spatio-temporal part of a video sequence of images |
FR2988964A1 (fr) | | Technique for distributing immersive video content |
FR3013933A1 (fr) | | Adaptive streaming of multimedia content |
EP3780632A1 (fr) | | System for distributing audiovisual content |
EP3834421A2 (fr) | | Method and device for broadcasting 360-degree video |
WO2019220034A1 (fr) | | Management of adaptive progressive download of digital content within a playback terminal of a local communication network |
EP3840335B1 (fr) | | Reception of digital content in trick mode |
EP3378232B1 (fr) | | Method for processing encoded data, method for receiving encoded data, and associated devices and computer programs |
EP2819424A1 (fr) | | Method for improving the switching time between audiovisual programmes |
FR2923970A1 (fr) | | Method and device for forming, transferring and receiving transport packets encapsulating data representative of an image sequence |
FR3101503A1 (fr) | | Management of adaptive progressive download of digital content over a mobile network, with selection of a maximum permitted encoding bitrate based on a data bucket |
EP2536075B1 (fr) | | Method for a terminal to request access to digital content downloadable from a network |
WO2021209706A1 (fr) | | Managing access to digital content available via adaptive progressive download and encoded with a variable-bitrate encoding method, as a function of network load |
FR3103668A1 (fr) | | Management of adaptive progressive download of digital content over a mobile network, with determination of a maximum permitted encoding bitrate for a session based on a data bucket |
FR3093603A1 (fr) | | Method for accelerated navigation in digital content obtained by adaptive progressive download (HAS), and corresponding manager, media stream player and computer program |
WO2020234030A1 (fr) | | Rendering content in the background or as an overlay during HAS-type adaptive progressive download |
FR2845556A1 (fr) | | Adaptive and progressive scrambling of video streams |
FR3128084A1 (fr) | | Method for managing playback of multimedia content |
FR3093605A1 (fr) | | Method for accelerated navigation in digital content obtained by adaptive progressive download (HAS), and corresponding manager, media stream player and computer program |
FR3114719A1 (fr) | | Method for managing playback of digital content within a multimedia content player terminal connected to a rendering device |
FR3096210A1 (fr) | | Method for transmitting digital content, available in several versions from a content server, to a rendering terminal |
FR2834839A1 (fr) | | Method and device for managing transmissions of multimedia digital data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
20210226 | 17P | Request for examination filed | Effective date: 20210226 |
| AK | Designated contracting states | Kind code of ref document: A2; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| RAP3 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: ORANGE |
| DAV | Request for validation of the european patent (deleted) | |
| DAX | Request for extension of the european patent (deleted) | |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
20230314 | 17Q | First examination report despatched | Effective date: 20230314 |