EP2225882A2 - Verfahren zur kodierung eines skalierbaren videostreams für benutzer mit verschiedenen profilen - Google Patents

Verfahren zur kodierung eines skalierbaren videostreams für benutzer mit verschiedenen profilen

Info

Publication number
EP2225882A2
EP2225882A2 (application EP08864357A)
Authority
EP
European Patent Office
Prior art keywords
refinement
base layer
layer
users
video stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP08864357A
Other languages
English (en)
French (fr)
Inventor
Gilles Teniou
Ludovic Noblet
Christophe Daguet
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orange SA
Original Assignee
France Telecom SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by France Telecom SA filed Critical France Telecom SA
Publication of EP2225882A2

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234327 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by decomposing into layers, e.g. base layer and one or more enhancement layers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234318 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by decomposing into objects, e.g. MPEG-4 objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866 Management of end-user data
    • H04N21/25891 Management of end-user data being end-user preferences
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266 Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2662 Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities

Definitions

  • The present invention generally relates to the field of encoding and decoding images or video sequences of images. More specifically, the invention relates to techniques for encoding and decoding images according to a video stream organized in data layers, which make it possible to generate a so-called "scalable" video stream, scalable temporally, in quality and in spatial resolution.
  • The video stream to be coded is compressed according to a predictive and hierarchical layered scheme: the video stream is decomposed into a base layer and one or more refinement layers, each refinement layer being nested in a layer of the next higher level. Each refinement layer, combined with the information contained in the layers beneath it, makes it possible to improve the frequency of the images of the decoded stream, its spatial resolution and/or its quality.
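  • A minimal illustrative sketch of this layered organization is given below; the class and field names are hypothetical and do not reflect the actual SVC bitstream syntax, and the nesting of the example layers is chosen arbitrarily.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Layer:
    name: str                         # e.g. "CB", "CR1", "CR2"
    improves: str                     # frequency, spatial resolution or quality
    depends_on: Optional[str] = None  # layer used as reference for prediction

@dataclass
class ScalableStream:
    layers: dict = field(default_factory=dict)

    def add(self, layer: Layer) -> None:
        self.layers[layer.name] = layer

    def decoding_chain(self, target: str) -> list:
        """Layers needed to decode `target`, from the base layer upwards."""
        chain, current = [], self.layers[target]
        while current is not None:
            chain.append(current.name)
            current = self.layers.get(current.depends_on)
        return list(reversed(chain))

# A stream in the spirit of FIG. 1: base layer CB, refinement layers CR1, CR2
# (the dependency of CR2 on CR1 is an arbitrary choice for the example).
stream = ScalableStream()
stream.add(Layer("CB", improves="base content"))
stream.add(Layer("CR1", improves="spatial resolution", depends_on="CB"))
stream.add(Layer("CR2", improves="quality", depends_on="CR1"))
print(stream.decoding_chain("CR2"))  # ['CB', 'CR1', 'CR2']
```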
  • An example of a data stream compressed according to the SVC ("Scalable Video Coding") standard is shown in FIG. 1. It is composed of a base layer CB and two refinement layers CR1 and CR2 associated with the base layer CB.
  • The refinement layers CR1 and CR2 encode data making it possible to enhance the quality, the spatial resolution or the frequency of the images coded in the base layer CB, which is, for example, a stream encoded according to the AVC standard.
  • the SVC standard also makes it possible to segment these layers CB, CR1 and CR2 in temporal levels NT1, NT2 and NT3.
  • Such techniques allow terminals of different capacities, such as fixed personal computers, mobile phones or PDAs (Personal Digital Assistant), to receive the same content broadcast over a network. such as the Internet, and decode it according to their respective capabilities.
  • a network such as the Internet
  • A low-resolution mobile terminal will only decode the base layer of the content received over the transmission network, while larger fixed terminals will decode the base and refinement layers of this content, thus displaying the content in its maximum quality.
  • So-called "scalable" coding techniques thus allow bandwidth savings, since a single content is broadcast to terminals of different capacities, instead of one content per type of terminal.
  • Certain contents also require adaptation according to the profile of the users who receive them.
  • A content provider who wants to promote a video program often broadcasts this program to subscribers of the program, while only distributing part of the program to non-subscribers, or distributing it to them in encrypted form.
  • Some programs are also adapted for people with hearing loss: a sign language interpretation is embedded in the images.
  • Other programs moreover require a regional opt-out: for example, a news broadcast will cover different segments depending on the regions where it is broadcast.
  • Existing scalable image-coding techniques do not provide any specific treatment for the personalization of a single video stream.
  • A content server broadcasting such a video stream in a communication network must therefore encode and broadcast upstream as many scalable video streams, for example as many SVC streams, as there are user profiles. This requires, between the content server and one of the points of presence redistributing this stream to the different users in the communication network, as many multimedia session negotiations as user profiles, and also as many resource allocations between these two network entities as there are user profiles.
  • This implementation is not optimal because the different multimedia sessions thus negotiated and maintained are expensive to create and maintain, while they have substantially identical characteristics: beginning and end of the multimedia session,
  • the present invention aims to overcome the disadvantages of the prior art by providing a method and a device for encoding a scalable video stream, adaptable according to the user profile, as well as methods and a transmission system, which allow a saving of network resources while being simple to implement.
  • the invention proposes a method of coding a video sequence into a video stream comprising a base layer and one or more refinement layers, intended for users having different user profiles, characterized in that at least one of said refinement layers codes a visual object distinct from the visual objects encoded by said base layer or by another of said refinement layers, said visual object corresponding to a semantic content intended specifically for the users of one of said profiles.
  • the refinement layers of the same scalable video stream are used to code different contents as a function of different user profiles, the base layer coding part of the content of the video stream that is generic for all these user profiles.
  • This dissociation of the content of the video stream into a generic part and a personalized part makes it possible to save bandwidth when transmitting this stream over a communication network.
  • The coding of this generic part is not necessarily limited to the base layer, refinement layers being for example dedicated to improving the quality, the spatial resolution or the frequency of this single generic part.
  • The invention furthermore relates to a method for transmitting a video stream to users having different user profiles, said users being connected by display terminals to a communication network, and said video stream being coded using the coding method according to the invention, said transmission method comprising the steps of:
  • determination, according to the profile of one of said users, of at least one refinement layer intended for said user, said at least one refinement layer encoding a visual object distinct from the visual objects coded by said base layer or by another of said refinement layers, and sending through said communication network said base layer and said refinement layer thus determined to said user, said refinement layer being intended to be combined with said base layer.
  • this video stream is compressed according to the SVC standard
  • The transmission of a single SVC-type stream is then necessary between a content server and an associated point of presence in the communication network, to send to this point of presence all the personalizations of the video stream.
  • As the base layer is common to all user profiles, it is sufficient for the point of presence to associate, for each different user profile, this base layer with the refinement layer or layers corresponding to that profile in order to recreate multiple video streams, each of these streams being adapted to a specific user profile.
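  • By way of illustration, the per-profile recombination performed at the point of presence might be sketched as follows; the profile names and layer identifiers are purely hypothetical.

```python
# Base layer shared by all profiles, plus refinement layers: some are generic
# (pure quality/resolution refinements), others carry profile-specific objects.
BASE_LAYER = "CB"
GENERIC_REFINEMENTS = ["CR_quality"]
PROFILE_REFINEMENTS = {
    "subscriber": ["CR_players_ball"],         # e.g. players and ball of a match
    "non_subscriber": [],
    "hearing_impaired": ["CR_sign_language"],  # e.g. sign-language overlay window
}

def personalized_stream(profile: str) -> list:
    """Layers the point of presence forwards to a user of the given profile."""
    return [BASE_LAYER] + GENERIC_REFINEMENTS + PROFILE_REFINEMENTS.get(profile, [])

for profile in PROFILE_REFINEMENTS:
    print(profile, "->", personalized_stream(profile))
```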
  • the invention makes it possible to simplify the transmission of the personalized video stream between the content server and the point of presence, a single multimedia session negotiation being necessary.
  • said base layer sent to said user codes a sequence of initial images from which areas of interest have been extracted, and said refinement layer sent to said user codes said areas of interest.
  • This embodiment of the method of transmitting a video stream according to the invention makes it possible to promote a video program by showing non-subscriber users only certain parts of the images broadcast to the subscribed users. For example, subscribers to this video program will be able to see the complete images of a football match, while non-subscribers will only see the images of the bare field, without the players or the ball.
  • The refinement layers intended for the subscriber users then code, for example, the meshes associated with the players and the ball, if the images of the video stream are modeled by deformable meshes and compressed by a wavelet-based technique. It should be noted that this type of coding is not usable in an SVC stream as standardized today, but is to be expected in future types of scalable streams.
  • This method of promoting a video program is less expensive in terms of bandwidth than the usual methods of broadcasting two versions of the program, a full version for subscribers and an incomplete version for non-subscribers. It is also less complex than methods using encryption techniques, in which an encrypted video stream is broadcast and only the subscribers to the corresponding video program have the key to descramble this stream. The fact of not requiring a decryption key also makes piracy more difficult.
  • The invention also relates to a method of embedding images of a first video stream in images of a second video stream, characterized in that it uses a third video stream coded using the coding method according to the invention, and wherein: said base layer encodes images obtained by processing said images of the second video stream, said processing including a step of extracting the pieces of images located on overlay areas intended to receive said images of the first video stream, and said at least one refinement layer encodes said images of the first video stream.
  • This method of embedding images according to the invention makes it possible to insert a first video in a small window inside a second, full-screen video, for example the video of a sign language interpreter in the video of a television news magazine.
  • This produces a personalized video: the first video may vary depending on the user receiving the second video.
  • The embedding method according to the invention makes it possible to simplify the decoding of the personalized video stream obtained by this insertion of video, and to reduce the bit rate of the stream arriving at the decoder that performs this decoding.
  • The usual PiP ("Picture in Picture") methods require, at the level of the decoder, the management of two distinct video streams and therefore of two decoding instances, as well as software able to superimpose these two video streams, which is complex.
  • Moreover, the video stream displayed in window mode has a bit rate that adds to that of the video stream displayed in full screen.
  • The embedding method according to the invention simplifies the decoding by sending the decoder only a single scalable video stream, for example an SVC-type stream, composed of a base layer coding the images of the full-screen video stream from which the overlay window has been subtracted, and of a refinement layer coding the images of the windowed video stream. Since the full-screen video stream is not sent in its entirety to the user, the bit rate received at the decoder is therefore lower than in the PiP methods. Moreover, thanks to the invention, the superimposition of the videos at the decoder level is done automatically according to the refinement layers that it receives, these refinement layers being able to code different videos according to the profile of the user.
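  • A pixel-domain sketch of this recombination at the decoder is given below, with plain arrays standing in for decoded frames; the actual combination relies on the SVC layer mechanisms rather than an explicit mask.

```python
import numpy as np

def compose(base_frame, window_frame, window_mask):
    """Combine a base-layer frame, in which the overlay window has been
    blanked, with a refinement-layer frame carrying only the window content.
    `window_mask` is True inside the overlay window."""
    return np.where(window_mask, window_frame, base_frame)

h, w = 6, 8
mask = np.zeros((h, w), dtype=bool)
mask[1:3, 5:8] = True                        # position of the overlay window

base = np.full((h, w), 100, dtype=np.uint8)  # full-screen video content
base[mask] = 0                               # window blanked in the base layer
window = np.zeros((h, w), dtype=np.uint8)
window[mask] = 200                           # windowed video content

composed = compose(base, window, mask)
assert composed[0, 0] == 100 and composed[1, 6] == 200
```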
  • The invention also relates to a method for performing an opt-out in an audiovisual program that successively broadcasts a generic sequence of images and personalized sequences of images, using the method of transmitting a video stream according to claim 2, wherein:
  • said base layer encodes said generic sequence
  • said at least one refinement layer encodes only one of said personalized sequences, said base layer then no longer coding visual objects, and said personalized sequence being intended for said user.
  • This opt-out method makes it possible to perform an audiovisual program opt-out at a point of presence in a broadcast network, for example the regional opt-out of a television newscast, without using specific software, commonly called a "splicer", at the point of presence.
  • A "splicer" is usually used to join several video streams, for example between the end of a video stream corresponding to a national newscast and the beginning of a video stream corresponding to a regional newscast. Thanks to the invention, this junction is not necessary: for example, an SVC-type video stream is sent to the decoder of a user, in which the base layer codes the national newscast while it is broadcast and nothing when the regional newscast is broadcast, and conversely the refinement layers code only the regional newscast while it is broadcast.
  • The invention also relates to a device for encoding a video stream comprising a base layer and one or more refinement layers, characterized in that it comprises means adapted to implement the coding method according to the invention.
  • The invention further relates to a system for transmitting a video stream comprising a base layer and one or more refinement layers to users having different user profiles, said users being connected by viewing terminals to a communication network, and said system comprising means for transmitting said base layer to said users through said communication network, characterized in that it further comprises:
  • This transmission system according to the invention is limited, for example, to a single device, such as a point of presence in the communication network, or on the contrary is formed of several servers connected, for example, to databases of user profiles.
  • The invention also relates to a signal representative of data of a video stream intended for users having different user profiles, said users being connected by display terminals to a communication network, and said data being coded according to a base layer and one or more refinement layers, said signal being characterized in that at least one of said refinement layers codes a visual object distinct from the visual objects encoded by said base layer or by another of said refinement layers, said visual object corresponding to a semantic content specifically intended for the users of one of said profiles.
  • the coding device, the transmission system and the signal representative of data of a video stream according to the invention have advantages similar to those of the coding method and the method of transmitting a video stream according to the invention.
  • the invention finally relates to a computer program comprising instructions for implementing one of the methods according to the invention, when it is executed on a computer.
  • FIG. 1, already commented on in relation to the prior art, represents an SVC video stream,
  • FIG. 2 represents a transmission system implementing the coding method, the transmission method, the image-embedding method and the audiovisual program opt-out method according to the invention, in a communication network,
  • FIG. 3 represents steps of an embodiment of the coding method according to the invention,
  • FIG. 4 represents images coded by different layers of an SVC-type video stream encoded according to the invention,
  • FIG. 5 represents an SVC-type video stream encoded according to the invention and used in an application of the transmission method according to the invention,
  • FIG. 6 represents a nesting of refinement layers in a base layer of an SVC-type stream, in a first variant of an embodiment of the invention,
  • FIG. 7 represents a nesting of refinement layers in a base layer of an SVC-type stream, in a second variant of an embodiment of the invention,
  • FIG. 8 represents an SVC-type video stream encoded according to the invention and used in an application of the image-embedding method according to the invention,
  • FIG. 9 represents a nesting of refinement layers in a base layer of an SVC-type stream, in a first variant of an embodiment of the invention,
  • FIG. 10 represents a nesting of refinement layers in a base layer of an SVC-type stream, in a second variant of an embodiment of the invention,
  • FIG. 11 represents an SVC-type video stream encoded according to the invention and used in an application of the audiovisual program opt-out method according to the invention,
  • FIG. 12 represents a nesting of refinement layers in a base layer of an SVC-type stream,
  • FIG. 13 represents steps of the transmission method according to the invention.
  • The invention is implemented in a communication network RES shown in FIG. 2, to which users of different profiles are connected by means of display terminals, in particular the terminals TV1, TV2, TV3, TV4, TV5 and TV6.
  • The communication network RES is composed of several types of networks: a core network RES1, which is for example the Internet, an access network RES2, which is based for example on the switched telephone network (PSTN), and an access network RES3.
  • The display terminals TV1, TV2, TV3 and TV4 in the access network RES2 are respectively connected to the home gateways PD1, PD2, PD3 and PD4, which are connected to a service point of presence PoP1 in the access network RES2, for example using ADSL ("Asymmetric Digital Subscriber Line") technology.
  • The terminals TV5 and TV6 are also connected, for example by optical fiber, to a service point of presence PoP2 in the access network RES3. These points of presence PoP1 and PoP2 are connected by optical fiber to a content server SC in the core network RES1. Any other type of network architecture that makes it possible to transmit video streams can of course be used to implement the invention.
  • The coding method according to the invention is implemented, in this embodiment, in software in the content server SC, which manages the broadcasting of video sequences stored in a database BDD connected to the content server SC.
  • the database BDD also includes a broadcast programming table of these video sequences for users connected to the communication network RES. Each of these users has a user profile, which allows certain content to be broadcast only to users of a certain profile.
  • the database BDD therefore also contains a table of correspondence between the video sequences of the database BDD and user profiles, to determine the users to whom each video sequence must be transmitted.
  • The content server SC further comprises two software modules, a preprocessing module MP, for preprocessing certain contents, and a coding module CO, for encoding contents into a scalable stream, the operation of which will be detailed later.
  • The content server SC preprocesses contents and codes them according to the invention in order to deliver scalable video streams ft1, fi1 and fd1 that can be personalized by the points of presence PoP1 and PoP2 located in the respective access networks RES2 and RES3.
  • The transmission method according to the invention is implemented in the points of presence PoP1 and PoP2, which use for example the streams ft1, fi1 and fd1 coded according to the invention to transmit personalized streams.
  • The opt-out method for an audiovisual program according to the invention is implemented in the point of presence PoP2, which uses for example the stream fd1 to perform this opt-out.
  • The embedding method according to the invention is implemented, in this exemplary embodiment, at the display terminal TV4, which uses the stream fi3 encoded according to the invention.
  • In a variant, the point of presence PoP1 receives from the content server SC video sequences coded in a non-scalable stream, performs the coding of these sequences according to the invention into a scalable stream, and delivers it to the home gateways PD1 to PD4, which then implement the transmission method according to the invention.
  • The transmission of the video streams ft1, fi1 and fd1 to the points of presence PoP1 and PoP2 from video sequences contained in the database BDD is carried out in three steps E1 to E3 represented in FIG. 3.
  • the first step E1 is a preprocessing step of video sequences contained in the database BDD and intended to be broadcast to the users connected to the communication network RES.
  • This preprocessing is performed by the software module MP.
  • The content server SC determines, by consulting the programming table and the correspondence table contained in the database BDD, that it must broadcast a football match in full to subscribed users, and this same football match, but deprived of its areas of interest, to non-subscribed users. These areas of interest are here determined as being the players and the ball present in each image of the initial video sequence of the football match.
  • The content server SC thus preprocesses, in this step E1, the images of the video sequence of this football match contained in the database BDD so as to generate two streams of data: a first stream encoding the images of the sequence but without coding any player or ball, and a second stream encoding the players and the ball present in each image of the sequence.
  • Masks composed of blocks of pixels are used to determine the zones of the image containing the players or the ball.
  • The field in the images of the first stream, which contains neither players nor ball, is recomposed by image processing in the areas where the players and the ball have been extracted, for example using interpolation techniques.
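  • A simplified sketch of this preprocessing is given below, using a block mask and a deliberately naive fill of the extracted areas; a real encoder would use proper spatial interpolation.

```python
import numpy as np

def block_mask(height, width, blocks, block_size=16):
    """Pixel mask built from (block_row, block_col) indices, mimicking a mask
    composed of blocks of pixels covering the players and the ball."""
    m = np.zeros((height, width), dtype=bool)
    for br, bc in blocks:
        m[br * block_size:(br + 1) * block_size,
          bc * block_size:(bc + 1) * block_size] = True
    return m

def split_frame(frame, roi_mask):
    """First stream: areas of interest removed and filled in; second stream:
    the players and the ball only, on an otherwise empty frame."""
    background = frame.astype(float)
    background[roi_mask] = np.nan
    background[np.isnan(background)] = np.nanmean(background)  # naive inpainting
    roi = np.zeros_like(frame)
    roi[roi_mask] = frame[roi_mask]
    return background.astype(frame.dtype), roi

frame = np.random.randint(0, 255, (64, 64), dtype=np.uint8)
mask = block_mask(64, 64, [(1, 1), (1, 2)])   # blocks covering a player
field_only, players_only = split_frame(frame, mask)
```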
  • The stream coding the players and the ball present in each image of the sequence is scrambled by the software module MP, that is to say that the images of the players and the ball are scrambled.
  • Alternatively, the first stream codes images of the video sequence of the football match to be broadcast in which the players and the ball are encoded in a degraded or obscured form.
  • The content server SC determines that it must broadcast a news magazine to users connected to the communication network RES, and that it must also broadcast this news magazine, but with a small overlay window showing the video of a sign language interpreter, to other users with hearing loss.
  • The content server SC therefore preprocesses, in this step E1, the images of the video sequence of this news magazine contained in the database BDD so as to generate three data streams represented in FIG. 4:
  • A stream F1 encoding the images of the news magazine from which the pixels of a square corresponding to the small overlay window have been extracted.
  • The corresponding square is thus coded in black in this stream F1, or in a compression mode consuming as little bit rate as possible.
  • The inter-coded images of the square are, for example, coded according to the SKIP mode.
  • A stream F2 encoding images complementary to those encoded by the stream F1, that is to say images of black pixels with the exception of the pixels corresponding to the small overlay window, whose values are those of the pixels that have been extracted from the images of the news magazine.
  • The stream F2 thus encodes the pieces of the news magazine images corresponding to the small overlay window.
  • A stream F3 encoding images of black pixels with the exception of the pixels corresponding to the small overlay window, these reproducing in small size the images of the video showing a sign language interpreter.
  • The software module MP uses, for example, a mask to determine the area corresponding to the small overlay window in the large-format images, that is to say images of the same size as those of the news magazine.
  • In a variant, the streams F1, F2 and F3 are encoded so as to embed several small videos in the images of the news magazine, of different sizes or formats and at different places.
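  • A toy illustration of the complementarity of the streams F1, F2 and F3 is given below; the frames are represented as plain luma arrays and the pixel values are arbitrary.

```python
import numpy as np

H, W = 4, 6
window = np.zeros((H, W), dtype=bool)
window[1:3, 4:6] = True                            # overlay window position

magazine = np.full((H, W), 120, dtype=np.uint8)    # news-magazine frame
interpreter = np.full((H, W), 60, dtype=np.uint8)  # sign-language video frame

f1 = magazine.copy()
f1[window] = 0                                     # F1: magazine, window blanked
f2 = np.zeros_like(magazine)
f2[window] = magazine[window]                      # F2: extracted window pieces
f3 = np.zeros_like(magazine)
f3[window] = interpreter[window]                   # F3: sign-language window

# F1 + F2 restores the original magazine; F1 + F3 gives the personalized version.
assert np.array_equal(f1 + f2, magazine)
assert (f1 + f3)[1, 5] == 60 and (f1 + f3)[0, 0] == 120
```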
  • The second step E2, performed by the software module CO, is a coding step of the video sequences preprocessed in step E1, and of other video sequences that do not require preprocessing but must also be broadcast to users connected to the communication network RES.
  • The software module CO performs a coding of these sequences into scalable streams of the SVC type.
  • Other types of scalable stream coding are also usable.
  • This coding has the particularity, compared to the SVC standard, of coding different contents in different layers of the same scalable stream.
  • the base layer encodes generic content for all user profiles, while content that is more specific to one or more user profiles is encoded in refinement layers. These specific contents correspond to distinct visual objects, that is to say, form a semantic content that is added to or substitutes for that which is coded by the base layer or other layers of refinement. In other words, these specific contents do not simply contribute to refining the quality, the spatial resolution or the frequency of the images coded by the base layer or by another layer of refinement, as is the case in the SVC standard.
  • The software module CO produces several SVC-type streams, in particular the stream ft1 corresponding to the football match to be broadcast.
  • This stream ft1 comprises: a base layer Ct0 coding in a low resolution the images preprocessed in step E1 and corresponding to a sequence of bare-field images, a refinement layer Ct1 making it possible to refine the resolution of the images coded by the base layer Ct0, a refinement layer Ct2 coding in a low resolution the images of the players and the ball obtained by the preprocessing in step E1, and a refinement layer Ct3 making it possible to refine the resolution of the images coded by the refinement layer Ct2.
  • Each refinement layer indicates in its syntax which layer it uses for inter-layer prediction.
  • The refinement layers are thus nested with respect to each other in the base layer according to a tree represented in FIG. 6: the layer Ct3 uses the layer Ct2, which uses the layer Ct1, which itself uses the layer Ct0.
  • In a second variant, the refinement layers are nested with respect to each other in the base layer according to a tree represented in FIG. 7: the layer Ct3 uses the layer Ct1, which uses the layer Ct0, whereas the layer Ct2 uses the layer Ct0.
  • In this variant, the refinement layer Ct3 fully codes the images of the players and the ball obtained by the preprocessing in step E1, in a high resolution.
  • This variant allows a subscribed user having a high-definition television to decode only three layers, Ct0, Ct1 and Ct3, and a subscribed user having a standard-definition television to decode only two layers, Ct0 and Ct2, instead of respectively 4 layers (Ct0, Ct1, Ct2 and Ct3) and 3 layers (Ct0, Ct1 and Ct2) if the nesting corresponding to FIG. 6 is used.
  • The players, the ball and the bare field are coded in blocks that do not correspond to those of the base layer.
  • This feature allows a decoder that receives the stream ft1 not to use the inter-layer prediction when combining the layers Ct2 and Ct1, for example.
  • The inter-layer prediction is used by the decoder for the layers having a real refinement role, for example between the layers Ct1 and Ct0, or Ct3 and Ct2.
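  • The dependency declarations of the stream ft1 and the resulting prediction decisions might be summarized as follows; this is only a sketch, and the "aligned" flag is a hypothetical shorthand for whether the blocks of a layer correspond to those of its reference layer.

```python
# Hypothetical, simplified description of the ft1 stream (nesting of FIG. 6):
# each refinement layer names the layer it uses for inter-layer prediction.
FT1 = {
    "Ct0": {"uses": None,  "content": "bare field, low resolution"},
    "Ct1": {"uses": "Ct0", "content": "bare field, resolution refinement", "aligned": True},
    "Ct2": {"uses": "Ct1", "content": "players and ball, low resolution", "aligned": False},
    "Ct3": {"uses": "Ct2", "content": "players and ball, resolution refinement", "aligned": True},
}

def prediction_chain(stream, top):
    """Walk the 'uses' references down to the base layer; report for each hop
    whether inter-layer prediction is actually exploited by the decoder."""
    hops = []
    while stream[top]["uses"] is not None:
        ref = stream[top]["uses"]
        hops.append((top, ref, stream[top].get("aligned", True)))
        top = ref
    return hops

for layer, ref, used in prediction_chain(FT1, "Ct3"):
    print(f"{layer} -> {ref}: inter-layer prediction {'used' if used else 'skipped'}")
# Ct3 -> Ct2: used, Ct2 -> Ct1: skipped, Ct1 -> Ct0: used
```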
  • The software module CO also produces the stream fi1 corresponding to the news magazine to be broadcast by the content server SC.
  • This stream, represented in FIG. 8, comprises: a base layer Ci0 coding in a low resolution the images of the stream F1, a refinement layer Ci1 making it possible to refine the resolution of the images coded by the base layer Ci0, a refinement layer Ci2 coding in a low resolution the images of the stream F2, a refinement layer Ci3 making it possible to refine the resolution of the images coded by the refinement layer Ci2, a refinement layer Ci4 coding in a low resolution the images of the stream F3, and a refinement layer Ci5 making it possible to refine the resolution of the images coded by the refinement layer Ci4.
  • Each refinement layer of the stream fi1 indicates in its syntax which layer it uses for inter-layer prediction.
  • The refinement layers are thus nested with respect to one another in the base layer according to a tree represented in FIG. 9: the layer Ci3 uses the layer Ci2, which uses the layer Ci1, which itself uses the layer Ci0, and the layer Ci5 uses the layer Ci4, which uses the layer Ci1, which itself uses the layer Ci0.
  • Since the blocks of the layer Ci2 do not correspond to those of the layer Ci1, and the blocks of the layer Ci4 do not correspond to those of the layer Ci1, a decoder receiving the stream fi1 will not use the inter-layer prediction when it combines Ci2 and Ci1, or Ci4 and Ci1.
  • In a second variant, the refinement layers are nested with respect to each other in the base layer according to a tree represented in FIG. 10: the layer Ci3 uses the layer Ci1, which uses the layer Ci0, the layer Ci5 uses the layer Ci1, which uses the layer Ci0, while the layers Ci2 and Ci4 use the layer Ci0.
  • In this variant, the refinement layers Ci3 and Ci5 fully encode the images of the stream F2 and of the stream F3 respectively, but in a high resolution.
  • This variant allows a subscribed user having a high-definition television to decode only three layers, for example Ci0, Ci1 and Ci3, and a subscribed user having a standard-definition television to decode only two layers, for example Ci0 and Ci2, instead of respectively 4 layers (Ci0, Ci1, Ci2 and Ci3) and 3 layers (Ci0, Ci1 and Ci2) if the nesting corresponding to FIG. 9 is used. It also allows a subscribed user having a standard-definition television to receive a stream more in accordance with the capabilities of his television.
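  • The layer counts given above for the two nestings can be checked with a short dependency walk; the trees below are those of FIG. 9 and FIG. 10, with the layer identifiers used in the text.

```python
# layer -> layer it uses for prediction, for the two nestings of the fi1 stream
FIG9  = {"Ci1": "Ci0", "Ci2": "Ci1", "Ci3": "Ci2", "Ci4": "Ci1", "Ci5": "Ci4"}
FIG10 = {"Ci1": "Ci0", "Ci2": "Ci0", "Ci3": "Ci1", "Ci4": "Ci0", "Ci5": "Ci1"}

def layers_needed(tree, top):
    """All layers a decoder must receive to decode `top`, base layer included."""
    chain = [top]
    while chain[-1] in tree:
        chain.append(tree[chain[-1]])
    return list(reversed(chain))

print(layers_needed(FIG9,  "Ci3"))  # ['Ci0', 'Ci1', 'Ci2', 'Ci3']  (4 layers)
print(layers_needed(FIG10, "Ci3"))  # ['Ci0', 'Ci1', 'Ci3']          (3 layers)
print(layers_needed(FIG9,  "Ci2"))  # ['Ci0', 'Ci1', 'Ci2']          (3 layers)
print(layers_needed(FIG10, "Ci2"))  # ['Ci0', 'Ci2']                 (2 layers)
```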
  • The software module CO preferably codes the macroblocks corresponding to the small overlay window in the images of the streams F1, F2 and F3 independently of the other macroblocks of these images.
  • Otherwise, the data encoded in the base layer would be used for the reconstruction of the refinement layer: in the image of the stream F1 combined with the stream F2 or F3, the black square area will contain something other than black, and macroblocks predicted from the black area of the base layer would then rely on an area other than black and give a different reconstruction. This problem arises only when the refinement layer under consideration is coded without knowledge of the base layer.
  • Conversely, the macroblocks of the images outside the overlay window should not be coded from the macroblock information of this window.
  • The software module CO also produces the stream fd1, represented in FIG. 11, corresponding to a national newscast followed by regional newscasts that the content server SC has to broadcast. These contents do not require preprocessing by the preprocessing module MP. They are therefore directly coded by the coding module CO into the stream fd1 as follows:
  • The base layer Cd0 of the stream fd1 encodes all the images of the national newscast, then images of black pixels, whose number corresponds to the duration of broadcast of a regional newscast.
  • A refinement layer Cd1 of the stream fd1 encodes images of black pixels, the number of which corresponds to the duration of broadcast of the national newscast, and then codes the images of a regional newscast.
  • A refinement layer Cd2 of the stream fd1 encodes images of black pixels, the number of which corresponds to the duration of broadcast of the national newscast, and then codes the images of another regional newscast, of the same duration as the previous one.
  • Each refinement layer of the stream fd1 indicates in its syntax which layer it uses for inter-layer prediction.
  • The refinement layers are thus nested with respect to each other in the base layer according to a tree represented in FIG. 12: the layer Cd1 uses the layer Cd0, and the layer Cd2 uses the layer Cd0.
  • A decoder receiving the stream fd1 will not use the inter-layer prediction when it combines Cd2 and Cd0, or Cd1 and Cd0.
  • This coding of the stream fd1 makes it possible to perform a regional opt-out, after the national newscast has been broadcast, without implementing specific equipment at the point of presence PoP2 of the network RES3.
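  • The temporal multiplexing of the stream fd1 might be sketched as follows; frame contents are replaced by labels and the durations are arbitrary.

```python
NATIONAL, BLACK = "national", "black"

def fd1_layers(n_national, n_regional):
    """Base layer: national newscast then black frames; each refinement layer:
    black frames then one regional newscast."""
    cd0 = [NATIONAL] * n_national + [BLACK] * n_regional
    cd1 = [BLACK] * n_national + ["regional-A"] * n_regional
    cd2 = [BLACK] * n_national + ["regional-B"] * n_regional
    return cd0, cd1, cd2

def recombine(base, refinement):
    """Decoder-side view: the refinement content replaces the black frames of
    the base layer, which performs the opt-out without any splicer."""
    return [r if b == BLACK else b for b, r in zip(base, refinement)]

cd0, cd1, cd2 = fd1_layers(n_national=3, n_regional=2)
print(recombine(cd0, cd1))  # national x3 then regional-A x2  (stream fd2)
print(recombine(cd0, cd2))  # national x3 then regional-B x2  (stream fd3)
```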
  • The streams ft1, fi1 and fd1 may also comprise other refinement layers, for example to improve the quality, the frequency or the spatial resolution of the coded images.
  • Step E3 is a step of sending the streams coded in step E2 to the entities concerned in the communication network RES.
  • The correspondence table contained in the content server SC enables it to determine that: the stream ft1 is to be sent only to the point of presence PoP1, because the stream ft1 is intended only for users connected to the point of presence PoP1, in particular the users connected by the display terminals TV1 and TV2,
  • the stream fi1 is to be sent only to the point of presence PoP1, because the stream fi1 is intended only for users connected to the point of presence PoP1, in particular the users connected by the viewing terminals TV3 and TV4,
  • The content server SC therefore transmits the streams ft1 and fi1 to the point of presence PoP1, and the stream fd1 to the point of presence PoP2.
  • Since the streams ft1, fi1 and fd1 each have at least one refinement layer encoding a visual object distinct from the visual objects encoded by the base layer or by another refinement layer of the stream in question, the signals carrying the streams ft1, fi1 and fd1 are coded according to the invention.
  • The content server SC further transmits, in this step E3, a description of the streams ft1 and fi1 to the point of presence PoP1, and a description of the stream fd1 to the point of presence PoP2.
  • These descriptions indicate, for the scalable streams ft1, fi1 and fd1, the user profiles for which these streams are intended and the different layers composing these streams.
  • The transmission of these descriptions takes place, for example, during the multimedia session negotiations preceding the transmission of the streams ft1, fi1 and fd1 between the server SC and the points of presence PoP1 or PoP2.
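  • The exact format of these descriptions is not specified here; a purely hypothetical representation, listing the layers of a stream and the profiles each layer is intended for, might look like the following sketch.

```python
# Hypothetical description of the stream ft1, sent during session negotiation.
FT1_DESCRIPTION = {
    "stream": "ft1",
    "layers": {
        "Ct0": {"role": "base",       "uses": None,  "profiles": ["subscriber", "non_subscriber"]},
        "Ct1": {"role": "refinement", "uses": "Ct0", "profiles": ["subscriber", "non_subscriber"]},
        "Ct2": {"role": "refinement", "uses": "Ct1", "profiles": ["subscriber"]},
        "Ct3": {"role": "refinement", "uses": "Ct2", "profiles": ["subscriber"]},
    },
}
```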
  • This stream transmission only requires a session negotiation between the content server SC and the PoP1 or PoP2 point of presence for each of these streams. In addition it allows a saving of bandwidth.
  • The points of presence PoP1 and PoP2 then implement the transmission method according to the invention. This comprises steps E4 and E5 represented in FIG. 13:
  • the step E4 is the determination, according to the profiles of the users connected to the point of presence PoP1, of the refinement layers which are more specifically intended for them.
  • For this, the point of presence PoP1 consults the descriptions of the streams ft1 and fi1 that were transmitted to it in step E3 by the content server SC, as well as a user register, stored and regularly updated in the point of presence PoP1.
  • This user register indicates in particular the profile of each user connected to the point of presence PoP1.
  • the point of presence PoP1 determines for example that:
  • the user of the display terminal TV1 has a subscriber profile enabling him to receive the refinement layers Ct1, Ct2 and Ct3,
  • the user of the display terminal TV3 has a profile enabling him to receive only the refinement layers Ci1, Ci2 and Ci3,
  • Similarly, the point of presence PoP2 determines the refinement layers intended for each of the users connected to the point of presence PoP2. For this it consults the description of the stream fd1 that was transmitted to it in step E3 by the content server SC, and a user register similar to that stored in the point of presence PoP1. At the end of this step E4, the point of presence PoP2 determines for example that:
  • the user of the viewing terminal TV5 has a subscriber profile allowing him to receive only the refinement layer Cd1, while the user of the viewing terminal TV6 has a subscriber profile allowing him to receive only the refinement layer Cd2.
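  • Steps E4 and E5 might be sketched as follows, crossing a stream description of the kind shown above with the local user register; the terminal names and profiles are illustrative.

```python
# Hypothetical stream description (same shape as FT1_DESCRIPTION above) and
# user register held by the point of presence.
DESCRIPTION = {
    "stream": "ft1",
    "layers": {
        "Ct0": {"profiles": ["subscriber", "non_subscriber"]},
        "Ct1": {"profiles": ["subscriber", "non_subscriber"]},
        "Ct2": {"profiles": ["subscriber"]},
        "Ct3": {"profiles": ["subscriber"]},
    },
}
USER_REGISTER = {"TV1": "subscriber", "TV2": "non_subscriber"}

def step_e4(description, profile):
    """E4: select the layers intended for a given user profile."""
    return [name for name, info in description["layers"].items()
            if profile in info["profiles"]]

def step_e5(description, register):
    """E5: per terminal, the list of layers composing its personalized stream."""
    return {terminal: step_e4(description, profile)
            for terminal, profile in register.items()}

print(step_e5(DESCRIPTION, USER_REGISTER))
# {'TV1': ['Ct0', 'Ct1', 'Ct2', 'Ct3'], 'TV2': ['Ct0', 'Ct1']}, i.e. ft2 and ft3
```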
  • Step E5 is the sending, by the points of presence PoP1 and PoP2, to each of the connected users, of the base layer and of the refinement layers intended for him, of one of the streams ft1, fi1 or fd1.
  • Thus the point of presence PoP1 transmits to the user connected to the display terminal TV1 a stream ft2 comprising the base layer Ct0 as well as the refinement layers Ct1, Ct2 and Ct3, allowing him to watch the football match broadcast by the content server SC in its entirety.
  • If these refinement layers have been scrambled in step E2, the point of presence PoP1 descrambles them before transmitting them to the user connected to the display terminal TV1 in the stream ft2.
  • The user connected to the display terminal TV2 receives from the point of presence PoP1 a stream ft3 comprising only the base layer Ct0 and the refinement layer Ct1. He will hear the commentary of the match while viewing only images of the bare field, which will encourage him to subscribe to the video program broadcasting the football match in order to view it in its entirety.
  • In the variant in which the players and the ball are coded in a scrambled form, the stream ft3 transmitted to the user of the terminal TV2 by the point of presence PoP1 comprises the base layer Ct0 and the refinement layers Ct1, Ct2 and Ct3.
  • This user will therefore see scrambled images of the players and the ball, which will encourage him to subscribe to the video program broadcasting the football match in order to remove this visual discomfort due to the scrambling.
  • The decoding of the streams ft2 and ft3 by the terminals TV1 and TV2 is done by an SVC decoder in a natural way, thanks to the indication, in each refinement layer, of the lower layer on which it relies for the inter-layer prediction, even if this prediction is not systematically used by the decoder.
  • The embodiment of the invention using SVC-type streams therefore does not require specific SVC decoders.
  • The point of presence PoP1 also transmits to the user connected to the display terminal TV3 a stream fi2, comprising the base layer Ci0 and the refinement layers Ci1, Ci2 and Ci3, which allows him to view the news magazine broadcast by the content server SC without an overlay window.
  • The video decoder present in the display terminal TV4 indeed implements the method of embedding images of a first video stream in images of a second video stream according to the invention. This embedding occurs naturally through the recombination of the layers Ci1, Ci4 and Ci5 with the base layer Ci0.
  • In a variant, the point of presence PoP1 transmits the stream ft1 to the home gateways PD1 and PD2, and the stream fi1 to the home gateways PD3 and PD4, the selection of the refinement layers to be sent to the terminals TV1, TV2, TV3 and TV4 according to the profiles of the users being performed at these respective gateways PD1, PD2, PD3 and PD4. These are, for example, remotely configured by the service provider managing the content server SC, for example if this provider is also the access operator of the users of the terminals TV1, TV2, TV3 and TV4.
  • The point of presence PoP2 implements the opt-out method for an audiovisual program according to the invention, by sending to the user connected to the display terminal TV5 a stream fd2 comprising the base layer Cd0 and the refinement layer Cd1, and to the user connected to the display terminal TV6 a stream fd3 comprising the base layer Cd0 and the refinement layer Cd2.
  • When the national newscast is broadcast, the stream fd1 encodes it in the base layer Cd0, which is transmitted to the point of presence PoP2 and then by the latter to the users of the terminals TV5 and TV6.
  • The content server then broadcasts two different regional newscasts in the refinement layers Cd1 and Cd2 of the stream fd1, which are transmitted to the point of presence PoP2, the latter transmitting in turn the refinement layer Cd1 to the user of the terminal TV5 and the refinement layer Cd2 to the user of the terminal TV6.
  • Thus these two users view two separate regional newscasts, without a specific "splicer" being implemented at the point of presence PoP2.
  • Indeed, the junction between the national newscast and each of the regional newscasts is not carried out by the point of presence PoP2, but naturally by the decoders of the terminals TV5 and TV6, through the recombination of the layers Cd0 and Cd1, and Cd0 and Cd2, respectively.

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
EP08864357A 2007-11-30 2008-12-01 Verfahren zur kodierung eines skalierbaren videostreams für benutzer mit verschiedenen profilen Withdrawn EP2225882A2 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR0759452 2007-11-30
PCT/FR2008/052171 WO2009080926A2 (fr) 2007-11-30 2008-12-01 Procede de codage d'un flux video echelonnable a destination d'utilisateurs de differents profils

Publications (1)

Publication Number Publication Date
EP2225882A2 true EP2225882A2 (de) 2010-09-08

Family

ID=39507680

Family Applications (1)

Application Number Title Priority Date Filing Date
EP08864357A Withdrawn EP2225882A2 (de) 2007-11-30 2008-12-01 Verfahren zur kodierung eines skalierbaren videostreams für benutzer mit verschiedenen profilen

Country Status (3)

Country Link
US (1) US8799940B2 (de)
EP (1) EP2225882A2 (de)
WO (1) WO2009080926A2 (de)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8355458B2 (en) 2008-06-25 2013-01-15 Rohde & Schwarz Gmbh & Co. Kg Apparatus, systems, methods and computer program products for producing a single frequency network for ATSC mobile / handheld services
CA2731958C (en) 2008-11-06 2016-10-04 Rohde & Schwarz Gmbh & Co. Kg Method and system for synchronized mapping of data packets in an atsc data stream
DE102009057363B4 (de) * 2009-10-16 2013-04-18 Rohde & Schwarz Gmbh & Co. Kg Verfahren und Vorrichtung zur effizienten Übertragung von überregional und regional auszustrahlenden Programm-und Servicedaten
US20110219097A1 (en) * 2010-03-04 2011-09-08 Dolby Laboratories Licensing Corporation Techniques For Client Device Dependent Filtering Of Metadata
US8989021B2 (en) 2011-01-20 2015-03-24 Rohde & Schwarz Gmbh & Co. Kg Universal broadband broadcasting
US9582505B2 (en) * 2011-03-24 2017-02-28 Echostar Technologies L.L.C. Handling user-specific information for content during content-altering operations
EP2910026B1 (de) 2012-10-19 2017-11-29 Visa International Service Association Digitales rundfunkverfahren mit sicheren netzen und wavelets
US9609336B2 (en) * 2013-04-16 2017-03-28 Fastvdo Llc Adaptive coding, transmission and efficient display of multimedia (acted)
US9454840B2 (en) * 2013-12-13 2016-09-27 Blake Caldwell System and method for interactive animations for enhanced and personalized video communications
GB2552944B (en) 2016-08-09 2022-07-27 V Nova Int Ltd Adaptive content delivery network
EP3824629A4 (de) * 2018-07-18 2022-04-06 Pixellot Ltd. System und verfahren zur inhaltsschichtbasierten videokompression

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7207053B1 (en) * 1992-12-09 2007-04-17 Sedna Patent Services, Llc Method and apparatus for locally targeting virtual objects within a terminal
US6904610B1 (en) * 1999-04-15 2005-06-07 Sedna Patent Services, Llc Server-centric customized interactive program guide in an interactive television environment
US7116717B1 (en) * 1999-12-15 2006-10-03 Bigband Networks, Inc. Method and system for scalable representation, storage, transmission and reconstruction of media streams
JP4703932B2 (ja) * 2000-04-06 2011-06-15 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ オブジェクト条件付きアクセスシステム
US20030030652A1 (en) * 2001-04-17 2003-02-13 Digeo, Inc. Apparatus and methods for advertising in a transparent section in an interactive content page
US20030112868A1 (en) * 2001-12-17 2003-06-19 Koninklijke Philips Electronics N.V. Shape assisted padding for object-based coding
US20040071083A1 (en) * 2002-02-22 2004-04-15 Koninklijke Philips Electronics N.V. Method for streaming fine granular scalability coded video over an IP network
KR20050077874A (ko) * 2004-01-28 2005-08-04 삼성전자주식회사 스케일러블 비디오 스트림 송신 방법 및 이를 이용한 장치
US7319469B2 (en) * 2004-07-26 2008-01-15 Sony Corporation Copy protection arrangement
KR100643291B1 (ko) * 2005-04-14 2006-11-10 삼성전자주식회사 랜덤 엑세스의 지연을 최소화하는 비디오 복부호화 장치 및방법
US20070035665A1 (en) * 2005-08-12 2007-02-15 Broadcom Corporation Method and system for communicating lighting effects with additional layering in a video stream
KR100772868B1 (ko) * 2005-11-29 2007-11-02 삼성전자주식회사 복수 계층을 기반으로 하는 스케일러블 비디오 코딩 방법및 장치
AU2007230602B2 (en) * 2006-03-27 2012-01-12 Vidyo, Inc. System and method for management of scalability information in scalable video and audio coding systems using control messages

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
WO2009080926A2 (fr) 2009-07-02
WO2009080926A3 (fr) 2010-03-25
US8799940B2 (en) 2014-08-05
US20110004912A1 (en) 2011-01-06

Similar Documents

Publication Publication Date Title
WO2009080926A2 (fr) Procede de codage d'un flux video echelonnable a destination d'utilisateurs de differents profils
Chiariglione MPEG and multimedia communications
US9185335B2 (en) Method and device for reception of video contents and services broadcast with prior transmission of data
US20060197828A1 (en) Method and system for delivering dual layer hdtv signals through broadcasting and streaming
EP1470722B1 (de) Vorrichtung zu sicherer verteilung, bedingtem zugriff, kontrollierter sichtbarkeit, privater kopie und verwaltung vom mpeg-4 berechtigungen
FR2806570A1 (fr) Procede et dispositif de codage d'images video
FR2903253A1 (fr) Procede permettant de determiner des parametres de compression et de protection pour la transmission de donnees multimedia sur un canal sans fil.
EP1477009B1 (de) Einrichtung zur sicheren übertragungsaufzeichnung und visualisierung audiovisueller programme
CA2795694A1 (en) Video content distribution
WO2017089689A1 (fr) Procédé de traitement d'une séquence d'images numériques, procédé de tatouage, dispositifs et programmes d'ordinateurs associés
WO2014031320A1 (en) Conveying state information for streaming media
WO2005018232A2 (fr) Procede et systeme repartis securises pour la protection et la distribution de flux audiovisuels
EP1470714B1 (de) Sicheres gerät für die bearbeitung von audiovisuellen daten mit hoher qualität
EP1236352B1 (de) Digitalfernsehrundfunkverfahren, entsprechendes digitalsignal und entsprechende vorrichtung
EP3378232A1 (de) Verfahren zur verarbeitung von codierten daten, verfahren zum empfangen von codierten daten, vorrichtungen und damit verbundene computerprogramme
US20110242276A1 (en) Video Content Distribution
WO2013144531A1 (fr) Procede de tatouage avec streaming adaptatif
FR2903272A1 (fr) Procede permettant de determiner des parametres de compression et de protection pour la transmission de donnees multimedia sur un canal sans fil.
Chiariglione Open source in MPEG
WO2021144247A1 (fr) Procédé de décrochage d'un flux dans un multiplex à débit variable, ledit flux étant constitué d'une pluralité de chunks, site de diffusion et dispositifs associés
FR3003423A1 (fr) Procede et dispositif pour la fourniture de video 3d en utilisant un reseau de radiodiffusion mobile et un reseau de communication sans fil
FR2949283A1 (fr) Procede et installation pour marquer en temps reel un flux video compose d'une succession d'images video codees selon la norme mpeg-2.
FR2948526A1 (fr) Systeme pour le traitement de ressources interactives televisuelles.
FR2931609A1 (fr) Procedes de codage et de decodage pseudo-hierarchiques et systemes associes.
EP1554879A2 (de) Vorrichtung zur umwandlung von mpeg-2-multimedia- und -audiovideo-inhalten in gesicherte inhalte von gleichem typ

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20100628

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA MK RS

DAX Request for extension of the european patent (deleted)
RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: ORANGE

17Q First examination report despatched

Effective date: 20170403

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20171014