WO2011073559A1 - Device and method for determining an acceptability score for zapping time - Google Patents

Device and method for determining an acceptability score for zapping time

Info

Publication number
WO2011073559A1
Authority
WO
Grant status
Application
Patent type
Prior art keywords
time
zapping
content
information
waiting
Prior art date
Application number
PCT/FR2010/052697
Other languages
French (fr)
Inventor
Emmanuel Wyckens
Freddy Ceranton
Original Assignee
France Telecom
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/004Diagnosis, testing or measuring for television systems or their details for digital television systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06QDATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce, e.g. shopping or e-commerce
    • G06Q30/02Marketing, e.g. market research and analysis, surveying, promotions, advertising, buyer profiling, customer management or rewards; Price estimation or determination
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/04Diagnosis, testing or measuring for television systems or their details for receivers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television, VOD [Video On Demand]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network, synchronizing decoder's clock; Client middleware
    • H04N21/438Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving MPEG packets from an IP network
    • H04N21/4383Accessing a communication channel, e.g. channel tuning
    • H04N21/4384Accessing a communication channel, e.g. channel tuning involving operations to reduce the access time, e.g. fast-tuning for reducing channel switching latency

Abstract

The invention relates to a method for determining a zapping time acceptability score (Q) by measuring (E201) the zapping time between a first audiovisual scene and a second audiovisual scene following a zapping action (E202) in such a way as to also take into account at least one piece of information regarding the waiting content displayed between the first and second audiovisual scenes. The invention also relates to a parametric zapping time acceptability model for use in a determination method such as the one described. The invention further relates to a device implementing such a method and to a decoder including such a device.

Description

Device and method for determining an acceptability score for zapping time

The present invention relates to a method for determining an acceptability score for zapping time and to measuring the quality of service (QoS) associated with this zapping time.

More particularly, the invention concerns a measure of the quality, or perceived quality of experience (QoE), for users of the zapping action.

The invention also relates to a device for determining an acceptability score, able to measure the acceptability of the zapping time in audio/video devices.

Such a determination or measurement makes it possible to assess different audio/video systems such as set-top boxes or other distributors of audiovisual content.

The invention also relates to an innovative parametric model for use in such a method for determining a zapping time acceptability score.

In existing systems for determining an acceptability score, the time taken by the zapping action to pass from a first audiovisual scene to a second audiovisual scene is measured. Depending on the zapping time thus measured, various parametric models have been defined to determine the user's level of acceptability with respect to that time.

The document entitled "Perceived quality of channel zapping" by Kooij, R., Ahmed, K., and Brunnström, K., Proc. of the 5th IASTED International Conference on Communication Systems and Networks, Aug. 28-30, 2006, describes such a measurement system based on measuring the transition time between two channels.

An acceptability score, called PMOS (for "Mean Opinion Score Prediction"), based on a quality scale, is derived from the applied models.

However, quality scores defined in this way do not accurately reflect the user's acceptability of the different situations that can arise between the first and second displayed scenes.

Thus, it has been observed experimentally that a given zapping time is more easily accepted by a user if a waiting content is displayed between the first and second audiovisual scenes.

There is therefore a need to take these differences in user perception into account, in order to obtain an acceptability level actually perceived by the user and therefore more reliable.

The invention improves the situation.

To this end, the invention relates to a method for determining an acceptability score for a zapping time from a measurement of the zapping time between a first audiovisual scene and a second audiovisual scene following a zapping action. The method is such that it comprises the following steps:

- detecting information on the waiting content displayed between the first audiovisual scene and the second audiovisual scene;

- selecting a parametric model from a plurality of stored parametric models, based on the detected information and the measured time;

- determining an acceptability score from the selected parametric model. The parametric models differ according to the information detected in the waiting content. Selecting the right parametric model gives an acceptability score adapted to the situation. This acceptability score thus also takes into account at least one piece of information on the waiting content displayed between the first and second audiovisual scenes.

The acceptability score is thus determined by taking into account information on the waiting content, which can change the score relative to a determination based only on the zapping time.

This score is more representative of what the user actually perceives and provides a true measure of the quality of experience (QoE).

The various particular embodiments mentioned below may be combined with the characteristics mentioned above, independently or in combination with each other.

Thus, in particular embodiments, the detection of displayed information includes one or more detection steps from the following group:

- detection of a still image on all or part of the display,

- detection of a black image,

- detection of a frozen image of the first or second audiovisual scene,

- detection of a banner of textual information,

- detection of moving images,

- detection of the absence of sound.

These various detection methods therefore make it possible to analyze the waiting content and to detect content of the black image, still image, frozen image or moving image type (such as a "chaser"), a mute, or a sequence of several of the stated types. This list of possible detections is by no means exhaustive and may include many other types of detection adapted to varied waiting contents.
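As an illustrative sketch (not part of the patent text), the selection step can be pictured as building a key from the set of detected waiting-content features and using it to look up a stored parametric model. All names here are illustrative assumptions:

```python
# Hypothetical sketch: map detected waiting-content features to a key
# used to select one of the stored parametric models.
# The feature names and key format are illustrative, not from the patent.

DETECTIONS = (
    "still_image",
    "black_image",
    "frozen_image",
    "text_banner",
    "moving_images",
    "no_sound",
)

def model_key(detected: set) -> str:
    """Build a stable key from the set of detected features."""
    unknown = detected - set(DETECTIONS)
    if unknown:
        raise ValueError(f"unknown detections: {unknown}")
    return "+".join(d for d in DETECTIONS if d in detected) or "none"

print(model_key({"black_image", "text_banner"}))  # black_image+text_banner
```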

In a particular embodiment, the method further comprises measuring the display time between the different pieces of detected information, the measured time being used for the selection of the parametric model.

It is thus also possible to differentiate waiting contents through different content display sequences and the display times of such contents. Suitable parametric models are built to these specifications and stored for use in the measurement.

In a preferred embodiment, the parametric models are determined by learning, from zapping time measurements and from information on the waiting content.

The parametric models are therefore suited to the waiting contents that can be displayed.

The invention also relates to a parametric model of zapping time acceptability for use in a method for determining a zapping time acceptability score, determined from parameters measuring the zapping time between a first audiovisual scene and a second audiovisual scene. This model is such that it is further determined from information on a waiting content displayed between the first and second scenes.

This type of parametric model, unlike existing models, takes waiting content information into account. Different parametric models can thus be determined and stored for use in a determination method as described above. In a particular embodiment, the parametric model is further determined from the times measured between the display of the various pieces of information displayed in the waiting content.

The invention also relates to a device for determining an acceptability score for zapping time, comprising a module for measuring the zapping time between a first audiovisual scene and a second audiovisual scene following a zapping action. The device is such that it further comprises a module for obtaining at least one piece of information on the waiting content displayed between the first and second audiovisual scenes, for determining the acceptability score of the zapping time.

The invention also relates to an audio/video decoder comprising a device as described above.

Such a decoder can be integrated into a set-top box, a video player, or a device for playing or broadcasting audio/video content.

Finally, the invention relates to a computer program comprising code instructions for implementing the steps of a determination method as described, when they are executed by a processor. Other features and advantages of the invention will become apparent on reading the following description, given by way of non-limiting example and with reference to the accompanying drawings, in which:

- Figure 1 illustrates a system for determining a zapping time acceptability score, comprising a device for determining an acceptability score for the zapping time according to one embodiment of the invention; - Figure 2 illustrates a method for determining a zapping time acceptability score according to one embodiment of the invention;

- Figure 3 illustrates an example of a sequence of audiovisual scenes and waiting contents displayed on a display screen;

- Figures 4a to 4f illustrate several examples of waiting contents displayed during the zapping time;

- Figure 5 illustrates the steps of detecting reference sequences in a waiting content according to a particular embodiment of the invention;

- Figure 6 illustrates the steps of detecting sound in a waiting content according to a particular embodiment of the invention;

- Figures 7a and 7b illustrate examples of parametric models according to the invention; and

- Figure 8 illustrates one embodiment of a device for determining an acceptability score according to the invention.

Figure 1 illustrates a system for measuring zapping time acceptability, in which are shown an audiovisual content display screen 170, such as a television, and an audio/video decoder 160 or "set-top box", or any other player or broadcaster of audiovisual content.

Such a decoder comprises a video acquisition module 161, an audio acquisition module 162, a module 163 for detecting infrared commands, for example from a remote control 180, and a control module 164. These modules are conventional and well known and will not be detailed further here.

The decoder 160 is connected to a device 100 for determining a zapping time acceptability score. In another embodiment, this device may be an integral part of the decoder. The determination device 100 receives as input a video stream and an audio stream. These streams are analyzed by a detection module 110, which is capable of detecting the various pieces of information that can be displayed as waiting content during a change of audiovisual scenes following a zapping action. The zapping action is detected by the module 120 for filtering IR codes.

Several detection modules are integrated in the module 110. Illustrated here are a reference image detection module 111 (a reference image being, for example, a black image), a fluidity loss detection module 112 which can, for example, show that an image is frozen, a textual information banner detection module 113, a module 114 for detecting logos or animated images, and a sound stop detection module 115.

Other detection modules can thus be provided, depending on the different waiting contents that can be displayed via the decoder.

At the output of these detection modules, the detected information is time-stamped by the time stamping module 130. Different times are thus measured, based on the display of the different information sequences of a waiting content.

A module 150 for detecting the simultaneity of audio and video rendering is also shown. This detection further checks how the waiting sequences are decoded, since the user's level of acceptability of these different sequences varies. The audio/video simultaneity detection module checks the synchronism of the waiting content and of the sequence of changing audiovisual scenes. During the wait, or at the appearance of the new program, broadcasting the audio before displaying the video signal can improve the perception of the zapping time, while the reverse can penalize that same perception.

All the information on the waiting content and the zapping time measured at 130 are given as input to a selection module 140, which selects, from a plurality of stored models, the parametric model that corresponds to the detected content and to the measured zapping time.

The peculiarity of the stored parametric models is that they are determined not only from a measurement parameter of the zapping time but also from information on the waiting content.

The parametric model itself is original.

This type of model is determined by learning, from a series of subjective tests carried out with a panel of users who give an acceptability score according to the waiting content they view and the zapping time they experience.

Examples of such parametric models will be illustrated later with reference to Figures 7a and 7b.

Applying the parametric model selected at 140 to the detected and measured information provides an acceptability score Q at the output of the device.

This acceptability score can then be transmitted to a processing server, which analyzes the score and the tests to evaluate the decoder, and can thus detect, for example, a potential fault in the distribution of content on the decoder itself, compared with an acceptability score that the server would normally expect.

In one possible use, such a device 100 may be placed at a specific location in the production network to test and evaluate several different decoders. The device then plays the role of an assessment robot.

Referring to Figure 2, the steps of a method for determining a zapping time acceptability score are now described. A first step E200 of the method consists of detecting a zapping action (Detec.tzap). Upon detection of this zapping action, to pass from a first audiovisual scene to a second audiovisual scene, a timer is started at step E201 (Timestamp.) to measure the total time of passage from one scene to the other.

The zapping time is in fact the sum of the channel or content change command time over the network, the buffering time of the audio/video information, and the decoding time of the second audiovisual scene.

Meanwhile, a step E202 (DetectInf.) of detecting the information displayed between the first scene and the second scene is performed.

Several types of detection can be carried out, depending on the waiting content that may be displayed. Some examples of detection will be described later with reference to Figures 5 and 6.

An optional step E203 of determining the display time of the information of a waiting content sequence may also be performed.

When the second audiovisual scene is displayed (Aff.Sc2) at E204, the total zapping time (t.zap.) is determined at E205.

This zapping time information and the information from the detection of the waiting content are used in step E206 to select a suitable parametric model (Param.Model) from a plurality of stored parametric models.

The selected parametric model is applied to the information obtained in step E205 and step E202, and optionally in step E203, to determine (Det.Q) an acceptability score Q at step E207. This method thus determines an acceptability score for the zapping time from the measured zapping time, while taking into account information on the waiting content displayed during this zapping time.

This determination provides a real view of the quality of experience (QoE) for the user.

Figure 3 illustrates an example of a sequence of audiovisual scenes and a waiting content following a zapping action between two audiovisual channels. After a zapping action, represented here by 301 at time T_C_C_1, the transition from a channel A displaying a first audiovisual scene (Sc.1) to a channel B displaying a second audiovisual scene (Sc.2) takes place by displaying a waiting content (Cont.). In the example shown here, the waiting content consists of an animated image 302 such as a "chaser", and a banner 303 of textual information (EPG, for "Electronic Program Guide") with information on the progress of the current program.

This figure also shows that video capture starts at a time T_L_A_video. The duration of the acquisition is represented at 304 and is called here D_A_video. During this acquisition time, images 1 to n of channel A (Im.1 A, Im.2 A, ..., Im.n A) are displayed for scene 1 of channel A, images 1 to n of the waiting content (Im.1 Cont., ..., Im.n Cont.) are displayed, and images m to m+1 (Im.m B, ..., Im.m+1 B) are displayed for scene 2 of channel B.

The video capture is preferably carried out on the various video connectors, for example HDMI (for "High Definition Multimedia Interface") or SCART. The video signals can be digitized in YUV or RGB (for Red, Green, Blue) component formats to obtain video samples. These video samples may be taken directly from the output of a video decoder. The action of starting and/or ending the video acquisition is launched by a task scheduler or controller, as shown at 164 in Figure 1.

Similarly, the audio acquisition is illustrated by a plurality of audio samples (Ech.Audio 1 A, Ech.Audio 2 A, ..., Ech.Audio n A) for channel A and (Ech.Audio m B, ..., Ech.Audio m+1 B) for channel B. The audio acquisition duration is represented at 305 and is called D_A_audio. It starts at the instant T_L_A_audio.

The audio acquisition is preferably carried out on the various audio connectors such as HDMI or SCART. The audio signals can be digitized in PCM (for "Pulse Code Modulation") to obtain audio samples. These samples may also come directly from the output of the decoder. The action of starting and/or ending the audio acquisition is launched by the task scheduler or controller 164. These actions can be launched simultaneously with the actions on the video capture.

The channel change command can be given by transmitting infrared codes or via the decoder's control keys.

The moment the zapping command code is executed is shown here at 301, at time T_C_C_1. It is later than the times T_L_A_audio and T_L_A_video.

In this example shown in Figure 3, a waiting content with an animated "chaser" and an information banner is displayed during the zapping time.

Figures 4a to 4f illustrate various waiting contents with one or more displayed contents. Thus, in Figure 4a, the waiting content is a black image with a banner of textual program information (EPG), the time (h), and a slider indicating the progress of the current program. In Figure 4b, the waiting content comprises, in addition to the information of Figure 4a, a moving image such as a "chaser".

In Figure 4c, the waiting content is a frozen image, that is to say a still image, generally the last image of the scene preceding the zapping action. In addition to this still image, the waiting content also includes the information banner described above.

In Figure 4d, the waiting content comprises, in addition to the information described in Figure 4c, a moving image such as a "chaser".

In Figure 4e, the waiting content comprises a sequence in which a first, frozen image is displayed with the information banner, then a black image is displayed with the same banner.

In Figure 4f, the same sequence is displayed with, in addition, an animated image such as a "chaser" for each of the displayed images.

Obviously, these illustrations represent only possible exemplary embodiments. Many other waiting contents can be displayed, with one or more images in a sequence, with images or logos, animated or not. The audio portion of the waiting content can also play a part. For example, the sound of the first scene may continue for a time interval less than or equal to the zapping time, or the sound may be interrupted for the duration of the zapping.

To detect these waiting contents, a step of detecting information on the waiting content is performed. This step may contain one or more detection sub-steps. These sub-steps may, for example, be a black image detection, a still or frozen image detection, a detection of a banner of textual or animated information, a detection of moving images, a sound stop detection, etc. Figure 5 illustrates one of these detections: the detection of reference images in the waiting content. For a given decoder or a specific TV service, the waiting content includes the same images or image sequences. These images are reference images, which may, for example, be stored in the decoder after a training sequence via video acquisitions. This training sequence is performed before the content analysis stage. The reference images Im.Ir1 to Im.Irn, as illustrated at 501 in Figure 5, are thus recorded.

The detection of these reference images in the waiting content is carried out by comparing the reference images with the images of the displayed waiting sequence. This comparison can be performed on all or part of the displayed image, in particular to avoid the display area of the information banner. A tolerance level on the luminance and chrominance of the compared images is applied.

The following formula is applied, for example, to compare the displayed image with the reference images point by point:

Image_captured(x, y, t) − Image_Irn(x, y) = 0 (1)

To locate the position of the reference images temporally in the captured and displayed waiting sequence, the following formula is applied:

T_P_I_Rn = T_L_A_video + P_Imgn × (1 / fps) (2)

where T_P_I_Rn corresponds to the instant when the reference image "n" is found among the captured images, P_Imgn corresponds to the position of the reference image "n" among all the captured images, fps corresponds to the capture frequency (e.g. 50 or 60 Hz), and T_L_A_video corresponds to the start time of the video acquisition. T_P_I_Rm corresponds to the instant when the reference image "m" is found among the captured images; this image m is the last reference image found.
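A minimal sketch of the point-by-point comparison of equation (1), with the tolerance described above, and of the timestamp computation of equation (2). The frame representation (nested lists of luminance values) and the numeric values are illustrative assumptions:

```python
# Sketch of reference-image detection per equations (1) and (2).
# Frames are 2D lists of luminance values; a tolerance replaces the
# strict equality of equation (1), as the text describes.

def matches_reference(captured, reference, tol=4):
    """Equation (1) with tolerance: point-by-point comparison."""
    return all(
        abs(captured[y][x] - reference[y][x]) <= tol
        for y in range(len(reference))
        for x in range(len(reference[0]))
    )

def detection_instant(t_l_a_video, p_imgn, fps):
    """Equation (2): T_P_I_Rn = T_L_A_video + P_Imgn * (1 / fps)."""
    return t_l_a_video + p_imgn * (1.0 / fps)

# A 2x2 black reference image found at position 12 in a 50 Hz capture
# started at t = 0.0 s:
black = [[0, 0], [0, 0]]
frame = [[2, 1], [0, 3]]
assert matches_reference(frame, black, tol=4)
print(detection_instant(0.0, 12, 50))  # 0.24
```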

In the event that one or more images ("Im.Ir1" to "Im.Irn") normally exist in the waiting content sequence but are not found anywhere over the analysis period of the captured sequence, it can be considered that there are no reference images in the captured sequence. In this case, the acceptability score of the zapping time cannot be determined.

In the illustration of Figure 5, the reference images are compared with the captured images at 502 and 503, to locate these images in the waiting content between the instants T_P_I_Rn and T_P_I_Rm.

In the case where a reference image is detected (output O of 503), this information is kept in memory to select the right parametric model in step E206 of Figure 2.

If no reference image is detected in the waiting content, or if the normally expected sequence is not found (output N of 503), it means that the waiting content does not consist of those images. A parametric model adapted to the waiting content cannot be chosen at this stage. Another information detection (Detect. + 1) on the waiting content will then be implemented to find its composition and select the correct parametric model.

Another possible detection on the waiting content is that of a loss of fluidity of the video stream. This detection consists of checking whether or not the video sequence is rendered at one or more stable frame rates. The fluidity detection must start before the channel change command: it is launched before time T_C_C_1, in order to determine whether there is a loss of video fluidity from the existing program until the arrival of the new program.

The video detection analysis time may be equal to D_A_video. The loss-of-fluidity detection is performed on the images of the current channel and on the images of the newly requested channel. In order to detect any possible loss of fluidity, the number of captured images of the current channel must be greater than or equal to the number of images over a period of one second (i.e. the frame rate). For example, if the nominal rate is 25 frames per second, at least 25 images must be captured on the current channel.

The reasoning is identical for the images of channel B. The images of channels A and B are sometimes interspersed with waiting images, with or without the presence of an animated logo, a fixed logo, or information relating to the name of the channel and the associated program. This search is performed by detecting reference images or by detecting still or animated logos.

The loss of fluidity is due to the presence of a temporary freeze. The image freeze is detected by calculating whether the derivative of the luminance and chrominance levels with respect to time is zero, according to the following equations:

Image_chainA(x, y, t) − Image_chainA(x, y, t+1) = 0 (3)

Image_chainB(x, y, t) − Image_chainB(x, y, t+1) = 0 (4)
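A minimal sketch of equations (3) and (4): the image is considered frozen when consecutive frames are identical, i.e. the discrete time derivative of the pixel levels is zero. The frame representation is an illustrative assumption:

```python
# Sketch of frozen-image detection per equations (3) and (4): a frame
# pair is "frozen" when frame(x, y, t) - frame(x, y, t+1) == 0 at
# every pixel.

def is_frozen(frame_t, frame_t1):
    """True when the two frames are identical at every pixel."""
    return all(
        a == b
        for row_t, row_t1 in zip(frame_t, frame_t1)
        for a, b in zip(row_t, row_t1)
    )

def freeze_run_length(frames):
    """Count how many consecutive frame pairs at the start are frozen."""
    n = 0
    for prev, cur in zip(frames, frames[1:]):
        if not is_frozen(prev, cur):
            break
        n += 1
    return n

# Three identical 1x2 frames followed by a changed one -> 2 frozen pairs:
seq = [[[10, 10]], [[10, 10]], [[10, 10]], [[11, 10]]]
print(freeze_run_length(seq))  # 2
```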

Another example of detecting information about the waiting content is illustrated with reference to Figure 6. In this figure, the audio samples of channel A are shown from the start of the audio acquisition T_L_A_audio. The start of the zapping time is shown in this figure by T_P_I_ARn. From this moment, the sound samples of the waiting content are shown, from ContAudio 1 to ContAudio n.

When the zapping time has elapsed, the audio samples of channel B are available.

To detect a loss of sound samples or a sound stop ("mute") in the waiting content, the sound samples are compared against an audio threshold (S_audio) over N samples. A mute can in fact be considered detected when N samples below the threshold are detected over a period longer than a predetermined duration, for example 30 ms. This duration also corresponds to a number of samples M.

The following formula is then applied to detect the moment of absence of sound, and thus the beginning of the zapping time:

T_P_I_ARn = T_L_A_audio + M × (1 / audio sampling frequency) (5)

where M is the number of samples over which the sound is stopped. T_P_I_ARm corresponds to the instant when the sound level passes back above the audio threshold S_audio, i.e. when N samples above the threshold are detected over a period longer than a fixed duration, for example 30 ms.
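A minimal sketch of the mute detection and of equation (5). The signal values, threshold, and run length are illustrative assumptions:

```python
# Sketch of mute detection per equation (5): the sound is considered
# absent when a run of at least M consecutive samples stays below the
# audio threshold (M corresponding to, e.g., 30 ms of signal).

def mute_start(samples, threshold, m):
    """Index of the first run of m samples below threshold, or None."""
    run = 0
    for i, s in enumerate(samples):
        run = run + 1 if abs(s) < threshold else 0
        if run >= m:
            return i - m + 1
    return None

def mute_instant(t_l_a_audio, m, fs):
    """Equation (5): T_P_I_ARn = T_L_A_audio + M * (1 / fs)."""
    return t_l_a_audio + m * (1.0 / fs)

# 30 ms at 8 kHz is 240 samples; here a toy signal with a short run:
sig = [100, 90, 2, 1, 0, 2, 1, 80]
print(mute_start(sig, threshold=5, m=5))  # 2
print(mute_instant(0.0, 240, 8000))      # 0.03
```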

To detect an information banner in the waiting content, a method such as that described with reference to Figure 5 can be used, with a spatial localization of the comparison on the image area.

The appearance of logos and animated images in the waiting content can also be detected by localizing the comparison with reference images in a well-defined area of the display. The times of appearance and disappearance of a given logo can thus be determined. With this information on the audio and video parts of the waiting content, it is possible to choose the parametric model for this type of content and for the measured waiting time.

To this end, the parametric models are determined by learning, from a series of subjective tests in which a panel of users tests different waiting contents and different zapping times.

Thus, the parametric models are defined not only with respect to the zapping time between audiovisual scenes but also by taking into account information on the waiting content displayed between these two scenes.

In general, a parametric model taking these two elements into account is of the following form:

Q = 100 / (1 + (a / DZ)^(-b)) (6)

where Q is the acceptability score, DZ the measured zapping time, and a and b coefficients to be determined according to the waiting content.

The coefficient a is for example between 1000 and 4000 and the coefficient b is between 4 and 10.
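Under one reading of equation (6), equivalent to Q = 100 / (1 + (DZ/a)^b), and assuming DZ is expressed in milliseconds, a minimal sketch with illustrative coefficient values taken from within the stated ranges:

```python
# Sketch of the parametric model of equation (6), read as
# Q = 100 / (1 + (DZ / a) ** b), with DZ the zapping time (assumed in
# milliseconds here) and (a, b) chosen per waiting content.
# a=2000, b=6 are illustrative values within the stated ranges
# (a: 1000-4000, b: 4-10), not values from the patent.

def acceptability(dz_ms, a=2000.0, b=6.0):
    """Acceptability score Q, as a percentage."""
    return 100.0 / (1.0 + (dz_ms / a) ** b)

for dz in (500, 2000, 4000):
    print(f"DZ={dz} ms -> Q={acceptability(dz):.1f}%")
# Short zapping times score near 100 %; Q = 50 % exactly at DZ = a.
```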

Particular examples of parametric models, with coefficients a and b determined as a function of the waiting content, are illustrated with reference to Figures 7a and 7b.

If the waiting content is a black image without a video sequence, and the sound is not present, the moments of end of detection of the reference images (here, the black image) (T_P_I_Rm in Figure 5) and the moments of end of detection of the absence of sound (T_P_I_ARm in Figure 6) are substantially equivalent.

The zapping time is then determined as follows:

DZ = T_P_I_Rm − T_C_C_1 (7)

The acceptability of the zapping time for this type of waiting content is defined by the following parametric model:

Q is here the acceptability score of the zapping time, expressed as a percentage.

Figures 7a and 7b graphically illustrate this parametric model as a function of the measured zapping time.

If the waiting content is a frozen image without a video sequence, and the sound is not present, the zapping time is determined as in equation (7). The end of the frozen image detection corresponds to the moment when equations (3) and (4) are no longer verified.

The acceptability of the zapping time for this type of waiting content is defined by the following parametric model:

Q is here the acceptability score of the zapping time, expressed as a percentage.

Figure 7b graphically illustrates this parametric model as a function of the measured zapping time.

If the waiting content is an animated logo displayed, for example, on a black image, and the sound is not present, the moments of end of detection of the reference images (here, the black image) (T_P_I_Rm in Figure 5) and the moments of end of detection of the absence of sound (T_P_I_ARm in Figure 6) are substantially equivalent. The moments of start and end of display of the animated logo indeed lie between the moment of start of detection of the black image, T_P_I_Rn, and the moment of end of detection of the black image, T_P_I_Rm. The zapping duration is then determined in the same way as in equation (7).

The acceptability of the zapping time for this type of waiting content is defined by the following parametric model:

Figure 7a graphically illustrates this parametric model as a function of the measured zapping time.

These parametric models are non-exhaustive examples of models that characterize waiting contents. Many models can thus be determined, whether for contents with only one type of displayed information or for contents that contain sequences of several displayed pieces of information.

Similarly, the models used as examples here are determined from the total zapping time. Other models can also be expressed in terms of the intermediate display times of the different pieces of information of the waiting content.

Figure 8 shows an example of a device for determining an acceptability score. This device comprises a processor PROC cooperating with a memory block BM comprising a storage and/or working memory MEM.

The memory block can advantageously comprise a computer program containing code instructions for implementing the steps of the method for determining an acceptability rating within the meaning of the invention when these instructions are executed by the processor PROC, in particular the measurement of the zapping time between a first audiovisual scene and a second audiovisual scene after a zapping operation, and the taking into account of at least one item of information on the waiting content displayed between the first and second audiovisual scenes. Typically, the description of Figure 2 shows the steps of an algorithm of such a computer program. The computer program can also be stored on a memory medium readable by a reader of the device, or downloaded into the memory space of the device.
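The sequence of steps such a program carries out can be sketched as follows. All helper names, the model dictionary, and the model coefficients are illustrative, not taken from the patent; the step labels (E201, E202, E206, E207) follow the description of the method above.

```python
import math

def measure_zapping_time(zap_instant, second_scene_instant):
    # Step E201 (sketch): time elapsed between the zapping command
    # and the display of the second audiovisual scene.
    return second_scene_instant - zap_instant

def determine_acceptability(zap_instant, second_scene_instant,
                            waiting_content, models):
    """Sketch of the program steps: E201 measure the zapping time,
    E202 detect the waiting-content information (here passed in
    directly as `waiting_content`), E206 select the matching stored
    parametric model, E207 output the acceptability rating Q."""
    t_zap = measure_zapping_time(zap_instant, second_scene_instant)  # E201
    model = models[waiting_content]                                  # E202 + E206
    return model(t_zap)                                              # E207

# Two hypothetical stored parametric models, one per type of waiting content.
models = {
    "frozen_image": lambda t: max(0.0, 100.0 * math.exp(-0.5 * t)),
    "animated_logo": lambda t: max(0.0, 100.0 * math.exp(-0.3 * t)),
}
q = determine_acceptability(0.0, 2.0, "animated_logo", models)
```

In a full implementation the waiting-content type would itself be derived from the audio and video signals by the detection steps (black image, frozen image, absence of sound, and so on) rather than supplied as a parameter.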

The device comprises an input module (E) adapted to receive an audio signal (A) and a video signal (V), as well as zapping control information (Zap.). This information comes either from an external decoder or player, or from an internal module when the device is incorporated into such a decoder or player.

The device comprises an output module (S) adapted to transmit the acceptability rating Q to a communications network or to another data transmission device.

Claims

1. A method of determining an acceptability rating (Q) of a zapping time from a measurement (E201) of the zapping time between a first audiovisual scene (Sc.1) and a second audiovisual scene (Sc.2) after a zapping operation (E200), characterized in that it comprises the following steps:
- detecting (E202) information on the waiting content displayed between the first audiovisual scene and the second audiovisual scene;
- selecting (E206) a parametric model from a plurality of stored parametric models on the basis of the detected information and the measured time;
- determining (E207) an acceptability rating output by the selected parametric model.
2. The method of claim 1, characterized in that the detection of displayed information includes one or more detection steps from the following group:
- detection of a still image on all or part of the display,
- detection of a black image,
- detection of a frozen image of the first or second audiovisual scene,
- detection of a banner of textual information,
- detection of moving images,
- detection of an absence of sound.
3. A method according to claim 1, characterized in that it further comprises a measurement of the display time between the different items of information detected, the measured time being used for the selection of the parametric model.
4. A method according to claim 1, characterized in that the parametric models are determined by learning, from zapping time measurements and information on the waiting content.
5. A device for determining an acceptability rating of a zapping time, comprising a module for measuring the zapping time between a first audiovisual scene and a second audiovisual scene after a zapping operation, characterized in that it also comprises a module for obtaining at least one item of information on the waiting content displayed between the first and second audiovisual scenes, for determining the acceptability rating of the zapping time.
6. An audio/video decoder comprising a device as claimed in claim 5.
7. A computer program comprising code instructions for implementing a method according to any one of claims 1 to 4, when they are executed by a processor.
PCT/FR2010/052697 2009-12-18 2010-12-13 Device and method for determining an acceptability score for zapping time WO2011073559A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
FR0959247 2009-12-18
FR0959247 2009-12-18

Publications (1)

Publication Number Publication Date
WO2011073559A1 (en) 2011-06-23

Family

ID=42260329

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FR2010/052697 WO2011073559A1 (en) 2009-12-18 2010-12-13 Device and method for determining an acceptability score for zapping time

Country Status (1)

Country Link
WO (1) WO2011073559A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002073534A2 (en) * 2001-03-09 2002-09-19 Sarnoff Corporation Spatio-temporal channel for images
WO2005032145A1 (en) * 2003-08-29 2005-04-07 Nielsen Media Research, Inc. Audio based methods and apparatus for detecting a channel change event
KR20080000862A (en) * 2006-06-28 2008-01-03 주식회사 케이티 Channel zapping time measurement apparatus and method in internet protocol television
US20080244637A1 (en) * 2007-03-28 2008-10-02 Sony Corporation Obtaining metadata program information during channel changes

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GODANA B E ET AL: "Impact of Advertisements during Channel Zapping on Quality of Experience", NETWORKING AND SERVICES, 2009. ICNS '09. FIFTH INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 20 April 2009 (2009-04-20), pages 249-254, XP031461375, ISBN: 978-1-4244-3688-0 *
None

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10807466

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct app. not ent. europ. phase

Ref document number: 10807466

Country of ref document: EP

Kind code of ref document: A1