GB2478148A - Method of identifying a specific frame in a video stream so that additional information relating to the object in the frame can be displayed - Google Patents


Info

Publication number
GB2478148A
Authority
GB
United Kingdom
Prior art keywords
frame
video stream
video
content
supplementary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB201003250A
Other versions
GB2478148B (en)
GB201003250D0 (en)
Inventor
Stephen Henesy
James Bailey
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PIP INTERACTIVE Ltd
Original Assignee
PIP INTERACTIVE Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PIP INTERACTIVE Ltd filed Critical PIP INTERACTIVE Ltd
Priority to GB201003250A priority Critical patent/GB2478148B/en
Publication of GB201003250D0 publication Critical patent/GB201003250D0/en
Publication of GB2478148A publication Critical patent/GB2478148A/en
Application granted granted Critical
Publication of GB2478148B publication Critical patent/GB2478148B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/44 Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/445 Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information

Abstract

A method of adding timing information to a video stream by adding additional content 10 to each frame 12, wherein the content is based on the position of the frame within the video stream 14. The supplementary information may be a timeline 10 superimposed on the existing video along the top or bottom edge of the frame, the timeline varying cyclically during the video with a constant time period. The time period may be greater than twice the maximum error in the frame position reported by a multimedia video player. The timeline may be analysed to determine the time at which the frame appears in the video, and hence to identify the frame, so that information associated with the identified position and time 22 can be displayed. This method allows additional information relating to an object in the frame to be selected without accessing the wrong information when the scene changes, which could otherwise happen because of the error margin in the frame position reported by the video player.

Description

Computer video display method and system

This invention relates to a computer video display method and system. In particular, it relates to a computer video display method and system that can determine which individual frame of a video sequence is being displayed to a user at any given time.
When video content is being displayed to a user, there are situations where it is of advantage to be able to determine what the user is seeing at any given instant. This can allow additional content to be presented to a user that is determined by the context of the video on display. For example, this can be used to allow a user to click on the image of an object depicted in the video and thereby obtain information related to that object.
Using commonly available video playback components, such as the Adobe Flash Player FLVPlayback component for online streaming, such an accurate measure is not available. Information about the position in the video stream being displayed has an error margin of approximately four frames. When a video cuts from one scene to another, the content of sequential frames can be entirely different, so if a user happens to request additional content at the moment of a scene change, the additional content can be displayed in entirely the wrong context.
An aim of this invention is to allow the position in a video stream that is being displayed to be determined to a resolution of a single frame.
To this end, from a first aspect, the invention provides a method of adding timing information to a video stream comprising adding supplementary content to each frame of the video stream, the supplementary content being determined algorithmically in dependence upon the position of the frame to which it is added in the video stream.
Adding supplementary content to the video frames, rather than as associated metadata within the stream, ensures that the stream remains compatible with existing video players.
The position of the frame in the video stream may advantageously be determined as an offset from a frame that is displayed at a datum time.
In preferred embodiments, the supplementary content has the form of a line that is superimposed on pre-existing video content within the frame. The line may be split into two regions of contrasting appearance, with the relative lengths of the two regions changing by a known amount from one frame to the next. Therefore, the position of a frame within the video can be determined by calculating the lengths of the regions of the line. For example, the line may extend across an edge of the frame, typically at the bottom or the top, with a width of a small number of pixels (e.g., 2-5 pixels).
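For illustration, the encoding described in this paragraph could be sketched as follows. This is a hypothetical Python/NumPy sketch: the function name, the array layout (height x width x RGB), and the default bar height are assumptions for illustration, not part of the specification.

```python
import numpy as np

def add_timeline(frame: np.ndarray, frame_index: int, loop_frames: int,
                 bar_height: int = 3) -> np.ndarray:
    """Superimpose a timing line along the bottom edge of a frame.

    The bright region's width is proportional to the frame's position
    within the current timeline loop, so that position can later be
    recovered by measuring where the line changes from bright to dark.
    """
    out = frame.copy()
    h, w = out.shape[:2]
    # Fraction of the loop elapsed at this frame (zero at each datum frame).
    f = (frame_index % loop_frames) / loop_frames
    bright_w = int(round(f * w))
    out[h - bar_height:, :, :] = 0            # dark background segment
    out[h - bar_height:, :bright_w, :] = 255  # bright segment grows left to right
    return out
```

Applied to every frame before re-rendering, this produces the two contrasting regions whose relative lengths change by a constant amount from one frame to the next.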
In preferred embodiments, the form of the supplementary content varies cyclically during the course of the video stream with a constant period. The period is most advantageously greater than the maximum error in the position in a video stream that is reported by a video stream player. Preferably, the period is greater than twice that maximum error.
From a second aspect, this invention provides a method of determining the position within a playing video stream of a frame being displayed, where the video stream has had timing information added to it by a method according to the first aspect of the invention, comprising analysing the content of a video frame to identify supplementary content added to it, and assessing the supplementary content to determine the time at which the frame appears in the video stream.
The step of assessing the supplementary content may include determining the size of a region of the supplementary content.
A method of determining the position within a playing video stream may comprise an initial step of obtaining an approximate position of a frame from a player that is reproducing the stream and correcting the approximate position using a method according to the second aspect of the invention.
From a third aspect, the invention provides a system for providing context-dependent information to a user comprising identifying a spatial position in a video display at which a user has indicated that information is requested, identifying a frame being displayed in the video display using a method embodying the second aspect of the invention, and displaying information associated with the identified position and time.
From a fourth aspect, the invention provides a computer system programmed to add timing information to a video stream using a method embodying the first aspect of the invention.
An embodiment of the invention will now be described in detail, by way of example, and with reference to the accompanying drawings, in which: Figures 1 to 5 show highly diagrammatically five sequential frames of a video that has been processed in accordance with an embodiment of the invention, with a scene change taking place between Frames 3 and 4; and Figure 6 is a diagram of a timeline used in an embodiment of the invention.
The first step in implementing the invention is to process the video that is to be streamed to a user.
Video editing software, such as Adobe After Effects, is used to re-render the entire video. In this embodiment, a horizontal timeline 10 is overlaid on top of the content.
The timeline is 3 pixels high, extends to the full width of the video frame 12, and is positioned at the bottom of the frame. In this position, it can be easily masked or hidden when played back in the player, and it is in a known position that the player application can readily identify. The size of the timeline 10 is exaggerated in the figures; in practice, it will occupy a much smaller portion of the frame.
The timeline 10 is specifically constructed to make it easy to analyse. It is shown separately from the frame in Figure 6. In this example, the timeline 10 starts on a black background and extends as a white bar from left to right at a constant speed, so that the length of the white bar is directly proportional to time. By finding the length of the white bar, an application can determine the time that has elapsed since the bar was first drawn.
This progression is shown in Figures 1 to 5. As shown in Figure 1, at a start datum frame, the timeline 10 is entirely black.
In the subsequent frame, shown in Figure 2, there is a short length of white bar shown at 14 within the timeline 10. In each subsequent frame, the length of the white bar 14 grows steadily, with a constant length being added to the white bar 14 frame-by-frame.
Therefore, by measuring the length of the white bar 14, it is possible to deduce how many frames have elapsed since the datum frame was shown. Once a period known as the "timeline loop period", denoted P, has elapsed and the white bar 14 has grown to the full length of the timeline 10, the timeline reverts to all black in a new timing datum frame.
The speed of the timeline bar needs to be chosen to suit the player it is intended to work on. Assuming that there is an error margin E associated with reading the video time directly from the player, the timeline loop period P should be substantially larger than the error margin E; typically P = kE, where k ≥ 2.
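As a small worked example of this choice (the function name, the frame-based formulation, and the sample figures are illustrative assumptions, not values from the patent):

```python
def choose_loop_frames(error_frames: int, k: int = 2) -> int:
    """Pick a timeline loop period, in frames, from the player's error margin.

    The description asks for P greater than k * E (with k = 2 preferred),
    so one extra frame is added to make the inequality strict.
    """
    return k * error_frames + 1

# A player with a four-frame error margin needs a loop of at least
# 9 frames; at 25 fps that would be 0.36 s.
loop_frames = choose_loop_frames(error_frames=4)
```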
It will be clearly understood that this embodiment can be subject to multiple variations without changing its principle of operation. The colours may be different (for example, the roles of the dark and light segments may be reversed); the position of the timeline may vary (for example, it may be at a vertical screen edge or across the top of the frame); amongst many other alternatives. The shape need not be a line: it could be a curve or an area that is filled.
An example application of the invention will now be described. Video that is displayed in a player includes images of a number of products. When a user moves a cursor over the video display using a pointing device such as a mouse, the video is frozen. A user can then obtain information about a product by moving the mouse pointer over its image and clicking on it.
Consider a situation that can arise where video is being displayed in a player that has a margin of error of four frames when asked to report its position within a playing video.
Assume first that the video freezes with Frame 1 being displayed and that a user clicks at point A. The correct reaction would be to display information about the bicycle shown at 20.
However, if the software reports that Frame 2 or later is being displayed within its margin for error, this information will not be displayed because the bicycle 20 has gone out of the frame.
Assume then that the video freezes with Frame 2 or Frame 3 being displayed and that a user clicks at point B. The correct reaction would be to display information about the bicycle shown at 22. If the software reports that Frame 1 is being displayed, this information will not be displayed because the bicycle 22 has not yet reached point B. Worse, if the software reports that Frame 4 or 5 is being displayed (well within its margin for error), information about a camera 24 that appears at point B in Frames 4 or 5, following the change of scene, will be displayed instead.
Once a video is prepared as described above, it can be streamed to a user for viewing.
The process by which this can be achieved will now be described.
The player should be built with the ability to scan or read the image that is displayed to the user. With some players, security features will need to be addressed and permissions added in order to allow for image sampling.
Given that the timeline is visual, it might potentially distract from the intended content. To avoid this, it is advisable to add an overlay or mask to hide the unwanted timeline, but this must be done in such a way that the timeline remains available in the video for analysis.
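One simple way to hide the line without losing it can be sketched as follows, under the assumption that the decoded frame is available as a pixel array; a real player such as Flash would instead place a masking overlay on top of the display. The function name and approach are illustrative, not from the patent.

```python
import numpy as np

def masked_view(frame: np.ndarray, bar_height: int = 3) -> np.ndarray:
    """Return a copy of the frame for display with the timeline hidden.

    The bottom rows are overwritten with the row just above them, so the
    viewer never sees the bar, while the untouched original frame remains
    available for timeline analysis.
    """
    shown = frame.copy()
    shown[-bar_height:, :, :] = shown[-bar_height - 1, :, :]
    return shown
```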
A program that runs on the platform on which the video is to be displayed analyses the timeline by performing the following steps.
1. Take a frame of the video and cast it to an object such as a bitmap so that it can be analysed on a pixel-by-pixel basis.
2. Determine the size of the bitmap and position a pointer to start at the bottom left corner.
3. Move from left to right across the timeline, reading each pixel and measuring its luminosity (the average value of the red, green and blue channels).
4. Find the x ordinate of the point at which the timeline changes from being very bright to very dark. This analysis must allow for the compression process offsetting pixels from their original colour, so the code checks whether each pixel is dark or light, rather than whether it is pure white or pure black.
5. Determine the width of the bright part of the timeline, Wbright, and divide it by the total width of the video, Wvideo, to give the fraction f = Wbright/Wvideo of the elapsed time through the timeline loop period P. This means that the actual time t within the video stream has a value t = (n + f)P, where n is an integer.
6. Read the approximate value tapprox of the current time from the video player, using functions provided in its standard API, to obtain an approximate value, with error E, for the time within the video stream.
7. It is known that tapprox − E ≤ t ≤ tapprox + E. Since P > 2E, there is only one possible solution for t = (n + f)P, so the value of t can be determined unambiguously.
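A minimal decoding sketch of these steps, assuming a NumPy array frame, a 3-pixel bar at the bottom edge, and a mid-grey luminosity threshold (the function name and all parameters are illustrative assumptions, not part of the patent):

```python
import numpy as np

def decode_frame_time(frame: np.ndarray, period_s: float,
                      t_approx: float, bar_height: int = 3,
                      threshold: float = 128.0) -> float:
    """Recover the exact stream time from the bottom-edge timeline.

    Scans the bar left to right, finds the x ordinate where luminosity
    drops from bright to dark, converts it to the fraction f of the loop
    period, then uses the player's approximate time to pick the unique
    integer n with t = (n + f) * P.
    """
    h, w = frame.shape[:2]
    # Sample a row from the middle of the bar.
    row = frame[h - 1 - bar_height // 2, :, :].astype(float)
    # Luminosity is the average of the red, green and blue channels;
    # compare against a mid threshold rather than pure black or white,
    # since compression shifts pixel values.
    bright = row.mean(axis=1) > threshold
    bright_w = w if bright.all() else int(np.argmin(bright))
    f = bright_w / w                     # fraction of loop elapsed
    n = round(t_approx / period_s - f)   # unique because P > 2E
    return (n + f) * period_s
```

The rounding step is where the player's coarse reading disambiguates which loop the frame belongs to: as long as tapprox is within half a loop period of the true time, only one value of n is possible.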
ADOBE, AFTER EFFECTS and FLASH are registered trade marks of Adobe Systems Incorporated.

Claims (16)

  1. A method of adding timing information to a video stream comprising adding supplementary content to each frame of the video stream, the form of the supplementary content being determined algorithmically in dependence upon the position of the frame to which it is added in the video stream.
  2. A method according to claim 1 in which the position of the frame in the video stream is determined as an offset from a frame that is displayed at a datum time.
  3. A method according to claim 1 or claim 2 in which the supplementary content has the form of a line that is superimposed on pre-existing video content within the frame.
  4. A method according to claim 3 in which the line is split into two regions of contrasting appearance, with the relative sizes of the two regions changing by a known amount from one frame to the next.
  5. A method according to claim 3 or claim 4 in which the line may extend across an edge of the frame.
  6. A method according to claim 5 in which the line extends across the bottom or the top edge of the frame.
  7. A method according to any one of claims 3 to 6 in which the line has a width of a small number of pixels.
  8. A method according to any preceding claim in which the form of the supplementary content varies cyclically during the course of the video stream with a constant period.
  9. A method according to claim 8 in which the period is greater than the maximum error in the position in a video stream that is reported by a video stream player.
  10. A method according to claim 9 in which the period is greater than twice the maximum error in the position in a video stream that is reported by a video stream player.
  11. A method of adding timing information to a video stream substantially as herein described with reference to the accompanying drawings.
  12. A method of determining the position within a playing video stream of a frame being displayed, where the video stream has had timing information added to it by a method according to any preceding claim, comprising analysing the content of a video frame to identify supplementary content added to it, and assessing the supplementary content to determine the time at which the frame appears in the video stream.
  13. A method of determining the position within a playing video stream according to claim 12 in which the step of assessing the supplementary content includes determining the size of a region of the supplementary content.
  14. A method of determining the position within a playing video stream comprising a step of obtaining an approximate position of a frame from a player that is reproducing the stream and correcting the approximate position using a method according to claim 12 or claim 13.
  15. A method of determining the position within a playing video stream of a frame being displayed substantially as herein described with reference to the accompanying drawings.
  16. A system for providing context-dependent information to a user comprising identifying a spatial position in a video display at which a user has indicated that information is requested, identifying a frame being displayed in the video display using a method according to any one of claims 12 to 15, and displaying information associated with the identified position and time.
  17. A computer system programmed to add timing information to a video stream using a method according to any one of claims 1 to 11.

Amended claims have been filed as follows:

  1. A method of adding timing information to a video stream comprising adding supplementary video content to be superimposed upon pre-existing video content within each frame of the video stream, the supplementary video content including two regions of contrasting appearance, with the relative sizes of the two regions changing by a known amount from one frame to the next, the relative sizes of the regions of the supplementary video content being determined algorithmically in dependence upon the position within the video stream of the frame to which it is added.
  2. A method according to claim 1 in which the position of the frame in the video stream is determined as an offset from a frame that is displayed at a datum time.
  3. A method according to claim 1 or claim 2 in which the supplementary video content has the form of a line that is superimposed on pre-existing video content within the frame.
  4. A method according to claim 3 in which the line extends across an edge of the frame.
  5. A method according to claim 4 in which the line extends across the bottom or the top edge of the frame.
  6. A method according to any one of claims 3 to 5 in which the line has a width of 2 to 5 pixels.
  7. A method according to any preceding claim in which the form of the supplementary video content varies cyclically during the course of the video stream with a constant period.
  8. A method according to claim 7 in which the period is greater than the maximum error in the position in a video stream that is reported by a video stream player.
  9. A method according to claim 8 in which the period is greater than twice the maximum error in the position in a video stream that is reported by a video stream player.
  10. A method of adding timing information to a video stream substantially as herein described with reference to the accompanying drawings.
  11. A method of determining the position within a playing video stream of a frame being displayed, where the video stream has had timing information added to it by a method according to any preceding claim, comprising analysing the content of a video frame to identify supplementary video content added to it, and assessing the supplementary video content to determine the time at which the frame appears in the video stream.
  12. A method of determining the position within a playing video stream according to claim 11 in which the step of assessing the supplementary video content includes determining the size of a region of the supplementary content.
  13. A method of determining the position within a playing video stream comprising a step of obtaining an approximate position of a frame from a player that is reproducing the stream and correcting the approximate position using a method according to claim 11 or claim 12.
  14. A method of determining the position within a playing video stream of a frame being displayed substantially as herein described with reference to the accompanying drawings.
  15. A system for providing context-dependent information to a user comprising identifying a spatial position in a video display at which a user has indicated that information is requested, identifying a frame being displayed in the video display using a method according to any one of claims 11 to 14, and displaying information associated with the identified position and time.
  16. A computer system programmed to add timing information to a video stream using a method according to any one of claims 1 to 10.
GB201003250A 2010-02-26 2010-02-26 Computer video display method and system Expired - Fee Related GB2478148B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB201003250A GB2478148B (en) 2010-02-26 2010-02-26 Computer video display method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB201003250A GB2478148B (en) 2010-02-26 2010-02-26 Computer video display method and system

Publications (3)

Publication Number Publication Date
GB201003250D0 (en) 2010-04-14
GB2478148A (en) 2011-08-31
GB2478148B (en) 2012-01-25

Family

ID=42125670

Family Applications (1)

Application Number Title Priority Date Filing Date
GB201003250A Expired - Fee Related GB2478148B (en) 2010-02-26 2010-02-26 Computer video display method and system

Country Status (1)

Country Link
GB (1) GB2478148B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0982947A2 (en) * 1998-08-24 2000-03-01 Sharp Kabushiki Kaisha Audio video encoding system with enhanced functionality
US6173317B1 (en) * 1997-03-14 2001-01-09 Microsoft Corporation Streaming and displaying a video stream with synchronized annotations over a computer network
US6320588B1 (en) * 1992-06-03 2001-11-20 Compaq Computer Corporation Audio/video storage and retrieval for multimedia workstations
WO2007047645A1 (en) * 2005-10-14 2007-04-26 Microsoft Corporation Clickable video hyperlink
US20080037954A1 (en) * 2006-05-15 2008-02-14 Microsoft Corporation Automatic Video Glitch Detection and Audio-Video Synchronization Assessment
US20080080841A1 (en) * 2006-09-29 2008-04-03 Yuuichiro Aso Information Processing Apparatus and Audio/Video Data Reproducing Method

Also Published As

Publication number Publication date
GB2478148B (en) 2012-01-25
GB201003250D0 (en) 2010-04-14

Similar Documents

Publication Publication Date Title
US9807384B1 (en) Method, apparatus and computer program product for testing a display
US6392710B1 (en) Graphical user interface for field-based definition of special effects in a video editing system
US8711198B2 (en) Video conference
US20180018510A1 (en) Information processing apparatus, method and computer program product
JP6834353B2 (en) Image processing equipment, image processing systems, image processing methods and programs
RU2007108767A (en) VIDEO PROCESSING
JP6756338B2 (en) Image processing equipment, image processing systems, image processing methods and programs
US10148946B2 (en) Method for generating test patterns for detecting and quantifying losses in video equipment
CN108632666B (en) Video detection method and video detection equipment
JP2019067130A (en) Image processing device, image processing system, image processing method, and program
US9773523B2 (en) Apparatus, method and computer program
MX2013013874A (en) Systems and methods for testing video hardware by evaluating output video frames containing embedded reference characteristics.
JP2804949B2 (en) Moving image processing method
JP4244584B2 (en) Important image detection apparatus, important image detection method, program and recording medium, and important image detection system
CN105376511B (en) Image processing apparatus, image processing system and image processing method
GB2478148A (en) Method of identifying a specific frame in a video stream so that additional information relating to the object in the frame can be displayed
US8587518B2 (en) Disparity cursors for measurement of 3D images
JP7080614B2 (en) Image processing equipment, image processing system, image processing method, and program
Hong-cai et al. A shot boundary detection method based on color space
US20220078388A1 (en) Image capture apparatus, electronic apparatus, and chroma suppression program
JPH04235589A (en) Display of contents of animated picture
KR102596137B1 (en) Method for determining the start of relaxation after a burn-in process at optical display devices controllable pixel by pixel
CN115834952A (en) Video frame rate detection method and device based on visual perception
CN113596348B (en) Image processing method and device
CN111325804B (en) Method and device for determining YUV image display parameters

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20150226