GB2559983A - Entertainment device and system - Google Patents
- Publication number
- GB2559983A (application GB1702913.3A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- video
- event
- processing device
- video content
- content
- Prior art date
- Legal status
- Withdrawn
Classifications
- H04N5/2624—Studio circuits, e.g. for mixing, switching-over or special effects, for obtaining an image which is composed of whole input images, e.g. splitscreen
- H04N21/4316—Generation of visual interfaces for content selection or interaction; content or additional data rendering for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
- H04N21/47—End-user applications
- H04N21/47214—End-user interface for requesting content, additional data or services, or for interacting with content, e.g. for content reservation, setting reminders or requesting event notification, e.g. of sport results or stock market
- H04N5/445—Receiver circuitry for the reception of television signals according to analogue transmission standards, for displaying additional information
- H04N5/45—Picture in picture, e.g. displaying simultaneously another television channel in a region of the screen
Abstract
A device and method for displaying at least a first video to a user via a display in which the occurrence of a real world event shown in a second video is identified 400. In response to identifying such an event video content comprising a portion of the second video corresponding to the event in addition to the currently viewed first video is generated 410, such that the portion of the second video is overlaid upon the first video. The generated video content is shown 420 on the display. The second video may be inserted so that it overlaps the first video as in a picture in picture arrangement. The events may be detected by image processing, use of metadata accompanying the video content, or use of event data not supplied with the content. This event data may come from social media or a second user. The user may select types of events which are to be identified.
Description
(54) Title of the Invention: Entertainment device and system
Abstract Title: Displaying video corresponding to a detected event
Fig. 4
[Drawings, sheet 1/3] Figure 1: display arrangement showing the main video 100 with overlaid alert area 110, second video content 120 and information area 130.
Event | Time | Reference |
---|---|---|
Kick off | 0' | A |
Goal for red team by player X | 17' | B |
Red card for player Y of blue team | 39' | C |
Half time | 45'+4' | D |
Second half kick off | 45' | E |
Goal for blue team by player Z | 58' | F |
Goal for red team by player W | 71' | G |
Goal for red team by player X | 72' | H |
Full time | 90'+3' | I |
Figure 2A
[Drawings, sheet 2/3] Figure 2B: timeline relating the in-game event times of Figure 2A (39', 45'+4', 45', 58', 71', 72', 90'+3') to the elapsed time of the programme 200.
[Drawings, sheet 3/3] Figure 3: processing device 300 comprising a video receiving unit 310, an event identifying unit 320, a video generation unit 330 and a display unit 340.
Figure 4: processing method comprising steps 400 (identify the occurrence of an event), 410 (generate the video content) and 420 (display the generated video content).
ENTERTAINMENT DEVICE AND SYSTEM
This invention relates to an entertainment device and system.
When viewing video content, it is often the case that a viewer is interested in more than one piece of content that is being broadcast simultaneously. The viewer may therefore miss out on viewing desired pieces of content due to the clash in programme times. Some solutions to this have become widespread in the art, such as recording a programme to view it at a later, more convenient time.
Recording a programme for later viewing may be unsatisfactory for many reasons; for example, a convenient time may be many days away and the viewer would likely hear or see information about the programme in the meantime - and thus the programme would be spoiled for the viewer. Similarly, for example for sporting events, it may be that being up-to-date with the programme as it happens is of utmost importance and therefore recording the programme may be entirely unsatisfactory - this is particularly true when a number of games are being played concurrently to determine the outcome of a stage of a tournament.
The use of picture-in-picture (PiP) methods is an alternative that may also be used to alleviate this problem. PiP arrangements may be a significant barrier to a viewer’s enjoyment of either piece of video content, however. This is because generally both videos are displayed at a reduced size, or one is arranged so as to cover a portion of the other. In order to use a PiP method, the user must either be able to receive two video sources at once and display them together, or the user must receive a standard video from a broadcaster in which the broadcaster manages the PiP content. The latter of these can often result in content that is not of interest to the viewer being displayed, while the former means that a viewer has to manage the content that is displayed manually.
Therefore there is a need to be able to allow a viewer to keep up-to-date with the events occurring in multiple pieces of video content without having a significant and detrimental impact on their enjoyment of the content.
Various aspects and features of the present invention are defined in the appended claims and within the text of the accompanying description and include at least a processing device, a method of operating a processing device, and a computer program.
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 schematically illustrates a display arrangement;
Figure 2A schematically illustrates an event information feed;
Figure 2B schematically illustrates video content timings;
Figure 3 schematically illustrates a processing device; and
Figure 4 schematically illustrates a processing method.
The present disclosure addresses the problems described above by providing a display system and method for displaying relevant portions of a second piece of content to a viewer as an overlay to a first piece of content. The fact that only portions of the second piece of content that are of interest are displayed as an overlay means that the disruption to a user’s viewing experience is reduced. This is because the second picture is not present for such long periods of time, and the viewer does not actively have to monitor the other pieces of content to determine when an event of interest has occurred.
Many of the embodiments described below are provided in the context of video content that corresponds to sporting events. This should not be regarded as being restrictive, as the techniques described below may be equally well applied to music charts or news coverage, or any type of video content; such embodiments may be useful whenever a viewer has an interest in more than one piece of content, regardless of what the content is.
Figure 1 schematically illustrates a display arrangement according to the present disclosure. The main video 100 occupies the display area, with alert area 110, second video content 120 and information area 130 each being overlaid upon the main video 100.
The size, shape and placement of each of these overlaid elements is of course entirely exemplary; they may be varied in accordance with the content of the main video 100, user preferences, or any other factors. While these areas may be present at all times (for example, showing the most recent piece of content or information until a new one supersedes it), in some embodiments these areas are only visible when new information or content is to be provided to the viewer. The content of each of these overlaid elements may also be varied, for example a user may specify particular teams or competitions that they are interested in, and updates are limited only to these interests.
The main video 100 is the piece of content that the viewer has selected for viewing. This may, for example, be live content, streamed or on-demand content, or content stored on a removable storage medium.
The alert area 110 is operable to display alerts to a user. For example, this could comprise the time, scorer, and team information for a recent goal in a football match. Any combination of text and graphics may be appropriate for displaying such information to a user.
The second video content 120 is operable to display video content from another source to a user, or from any number of sources in succession in the case of a viewer being interested in events in more than one other piece of content. In one example, the second video content 120 comprises a clip of the recent goal described in the alert area 110. The content of the second video content is described in more detail below.
The information area 130 may comprise more general information - for example, a score ticker that keeps a viewer informed of scores of each other game being played in a same competition at the same time, or league standings relating to the games being played. This may be provided at all times, at predetermined time intervals, or in response to any events in the other video content being monitored, for example.
The main video content 100 and second video content 120 may be obtained from any suitable source. In some embodiments each piece of content is obtained via an online content provider, while in others each is obtained from a standard television broadcast. In some embodiments, a mixture of source types is used to obtain the plurality of pieces of video content.
While using broadcast television content may be advantageous in reducing the amount of bandwidth required to stream the desired video content, this may require the broadcast content to be recorded locally in order to have it available for display as the second video content 120. This is because even if display of the second video content 120 were started at the exact moment the event happens, the viewer would already have missed it - this is particularly true in the case of a goal scored in a game, as not only would the goal already have been scored, but the build-up play leading to the goal would also be missing from the second video content 120. The same applies to an event in any type of programme; without the context in which the event happens being presented to the viewer, the viewer will not be able to appreciate the event fully.
This issue may be avoided in the context of a streamed source of video content, as it may be possible to obtain video content beginning from a desired time from the server that stores the content.
Regardless of the source of the video content, it may generally be considered useful to provide video content that begins before the event happens and/or ends after the event happens, as appropriate. For example, if the event were the start of a particular scene in a programme then it would not necessarily be useful to provide any video content before the event; by contrast, for an event later in that scene, the preceding material in the scene may be of considerable interest and assistance to the viewer.
In order to provide such a system, the viewing device should be able to identify events that are shown in other pieces of video content. In response to identifying such an event, the system can then generate video content comprising a portion of the second video corresponding to the event in addition to the currently viewed first video, such that the portion of the second video is overlaid upon the first video.
Events that are identified could be real-world events, such as a goal in a football match that is occurring, or could be events that do not occur in the real world. An example of the latter is if the user is interested in an e-sports event; the event that is identified could be an enemy being killed in a shooting game. It is clear that in this case the event is not a real-world event, despite being caused by a controlling player’s real-world actions. As noted above, an event that could be identified in the present arrangement may be shown in the second video such that the event is viewable by a user who is viewing that content.
The occurrence of an event may be identified by one or more of: performing image processing on the second video, reading metadata supplied with the video content, and reading a source of event information not supplied with the video content. These are discussed in more detail below.
In one embodiment, image processing is performed on one or more pieces of video content other than the main video 100 so as to determine when events happen. Using the example of a football match, a small area of the video content could be monitored (such as the overlaid scoreboard) for changes that indicate that an event (in this case, a goal) has happened. Other graphical overlays could be monitored, such as a graphic that appears when a player has been awarded a yellow card, to determine when events happen. This may be less useful for non-sporting applications, in which the events are not so well defined by identifiers, but it could be possible to identify different scenes or the like by image processing despite this.
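By way of illustration only, a minimal sketch of such region monitoring is given below (in Python, assuming decoded frames are available as NumPy arrays); the scoreboard region coordinates and the change threshold are hypothetical values rather than parameters of any particular system.

```python
import numpy as np

# Hypothetical scoreboard region (top, bottom, left, right) in pixels.
SCOREBOARD_REGION = (20, 60, 40, 260)
CHANGE_THRESHOLD = 12.0  # mean absolute pixel difference that counts as a change

def scoreboard_changed(prev_frame, curr_frame):
    """Return True if the overlaid scoreboard region differs noticeably between
    two consecutive frames, suggesting, for example, that the score has updated."""
    top, bottom, left, right = SCOREBOARD_REGION
    prev_region = prev_frame[top:bottom, left:right].astype(np.float32)
    curr_region = curr_frame[top:bottom, left:right].astype(np.float32)
    return float(np.abs(curr_region - prev_region).mean()) > CHANGE_THRESHOLD

# Usage with synthetic frames standing in for decoded video frames:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame_a = rng.integers(0, 255, (720, 1280, 3), dtype=np.uint8)
    frame_b = frame_a.copy()
    frame_b[20:60, 40:260] = 255  # simulate the score graphic updating
    print(scoreboard_changed(frame_a, frame_b))  # True
```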
Alternatively, or in addition, metadata could be provided with the main video 100 or the secondary video content 120 that identifies events. This can be much less processor-intensive for a viewer’s processing device than the image processing method, but it does require the information to be inserted by the content provider. However, this may be undesirable in that the user is then limited to events that the content provider defines and has less control over the operation of the present arrangement.
As a further alternative or additional method, event information may be obtained from a source other than the video content itself. In one example, the content provider publishes event information as a live feed that can be accessed. Alternatively, or in addition, a third party may provide such information.
For example, a number of third-party score update services exist that inform users of important events in games. These often contain at least information about the type of event (such as ‘goal’), who was involved in the event (such as the goal scorer), and the in-game time of the event (for example, ‘seventeenth minute’). These pieces of information are sufficient to characterise any event and locate it in a piece of video content. Such services could exist for any programme, whether provided by the content provider or as a fan-made feed or the like.
Another possible source of event information is a social media platform; by monitoring any posts that are tagged with a specific programme’s name it would be possible to identify the occurrence of an event in the programme. Further to this, the posts themselves could provide sufficient context to identify both the nature of the event and the timing of the event. Keyword recognition could be used to implement this - for example, a post saying ‘great goal, player X’ could be used to infer that a goal was scored by player X recently.
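A minimal sketch of such keyword recognition is given below; the keyword patterns, the post format and the assumed mean reaction delay are illustrative assumptions rather than features of any particular social media platform.

```python
import re
from datetime import datetime, timedelta

# Hypothetical keyword patterns mapped to event types.
EVENT_KEYWORDS = {
    "goal": re.compile(r"\bgoal\b", re.IGNORECASE),
    "red card": re.compile(r"\bred card\b", re.IGNORECASE),
}

def infer_event(post_text, post_time, mean_reaction_delay=timedelta(seconds=40)):
    """Infer an event type and an approximate event time from a post tagged with
    the programme's name, assuming a typical delay between event and post."""
    for event_type, pattern in EVENT_KEYWORDS.items():
        if pattern.search(post_text):
            return event_type, post_time - mean_reaction_delay
    return None

# 'Great goal, player X' is inferred as a goal roughly forty seconds earlier.
print(infer_event("Great goal, player X!", datetime(2017, 2, 23, 15, 17, 0)))
```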
Users are often very quick to react to events on social media, and as such the updates by users are almost in real-time and therefore suitable to keep a viewer informed of such events - if there is a common or mean response time between an event occurring and people posting comments, it could be straightforward to identify the event in the video content. In order to add a suitable margin for error, a longer video clip may be shown when using a source such as social media, in which the timing information may be less reliable.
A further example of a source of event information is a second user. The second user may be able to communicate recommendations for second video content in a programme that they are currently viewing. For example, if friends are watching different programmes simultaneously on different displays (or in different locations altogether) then one may wish to share a portion of content with another user. The second user could then provide an input to their processing device and define a duration of a clip prior to the input to share (although this may be predefined, of course). Information about this input could be provided to a first user’s processing device, and then used to identify a portion of a particular piece of video content to present to the first user in a manner according to the present disclosure. An example application of this is two friends who support different teams and therefore watch different games that may be shown at the same time - each user may wish to share any action in their game with the other user, so as to make the viewing a more social experience.
The user of a processing device may be able to select one or more sources of event information, including but not limited to the sources described above. For example, the user could select a preferred events feed, select a particular social media service (or a portion thereof - for example, a football team’s fan page that provides match updates), and/or select particular friends who are allowed to provide recommended content.
Figure 2A shows an example of a feed that provides event information for a single football match. Similar feeds could be provided for any programme, and may be used to track more than one if a field exists that identifies a particular programme.
The first column of the feed defines the events that have occurred, while the second column defines the timing of the events. In this example the timing is provided with respect to an in-game clock, rather than using the actual time at which the event happened, but any suitable system for event timing could be used. A third column is shown here for ease of reference in Figure 2A, but would not be present in a live event feed.
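By way of illustration, the feed of Figure 2A might be represented internally as simple structured records along the following lines; the field names are assumptions made for the sketch rather than a defined feed format.

```python
from dataclasses import dataclass

@dataclass
class FeedEvent:
    event_type: str       # e.g. "Goal", "Red card", "Kick off"
    in_game_minute: int   # minute on the in-game clock (second column of Fig. 2A)
    added_time: int = 0   # e.g. the +4 in 45'+4'
    detail: str = ""      # who was involved, which team, and so on

# A few entries corresponding to the feed of Figure 2A.
feed = [
    FeedEvent("Kick off", 0),
    FeedEvent("Goal", 17, detail="red team, player X"),
    FeedEvent("Red card", 39, detail="blue team, player Y"),
    FeedEvent("Half time", 45, added_time=4),
    FeedEvent("Second half kick off", 45),
    FeedEvent("Goal", 58, detail="blue team, player Z"),
]
print(feed[1])
```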
Figure 2B is a schematic illustration of the relationship between the time of the events listed in Figure 2A and the elapsed time of a programme 200 showing those events. The elapsed time of the programme 200 is shown below the programme content, whilst the events are listed above the content with their in-game times and a letter corresponding to the third column of Figure 2A.
Figure 2B illustrates the need for a mapping between the two sets of timings, as it is apparent that minute zero occurs at a different point on each scale (for instance, due to pre-match build-up). As a result, simply showing the seventeenth minute of the broadcast would not show the goal scored at seventeen minutes; instead it would show the pre-match content, which in this example lasts for twenty-five minutes.
It should also be noted that an event (half time) occurs forty-nine minutes into the match - but the match restarts fifteen minutes later at the forty-five minute mark. Therefore it would appear that the forty-ninth minute actually occurs before the forty-fifth minute. Any mapping between the two sets of timing information should account for this discrepancy.
One way in which this may be managed is to set a particular event or events as the ‘zero point’ against which all other events are timed; this is an example of event information for a predetermined event being used to identify a correspondence between an event time described by the event information and an elapsed time in the second video. In the above example, this could be the start of each half of the game (events A and E). This also addresses the problem of a first half not usually being exactly 45 minutes; if all times were defined with reference to kick-off, then the added time in the first half would cause the events of the second half to not match up with times in the programme correctly.
In such an arrangement, event A would be assigned the time of twenty-five minutes (as this is the time into the video content 200 at which kick-off occurs). Each subsequent event until E (the second half) would then have a conversion factor of twenty-five minutes added to its time in order to arrive at the time in the video content 200 at which the event occurs. Similarly, the event E could be assigned a time of eighty-nine minutes. Each of the events F, G, H and I would then have forty-five subtracted from their event time, before adding this value to the eighty-nine minutes to obtain the time of the event in the video content 200. This is merely exemplary, and any method of relating event times to elapsed time of the corresponding video content may be used.
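A sketch of that mapping, using the worked figures above (kick-off shown twenty-five minutes into the video content 200 and the second-half kick-off at eighty-nine minutes), is given below; the anchor representation and the function itself are assumptions made for illustration.

```python
# Anchors: (in-game minute of the reference event, elapsed minutes into the video).
FIRST_HALF_ANCHOR = (0, 25)    # event A: kick off, shown 25 minutes into the programme
SECOND_HALF_ANCHOR = (45, 89)  # event E: second-half kick off, shown at 89 minutes

def event_to_video_minute(in_game_minute, second_half):
    """Convert an in-game event time to elapsed time in the video content 200,
    timing each event against the kick-off of its own half."""
    anchor_game, anchor_video = SECOND_HALF_ANCHOR if second_half else FIRST_HALF_ANCHOR
    return anchor_video + (in_game_minute - anchor_game)

print(event_to_video_minute(17, second_half=False))  # 17' goal -> 42 minutes of video
print(event_to_video_minute(58, second_half=True))   # 58' goal -> 102 minutes of video
```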
Alternatively, or in addition, if the delay between the event identification and the video content showing the event is known, then it may be possible to request video content for a particular time period relative to the event - for example, given a delay of thirty seconds, upon receiving the identification, video content from forty-five seconds ago until fifteen seconds ago could be requested so as to provide a thirty-second clip that encompasses the event.
As noted above, the second video content 120 may be a thirty second clip that encompasses the corresponding event. Of course, any suitable time may be selected; even a still image may be a suitable amount of content to show to give context to the event. The duration of the clip could be selected by the skilled person freely. Alternatively, a user could indicate their preferences to the processing system. The timing could also depend on the event that occurs - for example, a goal by a favoured team could result in a longer clip being displayed than a non-favoured team’s goal.
In more general terms, the portion of video content comprises N seconds of video before the event and M seconds after the event, wherein N and M are greater than or equal to 0 (wherein if N and M are each 0, this corresponds to a still image of the event). N and M may be determined in dependence upon user preferences, or N and M may be determined in dependence upon the manner in which the occurrence of the event is identified (for example, a less reliable source may result in the display of a longer clip, so as to ensure that the clip contains the event).
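By way of example, the selection of N and M in dependence upon the identification method, and the resulting clip window, might be sketched as follows; the per-source values are purely illustrative.

```python
# Hypothetical (N, M) margins in seconds per identification method; a less
# reliable timing source gets a wider window so the clip still contains the event.
CLIP_MARGINS = {
    "metadata": (15, 15),
    "event_feed": (20, 10),
    "social_media": (45, 20),
}

def clip_window(event_video_time_s, source):
    """Return (start, end) in video-time seconds for the clip around an event;
    N = 0 and M = 0 would reduce this to a single still frame."""
    n, m = CLIP_MARGINS.get(source, (30, 10))
    return max(0.0, event_video_time_s - n), event_video_time_s + m

print(clip_window(42 * 60.0, "event_feed"))  # the 17' goal at 42 minutes of video time
```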
Events G and H as shown in Figure 2A are sufficiently close that an event identifying unit could identify this as a single event (for example, because the calculated clip times before and after the events overlap or are within a predetermined threshold period from each other) and thus show second video content that comprises both events without a break in between. This may be advantageous in that the user is not subjected to as many changes in the layout of the screen during viewing, and the increased continuity may be desirable for the user’s viewing enjoyment.
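One way such merging might be implemented is to compute each event's clip window and combine any windows that overlap or fall within a predetermined threshold of one another, as in the following sketch; the threshold and the example times are illustrative.

```python
def merge_clip_windows(windows, gap_threshold_s=30.0):
    """Merge clip windows (start, end) that overlap or sit within gap_threshold_s
    of each other, so that nearby events (such as G and H in Figure 2A) are shown
    as a single uninterrupted clip."""
    merged = []
    for start, end in sorted(windows):
        if merged and start - merged[-1][1] <= gap_threshold_s:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# The 71' and 72' goals mapped to video time (115 and 116 minutes), each with a
# 30-second margin before and a 15-second margin after the event:
print(merge_clip_windows([(6870, 6915), (6930, 6975)]))  # -> [(6870, 6975)]
```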
In some embodiments, the event itself may not be shown in the second video content 120; for example, if the event is ‘a free kick has been awarded’ then it is possible that a viewer has no interest in seeing the foul or the award of the free kick - instead, the viewer may be presented with a clip of the free kick being taken without the precursor events being shown in the clip.
The events that are shown to the user could be determined in a number of manners. For example, a user may indicate events that may be identified by the event identifying unit; this means that the user can identify which events are of interest to them (such as goals from a particular competition or team) so as to avoid being shown undesirable content.
Alternatively, or in addition, such an arrangement may simply differentiate between events based on a programme type - for example, ‘football events’, ‘sport events’ or ‘soap opera events’. In some embodiments, the events shown are dependent upon the programme that the user is currently viewing; as a result, events are only displayed to a user if they occur in second video content that is of the same type as the first video content. This means that a viewer is not interrupted by sporting highlights when watching a movie, for example.
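A minimal sketch of such filtering is given below; the event and preference structures are illustrative assumptions rather than a defined data model.

```python
def should_display(event, current_programme_type, user_interests):
    """Decide whether an identified event should interrupt the current viewing:
    the event must come from content of the same type as the first video and
    must match at least one of the user's declared interests."""
    if event["programme_type"] != current_programme_type:
        return False  # e.g. no sporting highlights while watching a movie
    return any(tag in user_interests for tag in event["tags"])

event = {"programme_type": "football", "tags": ["red team", "goal"]}
print(should_display(event, "football", {"red team"}))  # True
print(should_display(event, "movie", {"red team"}))     # False
```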
Figure 3 schematically illustrates a processing device 300. This could be, for example, a Sony® PlayStation® 4. This device comprises a video receiving unit 310, an event identifying unit 320, a video generation unit 330 and a display unit 340. While shown as a single processing device 300, it will be appreciated by the skilled person that the location of each component may be selected freely - for example, the display unit 340 may be an entirely separate device from the processing device 300, and the processing device 300 may not comprise a display for images at all.
The video receiving unit 310 is operable to receive video content via the internet, a television tuner, removable storage medium, or any other suitable video source. In some embodiments, one or more of the first and second (or more) videos are obtained via the internet, while in other embodiments one or more of the first and second (or more) videos are obtained via a television broadcast. In some embodiments, a combination of videos from different sources is used.
The event identifying unit 320 is operable to identify the occurrence of an event corresponding to a second video, or indeed any number of other videos. The only restriction on the number of videos that may be monitored may simply be the number of videos that are available for obtaining clips to be shown to a viewer.
The video generation unit 330 is operable to generate video content comprising a portion of the second video corresponding to the event and a currently watched first video, such that the portion of the second video is overlaid upon the first video. The video generation unit 330 may be operable to generate video content in which text information relating to the event is also displayed overlaying the first video. The video generation unit is then arranged to output the generated video content for display by the display unit 340. Where this unit is separate from the processing device 300, the display unit may be operably coupled to the processing device via a wired link (e.g. HDMI®) or by a wireless connection (e.g. the PlayStation® Remote Play feature). Meanwhile, when the display unit 340 is integral to the processing device 300 (for example in the case of a mobile phone or tablet), the display unit may be operably coupled to the video generation unit via a bus (not shown).
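For illustration, the overlay step performed by such a video generation unit might resemble the following sketch, which assumes both videos have already been decoded to NumPy frame arrays; the inset position and scale are arbitrary choices rather than features of the described device.

```python
import numpy as np

def overlay_pip(main_frame, second_frame, scale=0.25, margin=20):
    """Return a copy of main_frame with second_frame inset in the top-right
    corner, mirroring the second video content 120 overlaid on the main video 100."""
    out = main_frame.copy()
    h, w = main_frame.shape[:2]
    pip_h, pip_w = int(h * scale), int(w * scale)
    # Nearest-neighbour resize, to keep the sketch dependency-free.
    ys = np.arange(pip_h) * second_frame.shape[0] // pip_h
    xs = np.arange(pip_w) * second_frame.shape[1] // pip_w
    pip = second_frame[ys][:, xs]
    out[margin:margin + pip_h, w - margin - pip_w:w - margin] = pip
    return out

main = np.zeros((720, 1280, 3), dtype=np.uint8)       # stand-in for the first video frame
second = np.full((1080, 1920, 3), 200, dtype=np.uint8)  # stand-in for the second video frame
composite = overlay_pip(main, second)
print(composite.shape)  # (720, 1280, 3)
```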
In any event, the display unit 340 is operable to display a first video and, when an event is identified, the generated video content that is generated by the video generation unit 330.
Figure 4 schematically illustrates a display method that may be performed by the processing device 300 of Figure 3.
A step 400 comprises identifying the occurrence of an event corresponding to a second video.
A step 410 comprises generating video content comprising a portion of the second video corresponding to the event in addition to a currently viewed first video, such that the portion of the second video is overlaid upon the first video.
A step 420 comprises outputting the generated video content for display to the viewer via a display device.
It will be appreciated that embodiments of the present invention may be implemented in hardware, programmable hardware, software-controlled data processing arrangements or combinations of these. It will also be appreciated that computer software or firmware used in such embodiments, and providing media for providing such software or firmware (such as storage media, for example a machine-readable non-transitory storage medium such as a magnetic or optical disc or a flash memory) are considered to represent embodiments of the present invention.
Claims (15)
1. A processing device for displaying at least a first video to a user via a display device, the processing device comprising:
an event identifying unit operable to identify the occurrence of an event shown in a second video; and a video generation unit operable, in response to identifying such an event, to generate video content comprising a portion of the second video corresponding to the event in addition to the currently viewed first video, such that the portion of the second video is overlaid upon the first video, the video generation unit being further operable to output the generated video content for display via the display device.
2. A processing device according to claim 1, wherein the occurrence of an event is identified by one or more of the following list:
performing image processing on the second video;
reading metadata supplied with the video content; and reading a source of event information not supplied with the video content.
3. A processing device according to claim 2, wherein the source of event information is a social media platform.
4. A processing device according to claim 2, wherein the source of event information is a second user.
5. A processing device according to claim 2, wherein the user selects one or more sources of event information.
6. A processing device according to claim 2, wherein event information for a predetermined event is used to identify a correspondence between an event time described by the event information and an elapsed time in the second video.
7. A processing device according to claim 1, wherein the portion of video content comprises N seconds of video before the event and M seconds after the event, wherein N and M are greater than or equal to 0.
8. A processing device according to claim 7, wherein N and M are determined in dependence upon user preferences.
9. A processing device according to claim 7, wherein N and M are determined in dependence upon the manner in which the occurrence of the event is identified.
10. A processing device according to claim 1, wherein events are only displayed to a user if they occur in second video content that is of the same type as the first video content.
11. A processing device according to claim 1, wherein one or more of the first and second videos are obtained via the internet.
12. A processing device according to claim 1, wherein one or more of the first and second videos are obtained via a television broadcast.
13. A processing device according to claim 1, wherein a user indicates events that may be identified by the event identifying unit.
14. A display method comprising:
identifying the occurrence of an event shown in a second video;
generating, in response to identifying such an event, video content comprising a portion of the second video corresponding to the event in addition to a currently viewed first video, such that the portion of the second video is overlaid upon the first video; and
outputting the generated video content for display.
15. A computer program which, when executed by a computer, causes the computer to perform the method of claim 14.
Intellectual Property Office, Application No: GB1702913.3, Examiner: Mr George Mathews
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1702913.3A GB2559983A (en) | 2017-02-23 | 2017-02-23 | Entertainment device and system |
Publications (2)
Publication Number | Publication Date |
---|---|
GB201702913D0 GB201702913D0 (en) | 2017-04-12 |
GB2559983A true GB2559983A (en) | 2018-08-29 |
Family
ID=58544084
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1702913.3A (Withdrawn) | Entertainment device and system | 2017-02-23 | 2017-02-23 |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2559983A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6320623B1 (en) * | 1998-11-13 | 2001-11-20 | Philips Electronics North America Corporation | Method and device for detecting an event in a program of a video and/ or audio signal and for providing the program to a display upon detection of the event |
US20050086687A1 (en) * | 1999-12-16 | 2005-04-21 | Microsoft Corporation | Methods and systems for managing viewing of multiple live electronic presentations |
US20040064835A1 (en) * | 2002-09-26 | 2004-04-01 | International Business Machines Corporation | System and method for content based on-demand video media overlay |
WO2007072369A2 (en) * | 2005-12-20 | 2007-06-28 | Koninklijke Philips Electronics, N.V. | Notification of a live event on television |
GB2544605A (en) * | 2015-09-23 | 2017-05-24 | Rovi Guides Inc | Systems and methods to detect events in programming from multiple channels |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2578234A (en) * | 2015-09-23 | 2020-04-22 | Rovi Guides Inc | Systems and methods to detect events in programming from multiple channels |
GB2578234B (en) * | 2015-09-23 | 2020-08-19 | Rovi Guides Inc | Systems and methods to detect events in programming from multiple channels |
Also Published As
Publication number | Publication date |
---|---|
GB201702913D0 (en) | 2017-04-12 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) | |