US9271048B2 - Systems and methods for immersive viewing experience - Google Patents
- Publication number: US9271048B2 (Application US14/106,242)
- Authority: US (United States)
- Legal status: Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/485—End-user interface for client configuration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/21805—Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/4728—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/65—Transmission of management data between client and server
- H04N21/658—Transmission by the client directed to the server
- H04N21/6587—Control parameters, e.g. trick play commands, viewpoint selection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
Definitions
- a television broadcast system provides video, audio, and/or other data transport streams for each television program.
- a consumer system such as a tuner, a receiver, or a set-top box, receives and processes the transport streams to provide appropriate video/audio/data outputs for a selected television program to a display device (e.g., a television, projector, laptop, tablet, smartphone, etc.).
- the transport streams may be encoded.
- some broadcast systems utilize the MPEG-2 format that includes packets of information, which are transmitted one after another for a particular television program and together with packets for other television programs.
- Metadata related to particular television programs can be included within a packet header section of an MPEG-2 packet.
- Metadata can also be included in separate packets of an MPEG-2 transmission (e.g., in MPEG-2 private section packets and/or in an advanced program guide transmitted to the receiver). This metadata can be used by the consumer system to identify, process, and provide outputs of the appropriate video packets for each selectable viewing option.
- Example embodiments may help to provide selectable television viewing options; for example, by allowing a user to zoom in on different points of interest in a television program.
- a video stream that is broadcast on a particular television channel may provide a wide field of view of a baseball game, such as an overhead view of the playing field.
- a user may be able to select a point of interest in the video stream, such as the current batter, and the receiver will zoom in on the point of interest and provide a video output with the point of interest featured or centered in the display.
- a user interface may allow a user to select the particular points of interest that the user would like to zoom in on.
- the user interface in one example, can be a graphical user interface that is provided on the display, although, other examples are also possible.
- a television service provider's system may insert focus-point metadata into the video stream.
- a focus point may be a coordinate pair in the video frame that is updated to follow the point of interest as it moves within the video frame.
- a coordinate pair for a given frame of the video content may indicate a sub-frame within the given frame, such that the receiver can determine an appropriate area in each frame to zoom in on.
- a consumer system such as a set-top box, may process the video content to generate video content that is zoomed in on a sub-frame surrounding the point of interest.
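The two focal-point encodings described above (a pair of opposing corners, or a single center coordinate with a box of predefined size) can be turned into a crop rectangle roughly as follows. This is an illustrative sketch, not the patent's implementation; the function names and the clamping of the centered box to the frame edges are assumptions.

```python
# Hypothetical helpers for deriving the sub-frame a receiver would zoom
# in on from focal-point metadata. Coordinates are in pixels.

def sub_frame_from_corners(x1, y1, x2, y2):
    """Sub-frame given as opposing corners (x1, y1) and (x2, y2).
    Returns (x, y, width, height) of the crop rectangle."""
    left, right = sorted((x1, x2))
    top, bottom = sorted((y1, y2))
    return left, top, right - left, bottom - top

def sub_frame_from_center(cx, cy, box_w, box_h, frame_w, frame_h):
    """Sub-frame of a predefined size centered on the focus point,
    clamped (an assumption) so it stays inside the full video frame."""
    x = min(max(cx - box_w // 2, 0), frame_w - box_w)
    y = min(max(cy - box_h // 2, 0), frame_h - box_h)
    return x, y, box_w, box_h
```

For example, a focus point near the right edge of a 1920x1080 frame would be clamped so the 640x360 box does not run off the frame.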
- an example method involves receiving a television video transport stream with video content associated with a particular television channel, where the television video transport stream includes focal-point metadata regarding one or more focus points that follow the point of interest, where a focus point is a coordinate that follows the point of interest and indicates a sub-frame within a frame of the video content.
- the video content is processed, and a television signal is generated with video content that is zoomed to the sub-frame.
- an example method involves receiving two or more television video transport streams with video content for a particular program.
- the two or more television video transport streams include video content associated with two or more different camera views of the particular television program, and at least one of the television video transport streams includes a focus point that follows a point of interest and indicates a sub-frame within at least one frame of the video content.
- a camera selection request is received or a zoom request is received.
- the video content is processed and a television video output signal is generated that is associated with one or both of the camera selection request or the zoom request.
- an example method involves receiving two or more television video transport streams with video content for a particular program.
- the two or more television video transport streams include video content associated with two or more different camera views of the particular television program.
- a camera selection request is received.
- the video content is processed and a television video output signal is generated with video content that is associated with the camera selection request.
- an example method involves receiving streaming data comprising video content associated with at least one live stream for a particular television program and generating a focus point that follows a point of interest and indicates a sub-frame within at least one frame of the video content. Then, a television video transport stream is generated with video content that includes the focus point that follows the point of interest, and the television video transport stream is transmitted by way of a single television channel.
- FIG. 1 is a block diagram illustrating a television system, according to an example embodiment.
- FIG. 2 is another block diagram illustrating a television system, according to an example embodiment.
- FIG. 3A illustrates an example embodiment of a broadcast system 120
- FIG. 3B illustrates an example embodiment of a consumer system 130 ;
- FIG. 4 illustrates a metadata structure providing focus-point data and movement data, in particular for Ultra HD video streams;
- FIG. 5 illustrates a data format for identifying metadata within a packetized system and, in particular, a data format for standard definition and high definition video streams;
- FIGS. 6A, 6B, and 6C illustrate an example display with a graphical user interface for zooming in on different points of interest of a video stream;
- FIGS. 7A and 7B illustrate an example display with a graphical user interface for zooming in on different points of interest of a video stream.
- FIG. 8 illustrates another method designed for implementation with an MPEG-2 transport stream.
- FIG. 9 illustrates a simplified block diagrammatic view of a receiver, according to an example embodiment.
- FIG. 10 illustrates a simplified block diagrammatic view of a server, according to an example embodiment.
- Example embodiments may help television content broadcasters and/or satellite or cable television providers to provide a user with selectable viewing options for a television program; for example, by allowing viewers to selectively track different points of interest in the program. As a specific example, a user might track particular players in a sporting event, or track a particular item or object used in the event, such as a football, hockey puck, or soccer ball.
- the viewing options can also or alternatively include video from different camera locations and angles. Other examples are possible.
- the metadata can include video coordinates for different focus points within the video stream, where the focus points follow a point of interest and correspond to a sub-frame within a frame of video content.
- focus points can be defined by the broadcasters and/or by the satellite/cable television operators as video coordinates.
- the video coordinates for the focus point can take various forms, such as a pair of coordinates (X1, Y1), (X2, Y2) that defines opposing corners of a video box, or a single coordinate (X1, Y1) that defines the center of the focus point, where the video box can be a predefined or adjustable size.
- Metadata can also track movement of the focus point.
- This movement metadata may include X-Y direction and magnitude data (e.g., X and Y vector data).
- the receiver can generate the vector data by processing subsequent video frames to determine the direction and magnitude of movement for the point of interest.
- the receiver can use such movement metadata to track the point of interest and provide a smooth video output with the point of interest featured or centered in the display.
- the viewing options can also or alternatively provide a selection of multiple camera views; for instance, multiple views of the playing field in a sporting event.
- separate focus points corresponding to the same point of interest may be provided for video content from multiple cameras, such that a user can zoom in on the same point of interest from multiple different camera views.
- a point of interest (e.g., Player 2) may be visible in the views of multiple cameras.
- each camera may have different focus points, or coordinates, that follow the respective point of interest and correspond to a sub-frame within a frame of the video content.
- a graphical user interface may be displayed that provides a selection of all cameras capable of focusing on that particular player.
- Such selectable viewing options can be provided through different video streams that are provided synchronously via a single television channel.
- the television program can be a football game and the video streams for the football game can include a first camera view from behind an end zone, a second camera view from midfield, a third camera view that focuses on the football, and one or more other camera views that focus on specific players, player positions, or others (e.g., cornerbacks, the quarterback, running backs, coaches, band members, people in the stands, etc.).
- the video packets for each video stream are associated with camera view metadata so that the receiver can retrieve the appropriate video packets to display.
- the present disclosure contemplates a user interface through which a user can select one or more of the video streams to display.
- the selected video stream(s) can be displayed in a number of different ways, such as displaying a single selected video stream on the entire display or displaying different video streams in a picture-in-picture (PIP) arrangement or a split-screen arrangement.
- a television program 110 may also be referred to as a television show, and may include a segment of content that can be broadcast on a television channel.
- There are many types of television programs 110, such as animated programs, comedy programs, drama programs, game show programs, sports programs, and informational programs.
- Television programs 110 can be recorded and broadcast at a later date.
- Television programs 110 may also be live television, broadcast in real time as events happen.
- Television programs 110 may also be distributed, or streamed, over the Internet.
- the television programs may further include various points of interest, such as actors, athletes, and stationary objects such as a football, baseball, and goal posts, which can be zoomed in on using focus points that correspond to each point of interest.
- Television programs 110 are generally provided to consumers by way of a broadcast system 120 and consumer system 130 .
- broadcast systems 120 such as cable systems, fiber optic systems, satellite systems, and Internet systems.
- consumer systems 130 including set-top box systems, integrated television tuner systems, and Internet-enabled systems. Other types of broadcast systems and/or consumer systems are also possible.
- the broadcast system 120 may be configured to receive video, audio, and/or data streams related to a television program 110 .
- the broadcast system 120 may also be configured to process the information from that television program 110 into a transport stream 125.
- the transport stream 125 may include information related to more than one television program 110 (but could also include information about just one television program 110 ).
- the television programs 110 are generally distributed from the broadcast system 120 as different television channels.
- a television channel may be a physical or virtual channel over which a particular television program 110 is distributed and uniquely identified by the broadcast system 120 to the consumer system 130 .
- a television channel may be provided on a particular range of frequencies or wavelengths that are assigned to a particular television station.
- a television channel may be identified by one or more identifiers, such as call letters and/or a channel number.
- a broadcast system 120 may transmit a transport stream to the consumer system 130 in a reliable data format, such as the MPEG-2 transport stream.
- a transport stream may specify a container format for encapsulating packetized streams (such as encoded audio or encoded video), which facilitates error corrections and stream synchronization features that help to maintain transmission integrity when the signal is degraded.
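As background for the synchronization feature mentioned above: an MPEG-2 transport stream is carried as fixed-length 188-byte packets that each begin with the sync byte 0x47, which is what lets a receiver realign on packet boundaries after signal degradation. The sketch below illustrates that resynchronization idea; it is a simplified model, not the patent's receiver logic.

```python
# Illustrative MPEG-2 transport stream packet alignment.
TS_PACKET_SIZE = 188  # fixed packet length per ISO/IEC 13818-1
SYNC_BYTE = 0x47      # first byte of every transport packet

def split_ts_packets(data: bytes):
    """Yield aligned 188-byte packets, sliding forward one byte at a
    time until the sync byte is found again after corruption."""
    i = 0
    while i + TS_PACKET_SIZE <= len(data):
        if data[i] == SYNC_BYTE:
            yield data[i:i + TS_PACKET_SIZE]
            i += TS_PACKET_SIZE
        else:
            i += 1  # not aligned; regain sync
```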
- FIGS. 3A and 3B illustrate methods according to example embodiments.
- the methods 300 and 350 may be implemented by one or more components of the system 100 shown in FIG. 1 , such as broadcasting system 120 and/or consumer system 130 .
- program data associated with a television program 110 is created.
- the program data is in the form of audio, video, and/or data associated with a television program 110 .
- Examples of data associated with a television program include electronic programming guide information and closed captioning information.
- the broadcast system 120 receives program data for a particular television program 110.
- focal-point metadata is generated for a focus point that follows a point of interest and indicates a sub-frame within a frame of the video content from the program data.
- the focal-point metadata may indicate a pair of coordinates that defines a sub-frame that is centered on the focal point.
- the focal-point metadata may be defined as (X1, Y1), (X2, Y2).
- the focal-point metadata may indicate a focal point (X1, Y1), such that a sub-frame of a pre-defined size can be centered on the focal point.
- the broadcast system 120 may also generate movement data to anticipate movement of the point of interest and, correspondingly, the focus point. Such movement data may help to improve the picture quality when a user is viewing a sub-frame for a particular focus point that follows a particular point of interest.
- the movement data may be transmitted by the broadcast system 120 to the consumer system 130 in the form of a motion vector for the available points of interest.
- the point of interest may, for example, become the center or focus of the zoomed picture while the motion vector will provide direction on where the zoomed video will move until additional packets of video content are received by the consumer system 130 .
- vector metadata may be generated that indicates movement of a focus point that follows a point of interest in the sub-frame relative to the larger video frame of the video content.
- Such vector metadata may be generated by comparing a current focus point to a previous focus point, and determining the direction and magnitude of movement of the focus point. For example, if the focus point is defined as a center point, a direction of movement in the x-plane may be determined by subtracting the previous focus-point x-coordinate Xt-1 from the current focus-point x-coordinate Xt, where a positive result means the focus point is moving in the positive x-direction.
- a magnitude of movement in the x-plane may be determined by taking the absolute value of the difference between the current focus-point x-coordinate and the previous focus-point x-coordinate. This approach can also be used to measure direction and magnitude of movement in other planes and for other types of metadata.
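The direction-and-magnitude computation just described can be sketched in a few lines. The tuple layout (direction sign, magnitude, per axis) is an assumption for illustration; the patent only describes the subtraction and absolute-value steps.

```python
def motion_vector(prev_point, curr_point):
    """Direction (as a sign: +1, -1, or 0) and magnitude of focus-point
    movement per axis, from consecutive focus-point coordinates."""
    dx = curr_point[0] - prev_point[0]  # positive => moving in +x
    dy = curr_point[1] - prev_point[1]  # positive => moving in +y
    return ((1 if dx > 0 else -1 if dx < 0 else 0), abs(dx),
            (1 if dy > 0 else -1 if dy < 0 else 0), abs(dy))
```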
- the broadcast system 120 generates a television video transport stream that includes video content for one or more television programs 110 and includes focal-point metadata.
- the television video transport stream also includes vector metadata such as a direction of movement and a magnitude of movement.
- the broadcast system 120 transmits the television video transport stream.
- the broadcast system 120 may transmit a television video transport stream that includes video content for one television program 110, including focal-point metadata and/or other metadata, by way of a single television channel.
- the method 350 may be implemented by one or more components of a television system, such as the broadcasting system 120 and/or the consumer system 130 shown in FIG. 1 .
- the consumer system 130 receives one or more television video transport streams with video content associated with a particular television program 110 .
- Each television video transport stream may include focal-point metadata, which indicates at least one focus point that follows a point of interest and indicates a sub-frame within a frame of the video content in the stream.
- the consumer system 130 receives focal-point input data indicating a zoom request for a point of interest. For example, if the television program is a football game, the consumer system 130 may display a graphical user interface with a list of the football player's names. The user may select the desired name from the graphical display, thus indicating a zoom request for a point of interest (i.e., the football player whose name was selected), and the consumer system 130 would associate the request with the focal-point input data.
- the consumer system 130 receives movement metadata for a focus point that indicates a direction of movement and/or a magnitude of movement, as described above, from the broadcast system 120 .
- the consumer system 130 may generate movement metadata as described above.
- the consumer system 130 processes the video content in response to the focal-point input data and/or the movement metadata. Then, at block 360, the consumer system 130 generates a television video output signal with video content zoomed to the sub-frame associated with the focal-point metadata. In a further aspect, the consumer system 130 may improve the quality of the television video output signal by utilizing the movement metadata in combination with the focal-point input data.
- the consumer system 130 transmits the television video output signal with zoomed video content.
- the television video output signal can be configured to display the signal on a graphic display in various configurations.
- the zoomed video content could be displayed as a full-screen arrangement.
- the zoomed video content could be displayed as a split-screen arrangement or as a picture-in-picture arrangement.
- Higher-resolution programs, for instance UltraHD resolutions, provide even more opportunities for interesting configurations.
- UltraHD resolutions include resolutions for displays with an aspect ratio of at least 16:9 and at least one digital input capable of carrying and presenting native video at a minimum resolution of 3,840 pixels by 2,160 pixels.
- UltraHD may also be referred to as UHD, UHDTV, 4K UHDTV, 8K UHDTV, and/or Super Hi-Vision.
- FIG. 4 is a block diagram illustrating packet-data formatting for a transport stream, according to an exemplary embodiment.
- FIG. 4 shows a data structure for a single packet 400 in an MPEG-2 transport stream, which may include standard definition or high definition video content 402 (not shown).
- packet 400 includes 1 byte of data that indicates a table identifier.
- Packet 400 further includes 1 bit of data as a section syntax indicator.
- the section syntax indicator may correspond to different packet structure formats. For example, a section syntax indicator value of ‘1’ may correspond to the data format for a packet 400 as illustrated in FIG. 4 , while a section syntax indicator value of ‘0’ may correspond to a different data format for a packet 400 that may include different data syntax.
- the different data syntax for a packet 400 may include blocks of data corresponding to a table identifier extension, a version number, a current next indicator, a section number, a last section number, and/or error correction data as provided by the MPEG-2 standard, other standards, or other formats.
- the packet 400 may further include 1 bit of data that designates a private indicator.
- the packet 400 may further include 2 bits of data that are reserved.
- the packet 400 may further include 12 bits of data that designate the length, N bytes, of a private section.
- the packet 400 may further include a private section 410 of length N bytes. Within the private section 410, two portions of data may also be included: private section item metadata 420 and private section event metadata 430.
- the private section 410 of packet 400 may be utilized to facilitate selectable viewing options at a consumer unit.
- private section 410 includes focal-point metadata and/or vector metadata corresponding to one or more points of interest in the video content 402 (not shown) included in the transport stream.
- focal-point metadata and/or vector metadata may be used to facilitate a consumer system zooming in on and/or following one or more points of interest in Ultra HD video content, although other forms of video content may also be utilized.
- the private section item metadata 420 section of packet 400 may include 1 byte of data that corresponds to an identifier, and may include 32 bytes of data corresponding to an item name, such as a point of interest, a player's name, or an actor's name.
- the item name may also be presented to the user as part of a graphical user interface that allows selection of the item name.
- Private section item metadata 420 may further include 1 byte of data that corresponds to the video source type. Examples of video source type may include the Internet, satellite, recorded content, cable, and/or others. Private section item metadata 420 may also include 32 bytes of data that indicate focal-point metadata.
- focal-point metadata may include coordinates (X, Y) corresponding to a point of interest in the video content 402 .
- a consumer system 130 may be configured to zoom in on a sub-frame of a predetermined size that surrounds the focal point.
- the focal-point metadata in the private section 410 may indicate dimensions of the sub-frame.
- the focal-point metadata may specify opposing corners of a sub-frame that includes a point of interest (e.g., as two coordinate pairs (X1, Y1) and (X2, Y2)).
- Private section item metadata 420 may also include vector metadata that indicates movement (or predicted movement) of a point of interest in video content 402 .
- private section item metadata 420 may include 32 bytes of data that correspond to a direction of movement in the x-direction, 32 bytes of data that correspond to a magnitude of movement in the x-direction, 32 bytes of data that correspond to a direction of movement in the y-direction, and 32 bytes of data that correspond to a magnitude of movement in the y-direction.
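The item-metadata layout above (1-byte identifier, 32-byte item name, 1-byte video source type, 32-byte focal-point field) might be serialized as follows. This is only a sketch: the patent does not specify field encodings, so the NUL-padded ASCII strings and the "X,Y" text form of the focal point are assumptions made for illustration.

```python
import struct

def pack_item_metadata(identifier, item_name, source_type, focal_x, focal_y):
    """Pack one private-section item-metadata record: 1-byte id,
    32-byte name, 1-byte source type, 32-byte focal-point field."""
    name = item_name.encode("ascii")[:32].ljust(32, b"\x00")
    focal = f"{focal_x},{focal_y}".encode("ascii")[:32].ljust(32, b"\x00")
    return (struct.pack("B", identifier) + name +
            struct.pack("B", source_type) + focal)

def unpack_item_metadata(buf):
    """Inverse of pack_item_metadata."""
    identifier = buf[0]
    name = buf[1:33].rstrip(b"\x00").decode("ascii")
    source_type = buf[33]
    x, y = map(int, buf[34:66].rstrip(b"\x00").decode("ascii").split(","))
    return identifier, name, source_type, x, y
```

A record in this layout is 66 bytes; the vector fields described above would follow it in a fuller implementation.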
- focal-point metadata and/or vector metadata may be included as part of an electronic programming guide or advanced programming guide, which is sent to the consumer system 130 , instead of being included in an MPEG-2 transport stream.
- the private section 410 may further include private section event metadata 430 .
- the private section event metadata 430 may include 1 byte of data that corresponds to an identifier and 32 bytes of data that indicate an event name.
- Private section event metadata 430 may further include 4 bytes of data that designate the length X of a description, followed by X bytes of data that provide the description of the video content (e.g., a name of and/or a plot description of a television program).
- Private section event metadata 430 may also include 1 byte of data that indicates an event type. Examples of event types for television include a movie, sports event, and news, among other possibilities.
- private section event metadata 430 may include 1 byte of data that indicates a camera angle type.
- packet 400 may include data that facilitates error detection.
- the last 32 bytes of packet 400 may include data that facilitates a cyclic redundancy check (CRC), such as points of data sampled from packet 400 .
- a CRC process may then be applied at the consumer system, which uses an error-detecting code to analyze the sampled data and detect accidental changes to the received packet 400 .
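The append-and-verify pattern behind the CRC step can be sketched as below. Note this is illustrative only: MPEG-2 sections use the CRC-32/MPEG-2 polynomial variant, whereas `zlib.crc32` computes the common CRC-32; the structure of the check is the same either way.

```python
import zlib

def append_crc(body: bytes) -> bytes:
    """Sender side: append a 4-byte checksum over the packet body."""
    return body + zlib.crc32(body).to_bytes(4, "big")

def crc_ok(packet: bytes) -> bool:
    """Receiver side: recompute the checksum over the body and compare
    it to the trailing bytes to detect accidental changes in transit."""
    body, received = packet[:-4], packet[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == received
```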
- private section 510 may include additional or different data.
- the packet 500 has the same general structure as packet 400 but packet 500 may include data related to identification of multiple camera views and a selection of one of those views.
- FIG. 5 shows a private section 510 with 32 bytes of data indicating the video location (e.g., a video location corresponding to a particular television channel, television frequency, or website universal resource locator).
- FIGS. 6A to 6C illustrate a scenario in which an exemplary graphical user interface may be provided, which allows a user to select viewing options corresponding to different points of interest in a television program.
- an icon 610 is displayed in order to notify a viewer that different interactive viewing options are available.
- icon 610 may be displayed over video content 620 in a display 630 when focal-point metadata and/or vector metadata is available, such that the viewer can access a graphical user interface (GUI) to select particular points of interest in the video content 620 to zoom in on and follow.
- the user may provide input (e.g., by clicking a button on a remote control) in order to access a GUI for selectable viewing options.
- the consumer system may display the GUI on the display 630 .
- the GUI 640 for selectable viewing options is being displayed on the display 630 .
- the GUI 640 provides a user with the option of selecting a point of interest from multiple points of interest.
- the points of interest include different football players and the football.
- each point of interest (i.e., each football player and the football) may be selectable via the GUI 640.
- the user may navigate through the GUI 640 to select particular points of interest using, e.g., buttons on a remote control for the consumer system 130 .
- Other types of user-interface devices may also be utilized to receive such input.
- the selection of a particular point of interest via the GUI 640 may be referred to as a zoom request.
- a zoom request has been received for Player 2 .
- the consumer system 130 processes the video content 620 based on the zoom request (i.e., focal-point input data) and generates a television video output signal that is zoomed in on the point of interest (i.e., Player 2 ).
- the consumer system may initially display a box 650 in the display 630 , which indicates the sub-frame surrounding a selected point of interest.
- the sub-frame may be defined by a coordinate pair (X 1 , Y 1 ), (X 2 , Y 2 ) that indicates opposing corners of the sub-frame within the particular frame of video content that includes the point of interest.
- a box 650 indicating the surrounding sub-frame may or may not be displayed before zooming in on the point of interest, depending upon the particular implementation.
- the box 650 indicating the sub-frame with the selected point of interest may be displayed momentarily, before zooming in on the sub-frame, or until further input is received from the user to confirm the zoom request.
- the point of interest may be zoomed in on, without ever displaying a box 650 indicating the sub-frame, within the larger frame of video content.
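Assuming sub-frame corners arrive as the coordinate pair (X1, Y1), (X2, Y2) described above, cropping a decoded frame to that sub-frame can be sketched as follows. Frames are modeled as plain nested lists to keep the example dependency-free; a real implementation would operate on decoded pixel buffers.

```python
def crop_to_subframe(frame, corners):
    """Crop a frame (rows of pixels) to the sub-frame whose opposing
    corners are (x1, y1) and (x2, y2), as in the focal-point metadata."""
    (x1, y1), (x2, y2) = corners
    left, right = sorted((x1, x2))   # corners may arrive in either order
    top, bottom = sorted((y1, y2))
    return [row[left:right] for row in frame[top:bottom]]

# Tiny demo: an 8x6 "frame" where each pixel records its (x, y) position.
frame = [[(x, y) for x in range(8)] for y in range(6)]
sub = crop_to_subframe(frame, ((2, 1), (5, 4)))  # 3x3 sub-frame
```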
- FIG. 6C illustrates the display 630 after the consumer system 130 has received a zoom request, and responsively zoomed in on Player 2 .
- the consumer system 130 may begin processing the video content 620 in order to crop the full frames in the transport stream and generate sub-frames, as indicated by the focal-point metadata and/or vector metadata in the transport stream, which correspond to the selected point of interest (e.g., Player 2 ).
- the display 630 may display a portion of each frame of video content that is zoomed in on Player 2 , effectively providing a view that follows the movements of Player 2 within the frames of the video content 620 .
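One way the view could follow the point of interest across frames is to apply per-frame movement metadata, modeled here as hypothetical (dx, dy) vectors, to the initial sub-frame corners. The generator below is a sketch under that assumption, not the patent's specified algorithm:

```python
def follow_point_of_interest(initial_corners, motion_vectors):
    """Yield per-frame sub-frame corners by applying movement metadata
    (one (dx, dy) offset per frame) to the initial focal-point corners."""
    (x1, y1), (x2, y2) = initial_corners
    yield (x1, y1), (x2, y2)          # corners for the first frame
    for dx, dy in motion_vectors:     # then shift the window each frame
        x1 += dx; x2 += dx
        y1 += dy; y2 += dy
        yield (x1, y1), (x2, y2)

steps = list(follow_point_of_interest(((0, 0), (10, 10)), [(2, 0), (0, 3)]))
```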
- a picture-in-picture display arrangement 660 may be provided.
- a picture-in-picture display arrangement 660 may include the nearly full-screen display of the zoomed-in view of video content 620 of Player 2 670 , and the overlaid picture-in-picture display of the full-frame of video content 620 including a larger area of the playing field.
- the zoomed-in view of video content 620 of Player 2 670 is sized to fill the display, and the full-frame view of video content 620 including the larger area of the playing field is displayed in a picture-in-picture format that is overlaid on the zoomed-in view of video content 620 of Player 2 670 .
- the zoomed-in view of video content 620 of Player 2 670 may be displayed in the smaller picture-in-picture format, which can be overlaid on the full-frame view of video content 620 including the larger area of the playing field.
- similar content may be provided using split-screen arrangements, other types of picture-in-picture arrangements, full screen arrangements, and/or other types of arrangements.
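A minimal sketch of the picture-in-picture compositing described for arrangement 660, overlaying a small inset image onto a larger one. Images are again modeled as nested lists, and the position arguments are illustrative:

```python
def composite_pip(main, inset, top=0, left=0):
    """Overlay `inset` (rows of pixels) onto a copy of `main` at (top, left),
    as in the picture-in-picture arrangement 660."""
    out = [row[:] for row in main]  # copy so the main view is untouched
    for r, row in enumerate(inset):
        for c, px in enumerate(row):
            out[top + r][left + c] = px
    return out

main = [[0] * 4 for _ in range(4)]
inset = [[1, 1], [1, 1]]
out = composite_pip(main, inset, top=1, left=2)
```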
- each point of interest may correspond to a different camera view, which is provided on a different television channel. Accordingly, when a zoom request is received that indicates to zoom in on one of the points of interest indicated in GUI 640 , the consumer system 130 may responsively tune to the channel providing the camera view that is focused on the selected point of interest.
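Under the multi-channel variant just described, a zoom request reduces to a channel tune. The mapping and channel numbers below are invented for illustration:

```python
# Hypothetical mapping from points of interest to the channels carrying
# their dedicated camera views (channel numbers are invented examples).
POI_CHANNELS = {"player1": 501, "player2": 502, "football": 503}

def handle_zoom_request(poi, tune):
    """On a zoom request, tune to the channel whose camera follows the POI."""
    channel = POI_CHANNELS.get(poi)
    if channel is None:
        raise KeyError(f"no dedicated camera view for {poi!r}")
    tune(channel)
    return channel

tuned = []
handle_zoom_request("player2", tuned.append)
```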
- FIGS. 7A and 7B illustrate a scenario in which an exemplary graphical user interface may be provided, which allows a user to select various viewing options from a GUI corresponding to different points of interest in a television program.
- FIG. 7A illustrates an exemplary GUI 710 for zooming in on different points of interest of a video stream.
- FIG. 7A shows a television that is displaying a GUI 710 for interacting with a football game that is being broadcast live on a particular television channel.
- the signal stream for the particular channel may include data that can be used to provide a GUI 710 overlaid on the video of the football game.
- the GUI 710 may have a first selection level 712 that is associated with metadata.
- the first selection level 712 may be associated with focal-point type metadata (e.g., cornerbacks, quarterback, running backs, coaches, band members, people in the stands).
- the consumer system 130 may receive a selection request for the first selection level 712 from the user, for example, by the user pressing a button on a remote control.
- FIG. 7A represents a selection request of cornerback for the first selection level 712 .
- the GUI 710 may have a second selection level 714 that may be associated with metadata.
- the second selection level 714 may be associated with focal-point metadata.
- the consumer system 130 may receive a selection request for the second selection level 714 from the user, for example, by the user pressing a button on a remote control.
- the second selection level 714 is associated with focal-point metadata corresponding to points of interest (e.g., Player 1 , Player 2 ).
- FIG. 7A illustrates a selection request of Player 1 for the second selection level 714 .
- the GUI 710 may have a third selection level 716 that may be associated with metadata.
- the third selection level 716 may be associated with camera selection metadata.
- FIG. 7A illustrates different cameras placed around the field with reference numerals C 1 , C 8 , C 9 , and C 10 .
- the consumer system 130 may receive a selection request for the third selection level 716 from the user, for example, by the user pressing a button on a remote control.
- the third selection level 716 is associated with camera selection metadata corresponding to different camera views of the event (e.g., camera 1 , camera 2 , camera 8 , camera 9 , and camera 10 ).
- FIG. 7A illustrates a selection request of Camera 9 for the third selection level 716 .
- FIG. 7B illustrates the result of the three selection requests of FIG. 7A ; namely, the first selection level 712 of cornerback, the second selection level 714 of Player 1 , and the third selection level 716 of Camera 9 .
- Player 1 is centered or featured in a zoomed-in display from the view of Camera 9 .
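The three selection levels of GUI 710 (focal-point type, then point of interest, then camera view) can be modeled as a small lookup tree. The type names, player names, and camera lists below are illustrative assumptions, not data from the patent:

```python
# Hypothetical selection tree: level 1 (focal-point type) -> level 2
# (point of interest) -> level 3 (cameras covering that point).
SELECTIONS = {
    "cornerback": {
        "player1": ["C1", "C8", "C9", "C10"],
        "player2": ["C1", "C9"],
    },
}

def resolve_view(level1, level2, level3):
    """Validate the three selection requests and return the chosen view."""
    cameras = SELECTIONS[level1][level2]
    if level3 not in cameras:
        raise ValueError(f"camera {level3} does not cover {level2}")
    return (level1, level2, level3)
```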
- method 800 illustrates an implementation of methods 300 and 350 , which utilizes an MPEG-2 transport stream. More specifically, at block 810 , uncompressed video is created, such as a live stream for a television program.
- a television program content provider or a head-end operator specifies initial coordinates of one or more focus points that follow a point of interest, as well as movement metadata indicating movement of the point of interest.
- a head-end operator runs a facility for receiving television program signals for processing and distribution.
- the television program content provider may also specify additional data, such as data related to different camera views, data related to types or classifications of points of interest, or other data.
- the uncompressed video may be received by the broadcast system 120 , and the broadcast system 120 may specify initial coordinates, and perform other functions of block 820 .
- the broadcast system 120 identifies the initial coordinates of one or more focus points that follow one or more points of interest in a television program 110 .
- the broadcast system 120 may use a coder-decoder, or codec, to encode the focal-point metadata, movement metadata, and/or other data.
- the broadcast system 120 compresses the uncompressed video and appends data related to the point of interest, such as focal-point metadata and movement metadata, in the private section of the MPEG-2 transport stream.
- the broadcast system 120 may use a codec to compress the uncompressed video and to append the data related to the point of interest in the private section.
- the broadcast system 120 transmits the compressed MPEG-2 transport stream.
- the broadcast system 120 may transmit via a satellite television system.
- the consumer system 130 decodes the MPEG-2 transport stream and extracts the private section data, such as the data related to one or more points of interest.
- the consumer system 130 may be a set-top receiver that decodes the transport stream using a codec or other software.
- the consumer system 130 provides a television video output signal that is configured to focus on the point of interest and follow its motion.
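The blocks of method 800 amount to serializing focal-point and movement metadata at the broadcaster and recovering it at the consumer system. The sketch below uses JSON as a stand-in payload format; the patent specifies an MPEG-2 private section, not JSON, and the field names are invented:

```python
import json

def build_focal_point_metadata(points):
    """Broadcaster side: serialize focal-point/movement metadata.
    `points` is a list of dicts with an id, initial (x, y), and optional
    motion vector (dx, dy). All field names are illustrative assumptions."""
    return json.dumps({
        "focal_points": [
            {"id": p["id"], "x": p["x"], "y": p["y"],
             "dx": p.get("dx", 0), "dy": p.get("dy", 0)}
            for p in points
        ]
    }).encode("utf-8")

def extract_focal_point_metadata(private_section):
    """Consumer side: recover the focal points from the payload."""
    return json.loads(private_section.decode("utf-8"))["focal_points"]

payload = build_focal_point_metadata([{"id": "player2", "x": 120, "y": 80, "dx": 3}])
points = extract_focal_point_metadata(payload)
```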
- a receiver 900 may be one portion of a consumer system 130 , as illustrated in FIGS. 1-2 .
- a receiver 900 may be a set-top box of a consumer system.
- the receiver 900 may include various component modules for use within the local area network and for displaying signals. The display of signals may take place by rendering signals provided from the network.
- the receiver 900 may comprise various different types of devices or may be incorporated into various types of devices.
- receiver 900 may be a standalone device that is used to intercommunicate between a local area network and the broadcast system 120 (e.g., a server), as illustrated in FIGS. 1-2 .
- the receiver 900 may also be incorporated into various types of devices such as a television, a video gaming system, a hand-held device such as a phone or personal media player, a computer, or any other type of device capable of being networked.
- the receiver 900 may include various component modules such as those illustrated below. It should be noted that some of the components may be optional components depending on the desired capabilities of the receiver 900 . It should also be noted that the receiver 900 may equally apply to a mobile user system.
- a mobile user system may include a tracking antenna to account for the mobility of the mobile user system. This is in contrast to a fixed user system, which may have an antenna fixed in a single direction.
- the mobile user system may include systems in airplanes, trains, buses, ships, and/or other situations where it may be desirable to have mobility.
- the receiver 900 may include an interface module 910 .
- the interface module 910 may control communication between the local area network and the receiver 900 .
- the receiver 900 may be integrated within various types of devices or may be a standalone device.
- the interface module 910 may include a rendering module 912 .
- the rendering module 912 may receive formatted signals through the local area network that are to be displayed on the display.
- the rendering module 912 may place pixels in locations as instructed by the formatted signals. Because the rendering module 912 does not include a decoder, it allows consistent customer experiences across various consumer systems 130 .
- the rendering module 912 communicates rendered signals to the display of the device or an external display.
- the rendering module 912 may receive content, such as a video transport stream that includes video content associated with a particular television program.
- the video transport stream may include metadata, for example, metadata described above such as metadata related to a point of interest.
- the rendering module 912 may receive data indicating a zoom request. For example, a user of the consumer system 130 may view a graphical user interface on the display and push a button on a remote control associated with the consumer system to indicate a zoom request for a point of interest. Upon receipt of the zoom request, the rendering module 912 may process the video content in response to the zoom request. For example, the rendering module 912 may generate a television video output signal that is configured to be viewable on a graphic display and includes video content that is zoomed to the point of interest chosen by the user of the consumer system 130 .
- the receiver 900 may receive and process the video transport stream using different components or methods.
- the receiver 900 may include a separate video processing system or component (not shown) to receive the video transport stream, to receive the data associated with the zoom request, and/or to generate a zoomed television video output signal.
- a boot-up acquisition module 914 may provide signals through the interface module 910 during boot-up of the receiver 900 .
- the boot-up acquisition module 914 may provide various data that is stored in memory 916 through the interface module 910 .
- the boot-up acquisition module 914 may provide a make identifier, a model identifier, a hardware revision identifier, a major software revision, and/or a minor software revision identifier. Additionally or alternatively, a download location for the server to download a boot image may also be provided.
- a unique identifier for each device may also be provided. However, the server device is not required to maintain a specific identity for each device. Rather, non-specific identifiers, such as the make, model, etc. described above, may be used.
- the boot-up acquisition module 914 may obtain each of the above-mentioned data from memory 916 .
- the memory 916 may include various types of memory that are either permanently allocated or temporarily allocated.
- the on-screen graphics display buffer 916 A may be either permanently allocated or temporarily allocated.
- the on-screen graphics display buffer 916 A is used for directly controlling the graphics to the display associated with the receiver 900 .
- the on-screen graphics display buffer 916 A may have pixels therein that are ultimately communicated to the display associated with the consumer system 130 .
- An off-screen graphics display 916 B may be a temporary buffer.
- the off-screen graphics display buffer 916 B may include a plurality of off-screen graphics display buffers.
- the off-screen graphics display buffer 916 B may store the graphics display data prior to communication with the on-screen graphics display buffer 916 A.
- the off-screen graphics display buffer 916 B may store more data than that being used by the on-screen graphics display buffer 916 A.
- the off-screen graphics display buffer 916 B may include multiple lines of programming guide data that are not currently being displayed through the on-screen graphics display buffer 916 A.
- the off-screen graphics display buffer 916 B may have a size that is controlled by the server device as will be described below.
- the off-screen graphics display buffer 916 B may also have a pixel format designated by the server device.
- the off-screen graphics display buffer may vary in size from, for example, hundreds of bytes to many megabytes such as 16 megabytes.
- the graphics buffers may be continually allocated and deallocated, even within a remote user interface session.
- a video buffer memory 916 C may also be included within the memory 916 .
- the remote user interface may provide the server with information about, but not limited to, the video capabilities of the consumer system 130 , the aspect ratio of the consumer system 130 , the output resolution of the consumer system 130 , and the resolution or position of the buffer in the display of the consumer system 130 .
- a closed-caption decoder module 918 may also be included within the receiver 900 .
- the closed-caption decoder module 918 may be used to decode closed-captioning signals.
- the closed-captioning decoder module 918 may also be in communication with rendering module 912 so that the closed-captioning display area may be overlaid upon the rendered signals from the rendering module 912 when displayed upon the display associated with the receiver 900 .
- the closed-captioning decoder module 918 may be in communication with the closed-captioning control module 920 .
- the closed-captioning control module 920 may control the enablement and disablement of the closed-captioning as well as closed-captioning setup such as font style, position, color and opacity.
- the closed-captioning control module 920 may generate a closed-captioning menu.
- the closed captioning control module 920 may receive an input from a user interface such as a push button on the receiver 900 or on a remote-control device associated with the receiver 900 .
- the server device may pass control of the display to the receiver 900 for the closed-captioning menu to be displayed.
- the menus may be local and associated with the closed captioning control module 920 .
- the menus may actually be stored within a memory associated with the closed-captioning control module 920 or within the memory 916 of the receiver 900 .
- when the server device passes control to the receiver 900 , the closed-captioning menu will appear on the display associated with the receiver 900 .
- Parameters for closed-captioning, including turning the closed-captioning on and off, may be set by the system user.
- the control is passed back from the receiver 900 to the server device which maintains the closed-captioning status.
- the server device may then override the receiver 900 when the closed-captioning is turned on and the program type does not correspond to a closed-captioning type.
- the server device may override the closed-captioning when the closed-captioning is not applicable to a program-type display such as a menu or program guide.
- the HTTP client module 930 may provide formatted HTTP signals to and from the interface module 910 .
- a remote user interface module 934 allows receivers 900 associated with the media server to communicate remote control commands and status to the server.
- the remote user interface module 934 may be in communication with the receiving module 936 .
- the receiving module 936 may receive the signals from a remote control associated with the display and convert them to a form usable by the remote user interface module 934 .
- the remote user interface module 934 allows the server to send graphics and audio and video to provide a full featured user interface within the receiver 900 .
- the remote user interface module may also receive data through the interface module 910 .
- modules such as the rendering module 912 and the remote user interface module 934 may communicate and render both audio and visual signals.
- a clock 940 may communicate with various devices within the system so that the signals and the communications between the server and receiver 900 are synchronized and controlled.
- the server 1000 is used for communicating with all or part of consumer systems 130 , such as the receiver 900 .
- the server 1000 may be part of the broadcast system 120 , as illustrated in FIGS. 1-2 , and, as mentioned above, may also be used for communication directly with a display.
- the server 1000 may be a standalone device or may be provided within another device.
- the server 1000 may be provided within or incorporated with a standard set top box.
- the server 1000 may also be included within a video gaming system, a computer, or other type of workable device.
- the functional blocks provided below may vary depending on the system and the desired requirements for the system.
- the server 1000 may be several different types of devices.
- the server 1000 may act as a set top box for various types of signals such as satellite signals or cable television signals.
- the server 1000 may also be part of a video gaming system. Thus, not all of the components are required for the server device set forth below.
- server 1000 may be in communication with various external content sources such as satellite television, cable television, the Internet or other types of data sources.
- a front end 1008 may be provided for processing signals, if required.
- the front end 1008 of the server device may include a tuner 1010 , a demodulator 1012 , a forward error correction (FEC) decoder module 1014 and any buffers associated therewith.
- the front end 1008 of the server 1000 may thus be used to tune and demodulate various channels for providing live or recorded television ultimately to the consumer system 130 .
- a conditional access module 1020 may also be provided. The conditional access module 1020 may allow the device to properly decode signals and prevent unauthorized reception of the signals.
- a format module 1024 may be in communication with a network interface module 1026 .
- the format module 1024 may receive the decoded signals from the decoder 1014 or the conditional access module 1020 , if available, and format the signals so that they may be rendered after transmission through the local area network through the network interface module 1026 to the consumer system 130 .
- the format module 1024 may generate a signal capable of being used as a bitmap or other types of renderable signals. Essentially, the format module 1024 may generate commands to control pixels at different locations of the display.
- the server 1000 receives and processes a video transport stream.
- the format module 1024 may receive content, such as a video transport stream that includes video content associated with a particular television program.
- the video transport stream may include metadata, for example, metadata described previously, such as metadata related to a point of interest.
- the format module 1024 may generate a zoomed television video output signal, based on the received metadata, and transmit the generated television video output signal, for example, to a consumer system 130 .
- the format module 1024 may generate metadata, for example, metadata described previously, such as metadata related to a point of interest.
- the format module 1024 may generate a television video output signal that is configured to be viewable on a graphic display and includes video content that is zoomed to a point of interest.
- the server 1000 may receive and process the video transport stream using different components or methods.
- the server 1000 may include a separate video processing system or component (not shown) to receive the video transport stream, to receive the data associated with the zoom request and/or to generate a zoomed television video output signal.
- the server 1000 may also be used for other functions including managing the software images for the client.
- a client image manager module 1030 may be used to keep track of the various devices that are attached to the local area network or attached directly to the server device.
- the client image manager module 1030 may keep track of the software major and minor revisions.
- the client image manager module 1030 may be a database of the software images and their status of update.
- a memory 1034 may also be incorporated into the server 1000 .
- the memory 1034 may be various types of memory or a combination of different types of memory. These may include, but are not limited to, a hard drive, flash memory, ROM, RAM, keep-alive memory, and the like.
- the memory 1034 may contain various data such as the client image manager database described above with respect to the client image manager module 1030 .
- the memory may also contain other data such as a database of connected clients 1036 .
- the database of connected clients may also include the client image manager module 1030 data.
- a trick play module 1040 may also be included within the server 1000 .
- the trick play module 1040 may allow the server 1000 to provide renderable formatted signals from the format module 1024 in a format to allow trick play such as rewinding, forwarding, skipping, and the like.
- An HTTP server module 1044 may also be in communication with the network interface module 1026 .
- the HTTP server module 1044 may allow the server 1000 to communicate with the local area network. The HTTP server module 1044 may also allow the server 1000 to communicate with external networks such as the Internet.
- a remote user interface (RUI) server module 1046 may control the remote user interfaces that are provided from the server 1000 to the consumer system 130 .
- a clock 1050 may also be incorporated within the server 1000 .
- the clock 1050 may be used to time and control various communications with various consumer systems 130 .
- a control point module 1052 may be used to control and supervise the various functions provided above within the server device.
- the server 1000 may support multiple consumer systems 130 within the local area network. Each consumer system 130 may be capable of receiving a different channel or data stream. Each consumer system 130 may be controlled by the server 1000 to receive a different renderable content signal.
- a closed-captioning control module 1054 may also be disposed within the server 1000 .
- the closed-captioning control module 1054 may receive inputs from a program-type determination module 1056 .
- the program-type determination module 1056 may receive the programming content to be displayed at a consumer system 130 and determine the type of program or display that the consumer system 130 will display.
- the program-type determination module 1056 is illustrated as being in communication with the format module 1024 . However, the program-type determination module 1056 may be in communication with various other modules, such as the decoder module 1014 .
- the program-type determination module 1056 may make a determination as to the type of programming that is being communicated to the consumer system 130 .
- the program-type determination module 1056 may determine whether the program is a live broadcasted program, a time-delayed or on-demand program, or a content-type that is exempt from using closed-captioning such as a menu or program guide.
- a closed-captioning disable signal may be provided to the closed-captioning control module 1054 to prevent the closed-captioning from appearing at the display associated with the consumer system 130 .
- the closed-captioning disable signal may be communicated from the closed-captioning control module 1054 through the format module 1024 or network interface module 1026 to the consumer system 130 .
- the consumer system 130 may disable the closed-captioning until a non-exempt programming-type, content-type, or a closed-captioning enable signal is communicated to the consumer system 130 .
- the consumer system 130 may disable the closed-captioning through the closed-captioning control module 920 illustrated in FIG. 9 as part of a receiver 900 .
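The server-side rule implied by modules 1054 and 1056, under which exempt program types such as menus and program guides suppress closed-captioning, can be sketched as follows. The type strings are illustrative assumptions:

```python
# Program types exempt from closed-captioning (the patent names menus and
# program guides as examples; the string identifiers here are invented).
EXEMPT_TYPES = {"menu", "program_guide"}

def closed_caption_signal(program_type, cc_enabled):
    """Return the signal the server would send to the consumer system:
    'disable' for exempt content, otherwise honor the user's CC setting."""
    if program_type in EXEMPT_TYPES:
        return "disable"
    return "enable" if cc_enabled else "disable"
```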
- the closed-captioning control module 1054 may also be in communication with a closed-captioning encoder 1058 .
- the closed-captioning encoder 1058 may encode the closed-captioning in a format so that the closed-captioning decoder module 918 of FIG. 9 may decode the closed-captioning signal.
- the closed-captioning encoder module 1058 may be optional since a closed-captioning signal may be received from the external source.
- any of the methods described herein may be provided in the form of instructions stored on a non-transitory computer-readable medium that, when executed by a computing device, cause the computing device to perform functions of the method. Further examples may also include articles of manufacture, including tangible computer-readable media that have computer-readable instructions encoded thereon; the instructions may comprise instructions to perform functions of the methods described herein.
- the computer-readable medium may include non-transitory computer-readable media, such as computer-readable media that store data for short periods of time like register memory, processor cache, and random access memory (RAM).
- the computer-readable medium may also include non-transitory media such as secondary or persistent long-term storage, like read-only memory (ROM), optical or magnetic disks, and compact-disc read-only memory (CD-ROM), for example.
- the computer readable media may also be any other volatile or non-volatile storage systems.
- the computer readable medium may be considered a computer readable storage medium, for example, or a tangible storage medium.
- circuitry may be provided that is wired to perform logical functions in any processes or methods described herein.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
Claims (25)
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/106,242 US9271048B2 (en) | 2013-12-13 | 2013-12-13 | Systems and methods for immersive viewing experience |
MX2016007550A MX2016007550A (en) | 2013-12-13 | 2014-11-18 | Systems and methods for immersive viewing experience. |
PCT/US2014/066202 WO2015088719A1 (en) | 2013-12-13 | 2014-11-18 | Systems and methods for immersive viewing experience |
UY0001035883A UY35883A (en) | 2013-12-13 | 2014-12-12 | SYSTEMS AND METHODS FOR SURROUNDING VISUALIZATION EXPERIENCE |
ARP140104653A AR098751A1 (en) | 2013-12-13 | 2014-12-12 | SYSTEMS AND METHODS FOR SURROUNDING VISUALIZATION EXPERIENCE |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/106,242 US9271048B2 (en) | 2013-12-13 | 2013-12-13 | Systems and methods for immersive viewing experience |
Publications (2)
Publication Number | Publication Date |
---|---|
US20150172775A1 US20150172775A1 (en) | 2015-06-18 |
US9271048B2 true US9271048B2 (en) | 2016-02-23 |
Family
ID=52101586
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/106,242 Active US9271048B2 (en) | 2013-12-13 | 2013-12-13 | Systems and methods for immersive viewing experience |
Country Status (5)
Country | Link |
---|---|
US (1) | US9271048B2 (en) |
AR (1) | AR098751A1 (en) |
MX (1) | MX2016007550A (en) |
UY (1) | UY35883A (en) |
WO (1) | WO2015088719A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160098180A1 (en) * | 2014-10-01 | 2016-04-07 | Sony Corporation | Presentation of enlarged content on companion display device |
US20160182771A1 (en) * | 2014-12-23 | 2016-06-23 | Electronics And Telecommunications Research Institute | Apparatus and method for generating sensory effect metadata |
US10404964B2 (en) | 2017-01-17 | 2019-09-03 | Nokia Technologies Oy | Method for processing media content and technical equipment for the same |
US11102543B2 (en) | 2014-03-07 | 2021-08-24 | Sony Corporation | Control of large screen display using wireless portable computer to pan and zoom on large screen display |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11153656B2 (en) * | 2020-01-08 | 2021-10-19 | Tailstream Technologies, Llc | Authenticated stream manipulation |
US10405046B2 (en) * | 2014-06-09 | 2019-09-03 | Lg Electronics Inc. | Service guide information transmission method, service guide information reception method, service guide information transmission device, and service guide information reception device |
CA2998482A1 (en) * | 2014-09-12 | 2016-03-17 | Kiswe Mobile Inc. | Methods and apparatus for content interaction |
US10735823B2 (en) * | 2015-03-13 | 2020-08-04 | Telefonaktiebolaget Lm Ericsson (Publ) | System and method for optimized delivery of live ABR media |
US10432688B2 (en) | 2015-03-13 | 2019-10-01 | Telefonaktiebolaget Lm Ericsson (Publ) | System and method for optimized delivery of live ABR media |
US10791285B2 (en) * | 2015-10-05 | 2020-09-29 | Woncheol Choi | Virtual flying camera system |
US10187687B2 (en) * | 2015-11-06 | 2019-01-22 | Rovi Guides, Inc. | Systems and methods for creating rated and curated spectator feeds |
US20170171495A1 (en) * | 2015-12-15 | 2017-06-15 | Le Holdings (Beijing) Co., Ltd. | Method and Electronic Device for Displaying Live Programme |
CN105760238B (en) * | 2016-01-29 | 2018-10-19 | 腾讯科技(深圳)有限公司 | The treating method and apparatus and system of graphics instructional data |
WO2017196670A1 (en) | 2016-05-13 | 2017-11-16 | Vid Scale, Inc. | Bit depth remapping based on viewing parameters |
US10102423B2 (en) * | 2016-06-30 | 2018-10-16 | Snap Inc. | Object modeling and replacement in a video stream |
EP4336850A3 (en) | 2016-07-08 | 2024-04-17 | InterDigital Madison Patent Holdings, SAS | Systems and methods for region-of-interest tone remapping |
WO2018017936A1 (en) * | 2016-07-22 | 2018-01-25 | Vid Scale, Inc. | Systems and methods for integrating and delivering objects of interest in video |
US20180310066A1 (en) * | 2016-08-09 | 2018-10-25 | Paronym Inc. | Moving image reproduction device, moving image reproduction method, moving image distribution system, storage medium with moving image reproduction program stored therein |
WO2018035133A1 (en) | 2016-08-17 | 2018-02-22 | Vid Scale, Inc. | Secondary content insertion in 360-degree video |
CN108124167A (en) * | 2016-11-30 | 2018-06-05 | 阿里巴巴集团控股有限公司 | A kind of play handling method, device and equipment |
CN110301136B (en) | 2017-02-17 | 2023-03-24 | 交互数字麦迪逊专利控股公司 | System and method for selective object of interest scaling in streaming video |
CN110383848B (en) | 2017-03-07 | 2022-05-06 | 交互数字麦迪逊专利控股公司 | Customized video streaming for multi-device presentation |
JP6463826B1 (en) * | 2017-11-27 | 2019-02-06 | 株式会社ドワンゴ | Video distribution server, video distribution method, and video distribution program |
US20190253751A1 (en) * | 2018-02-13 | 2019-08-15 | Perfect Corp. | Systems and Methods for Providing Product Information During a Live Broadcast |
JP2020005038A (en) * | 2018-06-25 | 2020-01-09 | キヤノン株式会社 | Transmission device, transmission method, reception device, reception method, and program |
US11082752B2 (en) * | 2018-07-19 | 2021-08-03 | Netflix, Inc. | Shot-based view files for trick play mode in a network-based video delivery system |
CN109343923B (en) * | 2018-09-20 | 2023-04-07 | 聚好看科技股份有限公司 | Method and equipment for zooming user interface focus frame of intelligent television |
US11012750B2 (en) * | 2018-11-14 | 2021-05-18 | Rohde & Schwarz Gmbh & Co. Kg | Method for configuring a multiviewer as well as multiviewer |
US11589094B2 (en) * | 2019-07-22 | 2023-02-21 | At&T Intellectual Property I, L.P. | System and method for recommending media content based on actual viewers |
JP7384008B2 (en) * | 2019-11-29 | 2023-11-21 | 富士通株式会社 | Video generation program, video generation method, and video generation system |
US11966500B2 (en) * | 2020-08-14 | 2024-04-23 | Acronis International Gmbh | Systems and methods for isolating private information in streamed data |
CN114637890A (en) * | 2020-12-16 | 2022-06-17 | 花瓣云科技有限公司 | Method for displaying label in image picture, terminal device and storage medium |
US11985389B2 (en) * | 2021-07-12 | 2024-05-14 | Avago Technologies International Sales Pte. Limited | Object or region of interest video processing system and method |
Citations (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2002047393A1 (en) | 2000-12-07 | 2002-06-13 | Thomson Licensing S.A. | Coding process and device for the displaying of a zoomed mpeg2 coded image |
US20030208771A1 (en) | 1999-10-29 | 2003-11-06 | Debra Hensgen | System and method for providing multi-perspective instant replay |
WO2004040896A2 (en) | 2002-10-30 | 2004-05-13 | Nds Limited | Interactive broadcast system |
US20040119815A1 (en) | 2000-11-08 | 2004-06-24 | Hughes Electronics Corporation | Simplified interactive user interface for multi-video channel navigation |
WO2005107264A1 (en) | 2004-04-30 | 2005-11-10 | British Broadcasting Corporation | Media content and enhancement data delivery |
US20050283798A1 (en) * | 2004-06-03 | 2005-12-22 | Hillcrest Laboratories, Inc. | Client-server architectures and methods for zoomable user interfaces |
US20070061862A1 (en) | 2005-09-15 | 2007-03-15 | Berger Adam L | Broadcasting video content to devices having different video presentation capabilities |
WO2007057875A2 (en) | 2005-11-15 | 2007-05-24 | Nds Limited | Digital video zooming system |
WO2007061068A1 (en) | 2005-11-28 | 2007-05-31 | Matsushita Electric Industrial Co., Ltd. | Receiver and line video distributing device |
US20080079754A1 (en) | 2006-07-27 | 2008-04-03 | Yoshihiko Kuroki | Content Providing Method, a Program of Content Providing Method, a Recording Medium on Which a Program of a Content Providing Method is Recorded, and a Content Providing Apparatus |
US20080172692A1 (en) * | 2007-01-16 | 2008-07-17 | Sony Corporation | Program distribution system and recording and reproduction device |
US20080172709A1 (en) | 2007-01-16 | 2008-07-17 | Samsung Electronics Co., Ltd. | Server and method for providing personal broadcast content service and user terminal apparatus and method for generating personal broadcast content |
US20090199232A1 (en) * | 2008-01-31 | 2009-08-06 | Panasonic Corporation | Recording and playing system, client terminal and server terminal |
EP2117231A1 (en) | 2008-05-06 | 2009-11-11 | Sony Corporation | Service providing method and service providing apparatus for generating and transmitting a digital television signal stream and method and receiving means for receiving and processing a digital television signal stream |
US20100067865A1 (en) * | 2008-07-11 | 2010-03-18 | Ashutosh Saxena | Systems, Methods and Devices for Augmenting Video Content |
US20100077441A1 (en) * | 2005-07-22 | 2010-03-25 | Genevieve Thomas | Buffering content on a handheld electronic device |
US20110299832A1 (en) * | 2010-06-02 | 2011-12-08 | Microsoft Corporation | Adaptive video zoom |
US20110302308A1 (en) * | 2010-06-04 | 2011-12-08 | Rich Prodan | Method and System for Providing User-Generated Content Via a Gateway |
US20110314496A1 (en) | 2010-06-22 | 2011-12-22 | Verizon Patent And Licensing, Inc. | Enhanced media content transport stream for media content delivery systems and methods |
US20130081082A1 (en) | 2011-09-28 | 2013-03-28 | Juan Carlos Riveiro Insua | Producing video bits for space time video summary |
US8644620B1 (en) * | 2011-06-21 | 2014-02-04 | Google Inc. | Processing of matching regions in a stream of screen images |
US20140150042A1 (en) * | 2012-11-29 | 2014-05-29 | Kangaroo Media, Inc. | Mobile device with location-based content |
2013
- 2013-12-13 US US14/106,242 patent/US9271048B2/en active Active

2014
- 2014-11-18 MX MX2016007550A patent/MX2016007550A/en unknown
- 2014-11-18 WO PCT/US2014/066202 patent/WO2015088719A1/en active Application Filing
- 2014-12-12 AR ARP140104653A patent/AR098751A1/en unknown
- 2014-12-12 UY UY0001035883A patent/UY35883A/en active IP Right Grant
Non-Patent Citations (6)
Title |
---|
International Search Report and Written Opinion dated May 19, 2015 in International Application No. PCT/US2014/066202 filed Nov. 18, 2014 by Woei-Shyang Yee et al. |
Invitation to Pay Additional Fees and, Where Applicable, Protest Fee and Communication Relating to the Results of the Partial International Search dated Feb. 20, 2015 in International Application No. PCT/US2014/066202 filed Nov. 18, 2014 by Woei-Shyang Yee et al. |
Kang, Kyeongok et al.; "Metadata Broadcasting for Personalized Service: A Practical Solution"; ETRI Journal, Electronics and Telecommunications Research Institute; Korea; vol. 26, No. 5; Oct. 1, 2004; pp. 452-466; XP002513087; ISSN: 1225-6463; DOI:10.4218/ETRIJ.04.0603.0011. |
Kyeongok et al., "Metadata broadcasting for personalized service: a practical solution," ETRI Journal, Electronics and Telecommunications Research Institute, vol. 26, No. 5, Oct. 1, 2004, pp. 452-466. |
Schreer et al., "Ultrahigh-Resolution Panoramic Imaging for Format-Agnostic Video Production," Proceedings of the IEEE, vol. 101, No. 1, Jan. 1, 2013, pp. 99-114. |
Schreer, Oliver et al.; "Ultrahigh-Resolution Panoramic Imaging for Format-Agnostic Video Production"; Proceedings of the IEEE, IEEE; New York, US; vol. 101, No. 1; Jan. 1, 2013; pp. 99-114; XP011482309; ISSN: 0018-9219; DOI: 10.1109/JPROC.2012.2193850.
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11102543B2 (en) | 2014-03-07 | 2021-08-24 | Sony Corporation | Control of large screen display using wireless portable computer to pan and zoom on large screen display |
US20160098180A1 (en) * | 2014-10-01 | 2016-04-07 | Sony Corporation | Presentation of enlarged content on companion display device |
US20160182771A1 (en) * | 2014-12-23 | 2016-06-23 | Electronics And Telecommunications Research Institute | Apparatus and method for generating sensory effect metadata |
US9936107B2 (en) * | 2014-12-23 | 2018-04-03 | Electronics And Telecommunications Research Institute | Apparatus and method for generating sensory effect metadata |
US10404964B2 (en) | 2017-01-17 | 2019-09-03 | Nokia Technologies Oy | Method for processing media content and technical equipment for the same |
Also Published As
Publication number | Publication date |
---|---|
UY35883A (en) | 2015-06-30 |
US20150172775A1 (en) | 2015-06-18 |
WO2015088719A1 (en) | 2015-06-18 |
AR098751A1 (en) | 2016-06-08 |
MX2016007550A (en) | 2016-10-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9271048B2 (en) | Systems and methods for immersive viewing experience | |
US9747723B2 (en) | Augmented reality for video system | |
AU2003269448B2 (en) | Interactive broadcast system | |
US10574933B2 (en) | System and method for converting live action alpha-numeric text to re-rendered and embedded pixel information for video overlay | |
US20130113996A1 (en) | Method of picture-in-picture for multimedia applications | |
CN110035316B (en) | Method and apparatus for processing media data | |
JP2007150747A (en) | Receiving apparatus and main line image distribution apparatus | |
JP6441247B2 (en) | Display control method | |
US20220141314A1 (en) | Methods, systems, and apparatus for presenting participant information associated with a media stream | |
JP2019180103A (en) | Display control method | |
US9860600B2 (en) | Display apparatus and control method thereof | |
US20150163445A1 (en) | User interface techniques for television channel changes | |
EP2228985A1 (en) | Combined television data stream, method for displaying television channel and method for generating combined television data stream | |
KR101452902B1 (en) | Broadcasting receiver and controlling method thereof | |
CN109391779B (en) | Method of processing video stream, content consumption apparatus, and computer-readable medium | |
US20180359503A1 (en) | Method And System For Communicating Inserted Material To A Client Device In A Centralized Content Distribution System | |
Sotelo et al. | Experiences on hybrid television and augmented reality on ISDB-T | |
JP2021016187A (en) | Display device | |
US10322348B2 (en) | Systems, methods and apparatus for identifying preferred sporting events based on fantasy league data | |
US10477283B2 (en) | Carrier-based active text enhancement | |
US20170318340A1 (en) | Systems, Methods And Apparatus For Identifying Preferred Sporting Events Based On Viewing Preferences | |
JP5192325B2 (en) | Video playback apparatus and video playback method | |
JP6616211B2 (en) | Broadcast receiver | |
EP3528505A1 (en) | Apparatus and method for operating a media device to select from plural instances of content for play back | |
JP2014192756A (en) | Video receiver and video receiving method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: THE DIRECTV GROUP, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YEE, WOEI-SHYANG;YANG, CHRISTOPHER;HUIE, WESLEY W.;AND OTHERS;REEL/FRAME:032433/0378 Effective date: 20140313 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
AS | Assignment |
Owner name: DIRECTV, LLC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:THE DIRECTV GROUP, INC.;REEL/FRAME:057021/0221 Effective date: 20210728 |
|
AS | Assignment |
Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT, NEW YORK Free format text: SECURITY AGREEMENT;ASSIGNOR:DIRECTV, LLC;REEL/FRAME:057695/0084 Effective date: 20210802 |
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. AS COLLATERAL AGENT, TEXAS Free format text: SECURITY AGREEMENT;ASSIGNOR:DIRECTV, LLC;REEL/FRAME:058220/0531 Effective date: 20210802 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT, TEXAS Free format text: SECURITY AGREEMENT;ASSIGNOR:DIRECTV, LLC;REEL/FRAME:066371/0690 Effective date: 20240124 |