US20180007422A1 - Apparatus and method for providing and displaying content
- Publication number
- US20180007422A1 (application Ser. No. 15/280,947)
- Authority
- US
- United States
- Prior art keywords
- bit rate
- content
- content item
- rate version
- high bit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/156—Mixing image signals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/44029—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display for generating different versions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/10527—Audio or video recording; Data buffering arrangements
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/036—Insert-editing
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/102—Programmed access in sequence to addressed parts of tracks of operating record carriers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/61—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/61—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
- H04L65/611—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for multicast or broadcast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/61—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
- H04L65/612—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/75—Media network packet handling
- H04L65/762—Media network packet handling at the source
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/75—Media network packet handling
- H04L65/764—Media network packet handling at the destination
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/80—Responding to QoS
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
- H04N13/378—Image reproducers using viewer tracking for tracking rotational head movements around an axis perpendicular to the screen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
- H04N13/383—Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/23439—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements for generating different versions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/238—Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
- H04N21/2387—Stream processing in response to a playback request from an end-user, e.g. for trick-play
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/4223—Cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
- H04N21/4355—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reformatting operations of additional data, e.g. HTML pages on a television screen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/441—Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/4508—Management of client data or end-user data
- H04N21/4532—Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/462—Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
- H04N21/4621—Controlling the complexity of the content stream or additional data, e.g. lowering the resolution or bit-rate of the video stream for a mobile client with a small screen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/4728—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/482—End-user interface for program selection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/65—Transmission of management data between client and server
- H04N21/658—Transmission by the client directed to the server
- H04N21/6587—Control parameters, e.g. trick play commands, viewpoint selection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/816—Monomedia components thereof involving special video data, e.g 3D video
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/10527—Audio or video recording; Data buffering arrangements
- G11B2020/1062—Data buffering arrangements, e.g. recording or playback buffers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
Definitions
- the present invention relates generally to video processing and display.
- Video streaming is increasingly becoming one of the main ways that media contents are delivered and accessed. Video streaming traffic also accounts for a large portion of Internet bandwidth consumption.
- One embodiment provides a method for displaying content, comprising: determining a focal area of a viewer of a content item displayed on a display device, retrieving a low bit rate version of the content item, retrieving a portion of a high bit rate version of the content item corresponding to the focal area, combining the portion of the high bit rate version of the content item with the low bit rate version of the content item to generate a combined image, and causing the combined image to be displayed to the viewer via the display device.
- Another embodiment provides a system for displaying content, comprising: a display device, a sensor device, and a processor coupled to the display device and the sensor device.
- the processor being configured to: determine, with the sensor device, a focal area of a viewer of a content item displayed on the display device, retrieve a low bit rate version of the content item, retrieve a portion of a high bit rate version of the content item corresponding to the focal area, combine the portion of the high bit rate version of the content item with the low bit rate version of the content item to generate a combined image, and cause the combined image to be displayed to the viewer via the display device.
- Another embodiment provides a non-transitory computer readable storage medium storing one or more computer programs configured to cause a processor based system to execute steps comprising: determining a focal area of a viewer of a content item displayed on a display device, retrieving a low bit rate version of the content item, retrieving a portion of a high bit rate version of the content item corresponding to the focal area, combining the portion of the high bit rate version of the content item with the low bit rate version of the content item to generate a combined image; and causing the combined image to be displayed to the viewer via the display device.
- Another embodiment provides a method for providing content, comprising: receiving a content item, generating a low bit rate version of the content item, receiving a content request from a playback device, the content request comprising an indication of a viewer focal area, selecting a portion of a high bit rate version of the content item based on the viewer focal area, and providing the low bit rate version of the content item and the portion of the high bit rate version of the content item to the playback device in response to the content request.
- Another embodiment provides a system for providing content comprising: a memory device, a communication device, and a processor coupled to the memory device and the communication device.
- the processor being configured to: receive a content item, generate a low bit rate version of the content item, store the high bit rate version of the content item and the low bit rate version of the content item on the memory device, receive, via the communication device, a content request from a playback device, the content request comprising an indication of a viewer focal area, select a portion of the high bit rate version of the content item based on the viewer focal area, and provide the low bit rate version of the content item and the portion of the high bit rate version of the content item to the playback device in response to the content request.
- FIG. 1 is a process diagram illustrating a process for providing content in accordance with some embodiments of the present invention.
- FIG. 2 is a flow diagram illustrating a method for providing content in accordance with some embodiments of the present invention.
- FIG. 3 is a flow diagram illustrating a method for displaying content in accordance with some embodiments of the present invention.
- FIGS. 4A and 4B are illustrations of a content display area in accordance with some embodiments of the present invention.
- FIG. 5 is an illustration of image blending in accordance with some embodiments of the present invention.
- FIGS. 6A and 6B are illustrations of image cells in accordance with some embodiments.
- FIGS. 7A and 7B are illustrations of focal areas in accordance with some embodiments.
- FIG. 8 is a block diagram illustrating a system in accordance with some embodiments of the present invention.
- Digital video content may be stored and transmitted in a variety of formats. Factors such as the video's resolution, frame rate, coding format, compression scheme, and compression factor can affect the total size and bit rate of the video file.
- bit rate generally refers to the number of bits used per unit of playback time to represent a continuous medium such as audio or video.
- the encoding bit rate of a multimedia file may refer to the size of a multimedia file divided by the playback time of the recording (e.g. in seconds).
- the bit rate of a video content file affects whether the video can be streamed without interruptions under network bandwidth constraints between a streaming server and a playback device.
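- As a concrete illustration of this definition, the short Python sketch below (not from the patent) computes an average encoding bit rate from file size and playback time:

```python
# Average encoding bit rate: total size in bits divided by playback time.
size_bytes = 300 * 1024 * 1024        # a 300 MB recording
duration_s = 600                      # that plays for 10 minutes
bit_rate_bps = size_bytes * 8 / duration_s
print(f"average bit rate: {bit_rate_bps / 1e6:.1f} Mbit/s")  # ~4.2 Mbit/s
```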
- In step 111, video content is captured by a camera system.
- the camera system may comprise one or more of a conventional camera system, a stereoscopic camera system, a panoramic camera system, a surround view camera system, a 360-degree camera system, an omnidirectional camera system, and the like.
- In step 112, the captured video is encoded and transmitted to a server.
- the encoding performed in step 112 may comprise lossy or lossless video encoding.
- the video may comprise live-streamed or prerecorded video content.
- the camera may communicate with the server via wireless or wired means by way of a network, such as for example the Internet.
- the camera performing steps 111 and 112 may comprise a segmented video capture device such as those described in U.S. Provisional Patent Application No. 62/357,259, filed on Jun. 30, 2016, entitled “APPARATUS AND METHOD FOR CAPTURING AND DISPLAYING SEGMENTED CONTENT”, the entire disclosure of which is hereby fully incorporated by reference herein in its entirety.
- With a segmented video capture device, each captured video stream may be provided as a separate video stream to the server or may be combined into a single video stream prior to step 112.
- the server decodes the video content received from the camera.
- the decoded video may comprise a video in the originally captured resolution, frame rate, and/or bit rate.
- the server reduces the bit rate of the decoded video stream.
- the bit rate of the video content may be reduced by one or more of: reducing the resolution of the video, reducing the frame rate of the video, and compressing the video with a compression algorithm.
- the reduced bit rate video is encoded and prepared for streaming to a playback device.
- steps 122 and 123 may comprise a single step. For example, an encoding algorithm may be used to reduce the bit rate of the received content.
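- As a non-authoritative sketch of such a single-pass reduction, the following Python snippet drives ffmpeg to apply all three reduction means at once; the file names, target frame rate, and bit rate cap are illustrative assumptions:

```python
import subprocess

def make_low_bitrate_version(src: str, dst: str) -> None:
    """Re-encode a received video into a reduced bit rate rendition by
    halving the resolution, lowering the frame rate, and capping the
    encoder bit rate in a single ffmpeg pass."""
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-vf", "scale=iw/2:ih/2",          # reduce resolution
        "-r", "24",                        # reduce frame rate
        "-c:v", "libx264", "-b:v", "1M",   # lossy compression, ~1 Mbit/s
        dst,
    ], check=True)

make_low_bitrate_version("capture.mp4", "capture_low.mp4")
```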
- one or more portions of the received video are extracted from the received video.
- A portion of a content item may generally refer to a spatial section of the video content display area.
- a portion of the content may comprise an area of the content display area spanning one or more frames.
- the extraction in step 125 may be performed by partially decoding the received content.
- step 125 may be performed in response to receiving a viewer focal area from a playback device and the extracted portion may correspond to the location of the viewer's focal area in the content.
- step 125 may be performed on the content preliminarily and one or more portions may be extracted and stored for later retrieval by playback devices.
- the extracted portion is encoded and prepared for streaming to the playback device.
- high and low bit rates are relative terms referring to the relative bit rates of the at least two versions of a video content item provided from the server to a playback device.
- the server may generate at least one low bit rate version of the received video and extract at least a portion of a version of the content item having a higher bit rate as compared to the low bit rate version.
- multiple versions of a video content item having different bit rates may be created by the server.
- bit rate reduction may also be performed on the received video prior to extracting portions of the content in step 125 and/or performed on the portion extracted in step 125 .
- a high bit rate version of the content item has a higher average bit rate than the low bit rate version of the content item over the duration of the video content.
- the bit rate of the high bit rate version of the content item may be higher than the low bit rate version of the content item for some or all of temporal segments of the video content.
- the video stream containing the extracted portion of the high bit rate version of the content item may have a lower bit rate as compared to the video stream comprising the low bit rate version of the content item.
- the portion of the high bit rate version of the content item may cover a significantly smaller display area of the content as compared to the low bit rate version, resulting in the lower bit rate of the extracted portion.
- the low bit rate version of the content item may comprise a lower resolution, frame rate, and/or compression quality as compared to the high bit rate version of the content item. In some embodiments, the low bit rate version of the content item may comprise a lower video quality and/or definition as compared to the high bit rate version of the content item. In some embodiments, the low and high bit rate versions of the content may comprise constant bit rate (CBR) or variable bit rate (VBR) video streams.
- the server may communicate with the playback device by way of a network, such as for example the Internet.
- the playback device receives and decodes a low bit rate version of the video content and a portion of a high bit rate version of the video content.
- the portion of the high bit rate version of the video content may be selected based on the focal area of a viewer viewing the content via the playback device.
- the focal area of a viewer refers to an area of the viewer's field of vision that is or is likely to be in focus while the viewer views the content.
- the focal area may correspond to one or more of the central, paracentral, macular, near peripheral, and mid peripheral areas of the viewer's field of vision.
- the focal area of the viewer may be detected by a sensor device coupled to the playback device.
- IMU (Inertial Measurement Unit) data recorded by a capture device of the content item may be compared to the viewer's eye and/or head direction to determine the portion of the high bit rate video content to extract for the playback device.
- the low bit rate version of the video content and the portion of the high bit rate version of the video content may be transmitted as separate video streams from the server to the playback device.
- In step 132, the low bit rate version of the video content and the portion of the high bit rate version of the video content are combined.
- combining the video streams comprises combining the low bit rate version of the content item with the portion of the high bit rate version at the location in the content display area from which the high bit rate portion was extracted.
- step 132 comprises blending the two video streams by including a transition area between the high and low bit rate areas of the image to reduce the noticeability of the border between the two versions of the video content.
- step 132 further comprises scaling the low bit rate version of the video content to the resolution and/or frame rate of the high bit rate version of the content prior to combining the images.
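- A minimal sketch of this combining step, under the assumption of NumPy/OpenCV frame handling (the patent does not prescribe an implementation): the low bit rate frame is first upscaled to the high version's resolution, then the high bit rate tile is pasted at the location it was extracted from. A sketch of seam blending appears with the FIG. 3 discussion below.

```python
import numpy as np
import cv2  # OpenCV, used here only for scaling

def combine_frames(low_frame: np.ndarray, high_tile: np.ndarray,
                   origin: tuple[int, int],
                   out_size: tuple[int, int]) -> np.ndarray:
    """Upscale the low bit rate frame to the high bit rate resolution,
    then overwrite the focal region with the high bit rate tile.
    `origin` is the (y, x) corner the tile was extracted from."""
    out_w, out_h = out_size
    canvas = cv2.resize(low_frame, (out_w, out_h),
                        interpolation=cv2.INTER_LINEAR)
    y, x = origin
    h, w = high_tile.shape[:2]
    canvas[y:y + h, x:x + w] = high_tile  # tile assumed to fit in bounds
    return canvas
```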
- the combined image is displayed to the viewer.
- the combined image may be displayed via one or more of a flat screen display, a curved display, a dome display device, a head-mounted display device, an augmented reality display device, a virtual reality display device, and the like.
- the combined image may be viewed by a head mounted display such as the systems and devices described in U.S. patent application Ser. No. 15/085,887, filed on Mar. 30, 2016, entitled “Head-Mounted Display Tracking,” the entire disclosure of which is hereby fully incorporated by reference herein in its entirety.
- the high bit rate portion of the video content may be combined with the low bit rate version of the content at the server and encoded as a single video stream for transmission. While the resolution and the frame rate of such video streams may not be reduced as compared to a full high bit rate version, the overall size of the transmitted video stream may still be reduced by processing the area of the content outside of the focal area with a more lossy video compression algorithm before recombining the images.
- the portion of the content item corresponding to the user's focal area is provided in a relatively high bit rate and the remaining area of the content is provided in a relatively low bit rate.
- the network bandwidth demand for achieving interruption-free video streaming may be reduced by decreasing the overall bit rate of the streaming video content while maintaining the video quality in the focal area of the viewer's field of vision.
- Referring to FIG. 2, a method for providing content is shown.
- the steps in FIG. 2 may generally be performed by a processor-based device such as one or more of a computer system, a server, a cloud-based server, a content host, a streaming service host, a media server, and the like.
- the steps in FIG. 2 may be performed by one or more of the content server 810 and the playback device 820 described with reference to FIG. 8 , the server described with reference to FIG. 1 , and/or other similar devices.
- In step 210, the system receives a content item.
- the content item may comprise one or more of a movie, a TV show, a video clip, prerecorded video content, streaming video content, live-streamed video content, and the like.
- the video content may comprise a single video stream or a plurality of video streams captured by one or more of a stereoscopic camera system, a panoramic camera system, a surround view camera system, a 360-degree camera system, an omnidirectional camera system, and the like.
- the content item may be encoded via any encoding scheme such as MPEG, WMV, VP8, and the like.
- the system may further be configured to decode the received content item according to various encoding schemes in step 210.
- In step 220, the system generates a low bit rate version of the content item.
- the bit rate of the received content may be reduced by one or more of: reducing the resolution of the video, reducing the frame rate of the video, and compressing the video with a lossy compression algorithm.
- a lossy compression generally means that the compressed video lacks some information present in the original video.
- multiple low bit rate versions of the content item may be generated in step 220 and stored for retrieval by playback devices.
- In step 230, the system receives a content request.
- the content request may be received from a playback device such as a game console, a personal computer, a tablet computer, a television, a head mounted display (“HMD”), an augmented reality device, a virtual reality device, a wearable device, a portable user device, a smartphone, etc.
- the content request may identify one or more of the content item being requested, the requested temporal segment, an indication of the viewer's focal point and/or area, and/or other authentication information.
- the content request may be similar to a conventional streaming content request.
- the content request may comprise an indication of the viewer's focal area which may correspond to a point or an area in the content display area.
- the indication of the viewer's focal area may comprise a coordinate or a set of coordinates within the dimensions of a frame of the content.
- the indication of the viewer's focal area may be represented by a viewing angle.
- the focal area may be determined based on a sensor device associated with the playback device comprising one or more of an eye tracking sensor and a head tracking sensor.
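- A hypothetical content request carrying such a focal area indication is sketched below; the field names and structure are assumptions for illustration, as the patent does not specify a wire format:

```python
# One request variant expresses the focal area as a viewing angle,
# another as coordinates within the frame; both forms are shown here.
content_request = {
    "content_id": "concert-360",                        # requested content item
    "segment": {"start": "00:30:20", "duration_s": 2},  # temporal segment
    "focal_area": {
        "viewing_angle": {"yaw_deg": 12.5, "pitch_deg": -3.0},
        # "coordinates": {"x": 960, "y": 540},          # alternative form
    },
}
```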
- In step 240, the low bit rate version of the content is provided to the playback device in response to the content request received in step 230.
- multiple low bit rate versions of the content item may be generated in step 220 .
- the system may select from among the multiple low bit rate versions of the content item based on one or more of: the current or estimated network throughput between the playback device and the server, the available bandwidth at the server and/or the playback device, the requested video quality specified in the content request, the playback device's processing capacity, user settings, etc.
- the selection of the low bit rate version of the content item from a plurality of versions may be similar to conventional adaptive bit rate streaming methods.
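- The selection among renditions might resemble the following sketch of conventional adaptive bit rate logic; the 20% safety margin and the rendition table are assumptions:

```python
def pick_rendition(renditions: list[dict], throughput_bps: float) -> dict:
    """Pick the highest bit rate rendition that fits the measured network
    throughput, leaving some headroom. `renditions` is sorted by
    ascending "bit_rate" (bits per second)."""
    viable = [r for r in renditions if r["bit_rate"] <= 0.8 * throughput_bps]
    return viable[-1] if viable else renditions[0]

renditions = [{"name": "480p", "bit_rate": 1e6},
              {"name": "720p", "bit_rate": 3e6},
              {"name": "1080p", "bit_rate": 6e6}]
print(pick_rendition(renditions, 4e6)["name"])  # -> "720p"
```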
- In step 250, the system selects a portion of the high bit rate version of the content item based on the content request.
- the high bit rate version of a content item generally refers to a version of the content with a higher bit rate as compared to the low bit rate content provided in step 240.
- the high bit rate version of the content item may comprise a higher average bit rate than the low bit rate version of the content over the duration of the video content.
- the bit rate of the high bit rate version of the content item may be higher than the low bit rate version of the content.
- the high bit rate version of the content may comprise the original content received in step 210 .
- the high bit rate version of the content item may also comprise a reduced bit rate version of the originally received content item.
- the portion of the content selected in step 250 may be selected based on the viewer's focal area comprising one or more of a detected focal point and a predicted future focal point.
- the predicted future focal point may be predicted by the server and/or the playback device.
- the future focal point may be predicted based on one or more of the viewer's gaze path history, a gaze path profile associated with the viewer, gaze path data collected from a plurality of viewers, and a content provider provided standard gaze path. Examples of predicting the viewer's future focal point are described in U.S. patent application Ser. No. ______, filed on the same date as this application, entitled “APPARATUS AND METHOD FOR GAZE TRACKING”, by inventor Dennis D. Castleman, and identified by Attorney Docket No. 138627 [SCEA16004US00], the entire disclosure of which is hereby fully incorporated by reference herein in its entirety.
- a portion of the content may generally refer to a spatial portion of the display content area, such as a set of pixels within a frame. In some embodiments, a portion may comprise the same part of the display content area spanning a plurality of frames. In some embodiments, the portion selected in step 250 may generally correspond to the location of a viewer's focal area in the content display area. In some embodiments, the displayed area of the content may be divided into a plurality of sections. For example, the displayed area of the content may be divided into quadrants, 3×3 grids, 5×5 grids, etc. In some embodiments, one or more sections of the content display area that overlap the focal area of the viewer may be selected to comprise the portion of the high bit rate version of the content item provided to the playback device. In some embodiments, the focal area and/or the extracted portion of the content may comprise any shape and size. Examples of focal areas and portions extracted from content items are described in more detail with reference to FIGS. 4A-4B and FIGS. 7A-7B herein.
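- As a sketch of the grid-based selection just described (an assumed helper, not the patent's algorithm), the following function maps a circular focal area onto the overlapping cells of an N×N grid:

```python
def overlapping_cells(focal_x: float, focal_y: float, radius: float,
                      frame_w: int, frame_h: int, grid: int = 3) -> set:
    """Return (row, col) indices of the grid cells that a circular
    focal area touches, for a frame divided into grid x grid sections."""
    cell_w, cell_h = frame_w / grid, frame_h / grid
    cells = set()
    for row in range(grid):
        for col in range(grid):
            # Closest point of this cell to the focal centre.
            nx = min(max(focal_x, col * cell_w), (col + 1) * cell_w)
            ny = min(max(focal_y, row * cell_h), (row + 1) * cell_h)
            if (nx - focal_x) ** 2 + (ny - focal_y) ** 2 <= radius ** 2:
                cells.add((row, col))
    return cells

# A focal area centred in a 1920x1080 frame touches only the middle cell.
print(overlapping_cells(960, 540, 150, 1920, 1080))  # {(1, 1)}
```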
- the system may further select from a plurality of original and/or reduced bit rate versions of the content to extract the selected portion based on one or more of: the current or estimated network throughput between the playback device and the server, the available bandwidth at the server and/or the playback device, a requested video quality specified in the content request, the playback device's processing capacity, and user settings.
- the portion of the high bit rate version may be extracted from one of the reduced bit rate versions generated in step 220 .
- the high bit rate version of the content item may generally be selected from versions of the content item with higher bit rate as compared to the low bit rate version of the content item selected in step 240 .
- the system may be configured to provide two or more portions of the high bit rate version of the content item in step 270 .
- the system and/or the playback device may predict two or more likely future focal areas of the viewer.
- the system may then select two or more portions of the high bit rate version of the content item based on the two or more likely future focal areas of the viewer in step 250 .
- the playback device may be configured to select from among the provided portions shortly before playback based on the detected focal area.
- In step 260, the system determines whether the selected portion has been previously cached in the system. In some embodiments, when a portion of the high bit rate version of the content is extracted, the system may cache the portion for later use. In some embodiments, the system may preliminarily generate a plurality of extracted portions of the high bit rate version of the content item based on predicting the locations that viewers are likely to focus on in the displayed content. For example, preliminarily extracted portions may correspond to high activity areas and/or foreground areas of the displayed content. In some embodiments, the cached portions may each comprise an encoded video stream. In some embodiments, the system may be configured to automatically purge extracted portions that have not been used for a set period of time (e.g. hours, days, etc.).
- a set period of time e.g. hours, days, etc.
- each cached portion of the high bit rate version may be identified and retrieved with an area identifier and a time stamp identifier (e.g. section 3B, time 00:30:20-00:30:22).
- portions of the high bit rate version of the content may be stored in an encoded form in the cache and be made directly available for streaming to playback devices. If the selected portion has been previously cached, the system may provide the cached portion to the playback device in step 270 .
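- A minimal cache along these lines, keyed by section identifier and time stamp range and purged after an idle window, might look like the following; the class name and the 24-hour default are assumptions:

```python
import time

class PortionCache:
    """Cache of encoded high bit rate portions keyed by a section
    identifier and time stamp range, e.g. ("3B", "00:30:20-00:30:22")."""

    def __init__(self, max_idle_s: float = 24 * 3600):
        self.max_idle_s = max_idle_s
        self._entries: dict[tuple, tuple[bytes, float]] = {}

    def put(self, section: str, time_range: str, stream: bytes) -> None:
        self._entries[(section, time_range)] = (stream, time.time())

    def get(self, section: str, time_range: str) -> bytes | None:
        entry = self._entries.get((section, time_range))
        if entry is None:
            return None
        stream, _ = entry
        # Refresh the last-used time so active portions survive purging.
        self._entries[(section, time_range)] = (stream, time.time())
        return stream

    def purge(self) -> None:
        """Drop portions that have not been used within the idle window."""
        now = time.time()
        self._entries = {key: (stream, used)
                         for key, (stream, used) in self._entries.items()
                         if now - used < self.max_idle_s}
```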
- If the selected portion has not been previously cached, the system extracts a portion of the high bit rate version of the content in step 280.
- the portion may be extracted from the content received in step 210 .
- the portion may be extracted from one of the reduced bit rate versions of the originally received content.
- the portion may be extracted by first decoding the received content.
- the system may be configured to partially decode and extract a portion of the content from an encoded version of the content item.
- step 280 may further comprise processing the extracted portion to include a plurality of empty/transparent pixels or cells around the edge of the extracted portion.
- step 280 may further comprise separately encoding the extracted portion for streaming.
- the encoded portion of the high bit rate version of the content item may then be provided to the playback device in step 270 .
- the portion of the high bit rate version of the content item may be provided in a plurality of encoded video streams each corresponding to a predefined area (e.g. a cell in a grid) of the content display area.
- steps 270 and 240 may occur at substantially the same time to provide corresponding temporal segments of the same content item to the playback device.
- the low bit rate version of the content may be provided and buffered at the playback device prior to the corresponding high bit rate portion of the content item being provided in step 270 .
- the portion of the high bit rate version of the content item and the low bit rate version of the content item may be provided as two separately encoded and transmitted video streams.
- portions of the high bit rate version of the content item and the low bit rate version of the content item may be provided from different parts of a server system.
- a central server may be configured to stream low bit rate versions of content items to playback devices while a plurality of geographically dispersed server devices may be configured to extract and/or provide portions of the high bit rate versions of the same content item to nearby playback devices.
- steps 210 through 270 may be repeated for multiple content items.
- steps 250 - 270 may be repeated periodically as a viewer views a content item at the playback device.
- the playback device may periodically (e.g. every few milliseconds, seconds, frames, etc.) update the focal area of the viewer at the server, and the system may select a different portion of the high bit rate version of the content item based on the updated focal area of the viewer.
- the playback device may be configured to detect a change in the focal area and only notify the server when the location of the focal area changes. In some embodiments, if no focal area is detected, the system may skip steps 250-270 and only provide the low bit rate version of the content item to the playback device.
- the system may further select the lowest bit rate version of the content item to provide to the playback device in step 240 to reduce network bandwidth usage.
- the system may adjust the bit rate of the low and/or high bit rate versions of the content provided to reduce interruptions.
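- The periodic update and change-detection behaviour described above might be sketched as follows; `tracker` and `server` are hypothetical stand-ins for the sensor device and content server interfaces:

```python
import time

def focal_update_loop(tracker, server, interval_s: float = 0.05) -> None:
    """Poll the focal area every few tens of milliseconds and notify the
    server only when the detected area actually changes."""
    last_area = None
    while True:
        area = tracker.read_focal_area()    # None when no gaze is detected
        if area != last_area:
            server.update_focal_area(area)  # server reselects the portion
            last_area = area
        time.sleep(interval_s)
```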
- Referring to FIG. 3, a method for displaying content is shown.
- the steps in FIG. 3 may generally be performed by a processor-based device such as one or more of a game console, a personal computer, a tablet computer, a television, a head mounted display (“HMD”), an augmented reality device, a virtual reality device, a wearable device, a portable user device, a smartphone, a mobile device, and the like.
- the steps in FIG. 3 may be performed by one or more of the content server 810 and the playback device 820 described with reference to FIG. 8 , the playback device described with reference to FIG. 1 , or other similar devices.
- In step 310, the system determines a focal area of a viewer.
- the focal area may be determined based on a sensor device comprising one or more of an eye tracking sensor and a head tracking sensor.
- the head direction of the user may be determined by a head tracker device comprising one or more of an Inertial Measurement Unit (IMU), an accelerometer, gyroscope, an image sensor, and a range sensor.
- an IMU may comprise an electronic device that measures and reports a body's specific force, angular rate, and/or magnetic field surrounding the body, using a combination of accelerometers and gyroscopes, sometimes also magnetometers.
- the head tracker device may be coupled to a head mounted display (HMD) worn by the user.
- the gaze location of the user may be determined by an eye tracker device comprising one or more of an image sensor, an optical reflector sensor, a range sensor, an electromyography (EMG) sensor, and an optical flow sensor.
- the focal area may be determined based on one or more of a detected focal point and a predicted future focal point. In some embodiments, the future focal point may be predicted based on one or more of the viewer's gaze point history, a gaze path profile associated with the viewer, gaze path data collected from a plurality of viewers, and a content provider provided standard gaze path. In some embodiments, the focal area may be represented by a point of focus in a 2D or 3D space. In some embodiments, the focal area may be represented as a 3D angle, such as a direction represented by a spherical azimuthal angle (φ) and polar angle (θ). In some embodiments, the focal area may be represented by a 2D polar angle (θ).
- the focal area may correspond to the pitch, yaw, and roll of the viewer's head, eyes, and/or the display device.
- the system may compare the IMU data of the recorded content and the IMU data of the display device to determine the focal area of the viewer relative to the content.
- the size of the focal area may further be determined based on the viewer's distance from the display device. For example, for a television display, a smaller focal area may be associated with a viewer sitting 5 feet away from the screen while a larger focal area may be associated with a viewer sitting 10 feet away.
- the focal area may be approximated to an area of fixed size and shape around the user's focal point.
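- The distance dependence follows from simple trigonometry: a fixed visual angle subtends a larger on-screen radius the farther the viewer sits. The 18-degree field used below approximates the macular region of the field of vision and is an assumption, not a figure from the patent:

```python
import math

def focal_radius_m(view_distance_m: float, field_deg: float = 18.0) -> float:
    """On-screen radius covered by a fixed visual field at a distance."""
    return view_distance_m * math.tan(math.radians(field_deg / 2))

print(round(focal_radius_m(1.5), 2))  # viewer ~5 ft away -> ~0.24 m radius
print(round(focal_radius_m(3.0), 2))  # viewer ~10 ft away -> ~0.48 m radius
```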
- In step 320, the playback device retrieves a low bit rate version of a content item.
- a playback device sends a content request to a server hosting the content item in step 320 to retrieve the content item.
- the low bit rate version of the content item may comprise a reduced bit rate version of the content item generated by a content provider and/or the hosting service.
- step 320 may occur prior to step 310 and the low bit rate version of the content item may begin to be downloaded, buffered, and/or viewed prior to the focal area of the viewer being determined.
- step 320 may correspond to step 240 described with reference to FIG. 2 herein.
- In step 330, the playback device retrieves a portion of a high bit rate version of the content item.
- the playback device sends a content request identifying the focal area of the viewer determined in step 310 to a server to retrieve the portion of the high bit rate version of the content item.
- the retrieved portion may comprise a spatial portion of the content selected based on the focal area of the viewer.
- the retrieved portion may comprise a short temporal segment of an area of the content item (e.g. milliseconds, seconds, frames, etc.).
- the portion of the high bit rate version of the content item may be retrieved in a video stream separately encoded from the low bit rate version of the content item retrieved in step 320 .
- the low bit rate version of the content item may buffer ahead of the retrieval of the high bit rate version of the content item.
- step 330 may correspond to step 270 described with reference to FIG. 2 herein.
- In step 340, the system combines the portion of the high bit rate version of the content item with the low bit rate version of the content item to generate a combined image.
- the system first decodes the portion of the high bit rate version of the content item retrieved in step 330 and the low bit rate version of the content item retrieved in step 320 .
- the system may first adjust the resolution and/or frame rate of at least one of the versions prior to combining the images.
- the system may increase the resolution and/or frame rate of the low bit rate version of the content item to match the resolution and/or frame rate of the high bit rate portion by up-sampling and/or interpolating the decoded low bit rate version of the content item.
- the system may combine the two versions of the content item by replacing the pixels in the frames of the low bit rate version of the content item with pixels from the corresponding frames of the portion of the high bit rate version of the content item.
- the frames may be identified and matched by time stamps.
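A minimal sketch of the pixel-replacement step, assuming frames are time-stamp-matched numpy arrays and the portion's position in the content area is known (coordinates here are illustrative):

```python
import numpy as np

def replace_pixels(low_frame: np.ndarray, high_portion: np.ndarray,
                   top: int, left: int) -> np.ndarray:
    """Overwrite low bit rate pixels with the corresponding high bit rate portion."""
    combined = low_frame.copy()
    h, w = high_portion.shape[:2]
    combined[top:top + h, left:left + w] = high_portion
    return combined

frame = replace_pixels(np.zeros((1080, 1920, 3), np.uint8),
                       np.full((400, 600, 3), 255, np.uint8), top=300, left=700)
```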
- the image may further be blended to reduce the appearance of a border between the two versions of the content item.
- the system blends the versions of the content item by generating a transition area between the portion of the high bit rate version of the content and the low bit rate version of the content. In the transition area, the pixels containing information from the high bit rate version may gradually decrease from the high bit rate area towards the low bit rate area of the displayed content.
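One simple realization of such a transition area, sketched with a linear per-column alpha ramp (the exact falloff profile is not specified by the patent):

```python
import numpy as np

def blend_transition(low_strip: np.ndarray, high_strip: np.ndarray) -> np.ndarray:
    """Blend equal-sized strips; the high bit rate weight falls from 1 to 0 across the width."""
    width = low_strip.shape[1]
    alpha = np.linspace(1.0, 0.0, width).reshape(1, width, 1)  # per-column weight
    mixed = alpha * high_strip.astype(np.float32) + (1.0 - alpha) * low_strip.astype(np.float32)
    return mixed.astype(low_strip.dtype)
```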
- blending the portion of the high bit rate version of the content items with the low bit rate version of the content item may comprise grouping pixels into triangular cells for blending. Examples of the transition areas and blending are described with reference to FIGS. 5 and 6A-6B herein.
- the high bit rate portion may be provided in a pre-blended form from the server. For example, edges of the high bit rate portion may comprise a plurality of empty/transparent pixels with graduated density. The playback device may then overlay the high bit rate portion with the transparent pixels onto the low bit rate version of the content item without further processing the images and achieve the blended effect.
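Compositing such a pre-blended portion reduces to a standard alpha-over operation; a sketch assuming the server delivers the portion as RGBA with graduated edge transparency:

```python
import numpy as np

def overlay_preblended(low_rgb: np.ndarray, high_rgba: np.ndarray,
                       top: int, left: int) -> np.ndarray:
    """Alpha-composite a pre-blended RGBA portion over the low bit rate image."""
    out = low_rgb.astype(np.float32)
    h, w = high_rgba.shape[:2]
    alpha = high_rgba[:, :, 3:4].astype(np.float32) / 255.0  # graduated edge transparency
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = alpha * high_rgba[:, :, :3] + (1.0 - alpha) * region
    return out.astype(np.uint8)
```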
- in step 350, the combined image is displayed on a display device.
- the display device may comprise one or more of a monitor, a television set, a projector, a head mounted display (HMD), a virtual reality display device, a wearable device, a display screen, a mobile device, and the like.
- the system may further adjust the combined image based on the display device's specifications. For example, for virtual reality display devices, the system may adjust for the warp and distortions associated with the device.
- steps 310 to 350 may be repeated continuously as a viewer views a content item.
- different portions of the high bit rate version of the content item may be retrieved in step 330 and combined with the low bit rate version in step 340 over time.
- step 320 may occur independently of steps 310 and 330 .
- if no focal point is detected, the system may retrieve only the low bit rate version of the content item to display and skip steps 330-350 until a focal point is detected again.
- the system may further be configured to determine a view area of the viewer and retrieve only a portion of the low bit rate content based on a view area of the viewer in step 320 .
- the view area of the viewer may be determined based on one or more of eye tracking and head tracking similar to the determination of the focal area in step 310 .
- the view area of the viewer may generally refer to the area of the content that is visible to the user but may or may not be in focus in the viewer's field of vision.
- the view area may comprise an area surrounding the focal area.
- the portion of the low bit rate version of the content item retrieved may exclude areas of the content area not within the view area.
- the portion of the low bit rate version of the content item retrieved may further exclude the focal area and only include the area that is assumed to be visible to the viewer but not in focus.
- the retrieved portion of the low bit rate version of the content item may correspond to one or more of the near, mid, and far peripheral vision area of the viewer's field of vision.
- the content area 400 represents the entire image area of a content item. While the content area 400 is shown to be a rectangle, in some embodiments, the content area 400 may correspond to a cylinder, a sphere, a semi-sphere, etc. for immersive content and/or omnidirectional video content.
- the content area 400 may generally comprise any shape, aspect ratio, and size without departing from the spirit of the present disclosure.
- the focal point 410 represents the viewer's point of focus within the content. In some embodiments, the focal point 410 may correspond to a detected focal point and/or a predicted focal point.
- the focal area 412 represents an area around the focal point 410 that is likely to be in focus within the viewer's field of vision.
- the focal area may comprise one or more of the central, paracentral, macular, near peripheral, and mid peripheral areas of the viewer's field of vision.
- the size and shape of the focal area 412 are shown as examples only. The relative sizes of the focal area 412 and the content area 400 may also vary.
- the shape and size of the focal area 412 may be calibrated for each individual user and/or be estimated based on the viewer's profile containing one or more of viewer demographic information, viewing habits, user feedback, user settings, etc.
- the size of the focal area 412 may further be determined based on the viewer's distance from the display screen. In some embodiments, for display device types with a fixed distance between the eyes of the viewer and the display screen (e.g. HMDs), the size of the focal area 412 may generally be assumed to remain the same.
- the playback device may be configured to retrieve a portion of the high bit rate version of the content item corresponding to the focal area 412 .
- the content area 400 may be divided into a grid comprising a plurality of sections.
- sections of the content area 400 overlapping the focal area 412 may comprise the portion of the high bit rate version of the content item retrieved by the playback device.
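A sketch of this section-selection logic, assuming a rectangular grid and a circular focal area (both shapes and all dimensions are illustrative assumptions):

```python
def overlapping_sections(rows: int, cols: int, width: int, height: int,
                         fx: float, fy: float, radius: float) -> list:
    """Return (row, col) indices of grid sections intersecting the focal circle."""
    cell_w, cell_h = width / cols, height / rows
    hits = []
    for r in range(rows):
        for c in range(cols):
            # Closest point of the cell rectangle to the focal point.
            nx = min(max(fx, c * cell_w), (c + 1) * cell_w)
            ny = min(max(fy, r * cell_h), (r + 1) * cell_h)
            if (nx - fx) ** 2 + (ny - fy) ** 2 <= radius ** 2:
                hits.append((r, c))
    return hits

print(overlapping_sections(4, 8, 3840, 2160, fx=1900.0, fy=1000.0, radius=400.0))
```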
- the high bit rate version of the content item may be displayed in the portion of the content area corresponding to the focal area 412 and the low bit rate version of the content item may be displayed in the remaining portion of the content area 400 .
- the high bit rate area may not be an exact match to the size and shape of the focal area 412 but may generally substantially cover the focal area 412 .
- the portion of the high bit rate version of the content item may be extracted to closely match the shape and size of the focal area 412 .
- in FIG. 4B, another illustration of a content display area is shown.
- the content area 400 , the focal point 410 , and the focal area 412 in FIG. 4B may generally be similar to the corresponding elements in FIG. 4A .
- the system may further determine a view area 414 surrounding the focal area 412 as shown in FIG. 4B.
- the view area 414 may generally refer to the area of the content that is visible to the user but may or may not be in focus in the viewer's field of vision.
- the portion of the low bit rate version of the content item retrieved may exclude areas of the content area 400 outside of the view area 414 .
- the portion of the low bit rate version of the content item retrieved may further exclude the focal area 412 and only include the area that is assumed to be visible to the viewer but not in focus.
- the view area may correspond to one or more of the near, mid, and far peripheral vision area of the viewer's field of vision.
- the content area 400 may correspond to an immersive video content and/or an omnidirectional video content captured by a plurality of image sensors.
- the view area 414 may be used to select and stitch a plurality of separately encoded video streams as described in U.S. Provisional Patent Application No. 62/357,259, filed on Jun. 30, 2016, entitled “APPARATUS AND METHOD FOR CAPTURING AND DISPLAYING SEGMENTED CONTENT” the entire disclosure of which is hereby fully incorporated by reference herein in its entirety.
- if the view area 414 overlaps two of the four video streams captured by a multi-camera system, the low bit rate version of the content item retrieved may comprise only the two corresponding streams.
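Stream selection then reduces to an interval-overlap test; a sketch for a four-camera rig with assumed 90-degree horizontal coverage per stream (wraparound at 360 degrees is ignored for brevity):

```python
def select_streams(view_start_deg: float, view_end_deg: float) -> list:
    """Indices of the 90-degree streams whose coverage overlaps the view area."""
    selected = []
    for i in range(4):
        start, end = i * 90.0, (i + 1) * 90.0
        if start < view_end_deg and end > view_start_deg:  # intervals overlap
            selected.append(i)
    return selected

print(select_streams(80.0, 170.0))  # view area spans two streams -> [0, 1]
```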
- the focal area 412 may also comprise data from a plurality of separately encoded video streams that are stitched at the playback device.
- FIG. 5 may represent a combined image displayed in step 350 of FIG. 3 .
- the displayed image comprises a low bit rate area 510 , a high bit rate area 512 , and a transition area 511 .
- pixels containing information from the high bit rate area 512 may gradually decrease from the high bit rate area 512 toward the low bit rate area 510 .
- blending the portion of the high bit rate version of the content with the low bit rate version of the content item comprises grouping pixels in the transition area 511 into cells for blending.
- each set of grouped pixels may contain data from one of the versions of the content item or the other.
- the size and shape of the transition area 511 are shown as an example only, and the transition area 511 may be of any size, shape, and thickness.
- the transition area 511 surrounds the high bit rate area and includes interleaved data from both the high bit rate area 512 and the low bit rate area 510 to reduce the appearance of a border between the two areas.
- FIG. 6A shows a sphere divided into a plurality of triangular cells.
- the sphere may correspond to the content area of an omnidirectional and/or immersive video content.
- each cell may comprise a unit for blending images.
- triangular cells better adapt to the curvature of a sphere and are less noticeable to human eyes as compared to square or rectangular cells.
- the triangular cells may further be subdivided into smaller triangular cells to provide for adjustable granularity in blending.
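A sketch of that subdivision: splitting a triangle at its edge midpoints yields four smaller triangles, which could be applied recursively for finer blending granularity (the vertex representation is an assumption):

```python
def subdivide(triangle):
    """Split a triangle, given as three (x, y, z) vertices, into four children."""
    a, b, c = triangle
    mid = lambda p, q: tuple((pi + qi) / 2.0 for pi, qi in zip(p, q))
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

children = subdivide(((0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
```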
- FIG. 6B illustrates blending using triangular cells.
- the cells in FIG. 6B may represent a section of a transition area between two versions of a content item. In FIG. 6B, cells labeled with “1” may contain data from one version of the content item and cells labeled with “2” may contain data from a different version of the content item.
- each cell in FIG. 6B may be subdivided into smaller triangular cells for more granular blending.
- a transition area may have any number of rows or columns of triangular cells.
- each cell shown in FIGS. 6A and 6B may be merged or subdivided to form triangular cells of different sizes for blending images.
- the focal area of a viewer may be determined based on the area of the content that is likely to be in focus in a viewer's field of vision.
- the focal area is approximated to an oval.
- the focal area may be approximated to a circle, a square, etc. by the system.
- FIGS. 7A and 7B illustrate other shapes that may represent the shape of the focal area used by the system.
- the shape shown in FIG. 7A approximates the shape of the human field of vision with two merged ovals having aligned major axes.
- the retrieved portion of the high bit rate content item discussed herein may correspond to one or more of the shapes shown in FIGS. 4A-4B and 7A-7B, a circle, a square, a rectangle, and the like.
- in FIG. 8, there is shown a system for providing and displaying content that may be used to run, implement, and/or execute any of the methods and techniques shown and described herein in accordance with some embodiments of the present invention.
- the system includes a content server 810 and a playback device 820 communicating over a data connection such as a network.
- the content server 810 includes a processor 812 , a memory 813 , and a communication device 814 .
- the content server 810 may generally comprise one or more processor-based devices accessible by the playback device via a network such as the Internet.
- the content server may comprise one or more of a cloud-based server, a content host, a streaming service host, a media server, a streaming video server, a broadcast content server, a social networking server, and the like.
- the processor 812 may comprise one or more of a control circuit, a central processor unit, a graphical processor unit (GPU), a microprocessor, a video decoder, a video encoder and the like.
- the memory 813 may include one or more volatile and/or non-volatile computer readable memory devices. In some embodiments, the memory 813 stores computer executable code that causes the processor 812 to provide content to the playback device 820.
- the communication device 814 may comprise one or more of a network adapter, a data port, a router, a modem, and the like. Generally, the communication device 814 may be configured to allow the processor 812 to communicate with the playback device 820 . In some embodiments, the processor 812 may be configured to provide a low bit rate version of a content item and a portion of a high bit rate version of the content item to the playback device 820 based on a request from the playback device 820 .
- the request may comprise an identification of the requested content item and/or an indication of a focal area of the viewer of the content item.
- the processor 812 may be configured to generate and/or store at least one of the low bit rate version of the content item and one or more portions of the high bit rate version of the content item based on a received content item.
- the memory 813 and/or a separate content library may store one or more content items each comprising at least two versions of the content item having different bit rates.
- the content server 810 may be configured to stream the content recorded by a capture device to the playback device 820 in substantially real-time.
- the content server 810 may be configured to host a plurality of prerecorded content items for streaming and/or downloading to the playback devices 820 on-demand. While only one playback device 820 is shown in FIG. 8 , the content server 810 may be configured to simultaneously receive content from a plurality of capture devices and/or provide content to a plurality of playback devices 820 via the communication device 814 .
- the content server 810 may be configured to facilitate peer-to-peer transfer of video streams between capture devices and playback devices 820 .
- the low bit rate version of the content item may be transferred via a peer-to-peer network while portions of the high bit rate content item may be transferred via the content server 810 .
- the content server 810 may be configured to provide the low bit rate version of the content item and the portion of the high bit rate version of the content item in separately encoded video streams.
- the content server 810 may further be configured to pre-process the content item before providing the content item to the playback device 820 .
- the content server 810 may soften the edges of the extracted portion of the high bit rate version of the content item by including empty/transparent pixels at the edges prior to providing the portion of the high bit rate content to the playback device 820.
- the playback device 820 may blend the video streams by simply combining the pixel data from the two versions without performing further image processing.
- the content server 810 may be configured to combine a low bit rate version of a content item with a portion of the high bit rate version of the content prior to providing the combined content to the playback device 820 .
- While one content server 810 is shown, in some embodiments, functionalities of the content server 810 may be implemented on one or more processor-based devices. In some embodiments, the content servers 810 for providing low bit rate versions of contents and for providing high bit rate versions of contents may be separately implemented. For example, a central content server may be configured to provide low bit rate versions of contents while a plurality of geographically distributed content servers may be configured to provide portions of the high bit rate versions of contents to playback devices.
- the playback device 820 comprises a processor 821 , a memory 823 , a display device 825 , and a sensor device 827 .
- the playback device 820 may generally comprise a processor-based device such as one or more of a game console, a media console, a set-top box, a personal computer, a tablet computer, a television, a head mounted display (“HMD”), an augmented reality device, a virtual reality device, a wearable device, a portable user device, a smartphone, etc.
- the processor 821 may comprise one or more of a control circuit, a central processor unit (CPU), a graphical processor unit (GPU), a microprocessor, a video decoder and the like.
- the memory 823 may include one or more volatile and/or non-volatile computer readable memory devices.
- the memory 823 stores computer executable code that causes the processor 821 to determine a focal area of a user and retrieve a content item from the content server 810.
- the playback device 820 may be configured to retrieve a low bit rate version and a portion of a high bit rate version of the content item from the content server 810 and/or from a local storage and combine the two versions to generate a combined image to display to the user via the display device 825 .
- the memory 823 may comprise a buffer for buffering one or more versions of the content item retrieved from the content server 810 .
- the computer executable code stored in the memory 823 may comprise one or more of a computer program, a software program, a playback device firmware, a mobile application, a game and/or media console application, etc.
- the display device 825 may comprise a device for displaying content to a viewer.
- the display device 825 may comprise one or more of a monitor, a television, a head mounted display (HMD), a virtual reality display device, a wearable device, a display screen, a mobile device, and the like.
- the display device 825 may comprise a stereoscopic display having one or more screens.
- the sensor device 827 may comprise one or more sensors configured to determine a focal point and/or focal area of a viewer of the display device 825.
- the sensor device 827 may comprise one or more of an image sensor, an optical reflector sensor, a range sensor, an electromyography (EMG) sensor, and an optical flow sensor for detecting eye and/or head movement.
- the sensor device 827 may comprise an IMU that measures and reports a body's specific force, angular rate, and/or magnetic field surrounding the body, using a combination of accelerometers and gyroscopes, sometimes also magnetometers.
- the sensor device 827 may be coupled to an HMD and/or a wearable device that allows the sensor to detect the motion of the user's head or eyes via the motion of the HMD and/or wearable device.
- the sensor device 827 may comprise an optical sensor for detecting one or more of a head motion and eye-motion of the user.
- the sensor may be coupled to an HMD and/or a wearable device and/or be a relatively stationary device that captures data from the viewer from a distance.
- the display device 825 may comprise a separate device with or without a separate processor.
- the display device 825 may be coupled to the playback device 820 via a wired or wireless communication channel.
- the playback device 820 may comprise a PC or a game console and the display device 825 may comprise an HMD configured to display content from the playback device 820 .
- the sensor device 827 may be part of the playback device 820 , the display device 825 , and/or may be a physically separated device communicating with one or more of the playback device 820 and the display device 825 .
- one or more of the display device 825 and the sensor device 827 may be integrated with the playback device 820 .
- the display device 825 may further comprise a processor and/or a memory for at least partially storing the retrieved content and/or the viewer's eye or head movement detected by the sensor device 827 .
- the playback device 820 may further include a communication device such as a network adapter, a Wi-Fi transceiver, a mobile data network transceiver, etc. for requesting and downloading content items from the content server 810 and/or a capture device.
- the playback device 820 may further include one or more user input/output devices such as buttons, a controller, a keyboard, a display screen, a touch screen and the like for the user to control the selection and playback of content items.
- one or more of the embodiments, methods, approaches, and/or techniques described above may be implemented in one or more computer programs or software applications executable by a processor based apparatus or system.
- processor based apparatus or systems may comprise a computer, entertainment system, game console, workstation, graphics workstation, server, client, portable device, pad-like device, etc.
- Such computer program(s) may be used for executing various steps and/or features of the above-described methods and/or techniques. That is, the computer program(s) may be adapted to cause or configure a processor based apparatus or system to execute and achieve the functions described above.
- such computer program(s) may be used for implementing any embodiment of the above-described methods, steps, techniques, or features.
- such computer program(s) may be used for implementing any type of tool or similar utility that uses any one or more of the above described embodiments, methods, approaches, and/or techniques.
- program code macros, modules, loops, subroutines, calls, etc., within or without the computer program(s) may be used for executing various steps and/or features of the above-described methods and/or techniques.
- the computer program(s) may be stored or embodied on a computer readable storage or recording medium or media, such as any of the computer readable storage or recording medium or media described herein.
- the present invention provides a computer program product comprising a medium for embodying a computer program for input to a computer and a computer program embodied in the medium for causing the computer to perform or execute steps comprising any one or more of the steps involved in any one or more of the embodiments, methods, approaches, and/or techniques described herein.
- the present invention provides one or more non-transitory computer readable storage mediums storing one or more computer programs adapted or configured to cause a processor based apparatus or system to execute steps comprising: determining a focal area of a viewer of a content item displayed on a display device, retrieving a low bit rate version of the content item, retrieving a portion of a high bit rate version of the content item corresponding to the focal area, combining the portion of the high bit rate version of the content with the low bit rate version of the content item to generate a combined image, and causing the combined image to be displayed to the viewer via the display device.