US20180192022A1 - Method and System for Real-time Rendering and Displaying Virtual Reality (VR) On Mobile Using Head-Up Display Devices
- Publication number: US20180192022A1
- Application number: US 15/860,471
- Authority: US (United States)
- Prior art keywords: coordinate data, matrix, model, fov, view
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
- G06T3/40—Scaling of whole images or parts thereof
- G06T9/001—Model-based coding, e.g. wire frame
- G06T15/04—Texture mapping
- G06T15/205—Image-based rendering
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T2215/16—Using real world measurements to influence rendering
- H04L43/0888—Network monitoring: throughput
- H04L65/75—Media network packet handling
- H04L65/752—Media network packet handling adapting media to network capabilities
- H04L67/131—Protocols for games, networked simulations or virtual reality
- H04N5/76—Television signal recording
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3179—Video signal processing therefor
- H04N9/3185—Geometric adjustment, e.g. keystone or convergence
- H04N9/8715—Regeneration of colour television signals involving mixing of the reproduced video signal with a non-recorded signal
- H04N13/117—Transformation of image signals corresponding to virtual viewpoints selected by the viewers or determined by viewer tracking
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents
- H04N13/139—Format conversion, e.g. of frame-rate or size
- H04N13/161—Encoding, multiplexing or demultiplexing different image signal components
- H04N13/189—Recording image signals; Reproducing recorded image signals
- H04N13/275—Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/344—Displays with head-mounted left-right displays
- H04N13/378—Image reproducers using viewer tracking for rotational head movements around an axis perpendicular to the screen
- H04N13/383—Image reproducers using viewer tracking with gaze detection
- H04N13/398—Stereoscopic/multi-view systems: synchronisation thereof; control thereof
- H04N13/0018, H04N13/0029, H04N13/0055, H04N13/0275, H04N13/044, H04N13/0484—legacy H04N13/00-series codes
- H04N19/40—Video transcoding
- H04N19/44—Decoders specially adapted therefor
- H04N19/70—Syntax aspects related to video coding, e.g. compression standards
- H04N21/231—Content storage operation, e.g. caching or replicating data over plural servers
- H04N21/2335—Processing of audio elementary streams involving reformatting operations
- H04N21/234—Processing of video elementary streams
- H04N21/2343—Reformatting of video signals for distribution or compliance with end-user device requirements
- H04N21/234345—Reformatting performed only on part of the stream, e.g. a region of the image or a time segment
- H04N21/234363—Reformatting by altering the spatial resolution
- H04N21/234381—Reformatting by altering the temporal resolution, e.g. frame skipping
- H04N21/2387—Stream processing in response to a playback request from an end-user, e.g. trick-play
- H04N21/2393—Handling client requests on the upstream path
- H04N21/2405—Monitoring of the internal components or processes of the server, e.g. server load
- H04N21/2662—Controlling the complexity of the video stream, e.g. scaling resolution or bitrate to client capabilities
- H04N21/4122—Client peripherals: additional display device, e.g. video projector
- H04N21/44—Client-side processing of video elementary streams
- H04N21/44012—Rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
- H04N21/440236—Reformatting by media transcoding on the client
- H04N21/816—Monomedia components involving special video data, e.g. 3D video
- H04N21/8456—Structuring of content by decomposing it into time segments
Definitions
- the present disclosure relates to video processing technology, and more particularly, to a method and a system for real-time rendering displaying virtual reality (VR) using head-up display devices.
- Virtual Reality (VR) is a computer simulation technology for creating and experiencing a virtual world. For example, a three-dimensional real-time image can be presented based on a technology which tracks a user's head, eyes or hand.
- in a network-based virtual reality technology, full-view video data is pre-stored on a server, and then transmitted to a display device, such as glasses. A video is displayed on the display device in accordance with the viewing angle of the user.
- a VR playback system should be optimized as much as possible in terms of software, so as to reduce resource consumption, improve processing efficiency and meanwhile avoid degrading users' viewing experience.
- the present disclosure provides a method and a system for real-time rendering and displaying virtual reality (VR) using head-up display devices to solve the above problems.
- a method for real-time rendering and displaying virtual reality (VR) using head-up display devices, comprising:
- obtaining relevant parameters including a camera matrix, a projection matrix, a model matrix and a center position of lens distortion;
- the step of obtaining relevant parameters comprises:
- the camera matrix and the projection matrix are adjusted to achieve binocular-mode viewing effects.
- the step of obtaining first coordinate data in accordance with the relevant parameters and the original coordinate data of the 3D model comprises:
- left_view_matrix and right_view_matrix represent respectively the camera matrix for the left eye and the camera matrix for the right eye
- mat4_view is the camera matrix, which can be generated directly in accordance with the rotation angles of a gyroscope
- eye_ipd represents the eye distance parameter
- fov_left, fov_right, fov_up, fov_down, far, near represent the parameters relevant to field of view in binocular mode
- P^MVP_(x,y,z) represents the first coordinate data
- P^original_(x,y,z) represents the original coordinate data
- mat4_model represents the model matrix
- mat4_projection represents the projection matrix
- the camera matrices left_view_matrix and right_view_matrix for the left and right eyes are respectively substituted into equation (6) in place of mat4_view to obtain the first coordinate data P^MVP_(x,y,z).
- the step of performing lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data comprises:
- (x_d, y_d) are the distorted image field coordinates after lens projection, i.e., the first coordinate data
- (x_u, y_u) are the corrected image field coordinates
- (x_c, y_c) is the center position of lens distortion
- K_n is the n-th radial distortion coefficient
- P_n is the n-th tangential distortion coefficient
- r is the distance from a pixel to the optical axis.
- the coordinates of the center position of lens distortion are obtained by the following steps:

x_center^window_pixel = lerp(x_center^normal, −1, 1, 0, width_window)

y_center^window_pixel = lerp(y_center^normal, −1, 1, 0, height_window)   (12)
- the method further comprises: adding a blackout mask.
- the method further comprises: acquiring real-time data from the gyroscope, and performing data smoothing and corner prediction while the VR video data is played, to achieve anti-shake.
- the equation used for performing data smoothing is equation (13), wherein:
- ⁇ t is a fusion rotation angle based on time t
- k is a fusion weight constant
- ⁇ is an angular velocity read by an accelerometer
- ⁇ is an angle read from the gyros
- ⁇ t is a difference between an output time moment and its previous time moment
- ⁇ t is a fusion rotation angle based on time t
- angularSpeed is an angular velocity read by the accelerometer
- predictionTimeS is a prediction time constant
- ⁇ is a rotation prediction threshold
- the gyroscope and the accelerometer are provided on a head-up display device.
- the method further comprises: using relevant interfaces provided by OpenGL and WebGL to perform the corresponding steps.
- a system for real-time rendering and displaying virtual reality (VR) using head-up display devices, comprising:
- a parameter calculating unit configured to obtain relevant parameters including a camera matrix, a projection matrix, a model matrix and a center position of lens distortion;
- a model building unit configured to create a 3D model and obtain original coordinate data of the 3D model
- a coordinate calculating unit configured to obtain first coordinate data in accordance with the relevant parameters and the original coordinate data of the 3D model
- a lens distortion unit configured to perform lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data
- a rasterization unit configured to rasterize the second coordinate data to obtain pixel information
- an image drawing unit configured to draw an image based on the VR video data and the pixel information.
- the binocular-mode VR immersive viewing effect is achieved by performing lens distortion on the coordinate data of the 3D model. Because the lens distortion on the coordinate data is performed in the 3D model, video and immersive rendering can be realized in a single processing pass, thereby improving rendering efficiency.
- FIG. 1 is a diagram illustrating an example network of a VR playback system
- FIG. 2 is a flowchart diagram showing a method used in the VR playback system of FIG. 1;
- FIG. 3 is a flowchart diagram of a method for real-time rendering and displaying virtual reality (VR) using head-up display devices according to an embodiment of the present disclosure;
- FIG. 4 is an example diagram of a head-up display device
- FIG. 5 is a specific flowchart diagram showing the step of obtaining relevant parameters mentioned in the method for real-time rendering and displaying virtual reality (VR) using head-up display devices described in FIG. 3;
- FIG. 6 is a schematic diagram of a parameter transfer process between a computer processor and a display chip;
- FIG. 7 is a schematic diagram of a system for real-time rendering and displaying virtual reality (VR) using head-up display devices, according to an embodiment of the present disclosure.
- FIG. 1 is a diagram illustrating an example network of a VR playback system.
- the VR playback system 10 includes a server 100 and a display device 120, which are coupled with each other through a network 110, and a VR device 130.
- the server 100 may be a stand-alone computer server or a server cluster.
- the server 100 is used to store various video data and to store various applications that process these video data.
- various daemons run on the server 100 in real time, so as to process various video data in the server 100 and to respond to various requests from VR devices and the display device 120.
- the network 110 may be a selected one or selected ones from the group consisting of the Internet, a local area network, an Internet of Things, and the like.
- the display device 120 may be any computing device having an independent display screen and processing capability.
- the display device 120 may be a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a palmtop computer, a personal digital assistant, a smart phone, an intelligent electrical apparatus, a game console, an iPad/iPhone, a video player, a DVD recorder/player, a television, or a home entertainment system.
- the display device 120 may store VR player software as a VR player. When the VR player is started, it requests and downloads various video data from the server 100 , and renders and plays the video data in the display device.
- the VR device 130 is a stand-alone head-up display device that can interact with the display device 120 and the server 100 , to communicate the user's current information with the display device 120 and/or the server 100 through signaling.
- the user's current information includes, for example, parameters relevant to the user's field of view, the position of the user's helmet, and changes in gaze direction. According to this information, the display device 120 can flexibly process the currently played video data. In some embodiments, when a user turns his head, the display device 120 determines that the core viewing region for the user has changed and starts to play high-resolution video data in the changed core viewing region.
- the VR device 130 is a stand-alone head-up display device.
- the VR device 130 is not limited thereto, and the VR device 130 may also be an all-in-one head-up display device.
- the all-in-one head-up display device itself has a display screen, so it is not necessary to connect it to an external display device.
- the display device 120 may be omitted.
- the all-in-one head-up display device is configured to obtain video data from the server 100 and to perform playback operation, and the all-in-one head-up display device is also configured to detect a user's current viewing angle changing information and to adjust the playback operation according to the viewing angle changing information.
- FIG. 2 is a flowchart diagram showing a method used in the VR playback system of FIG. 1 . The method includes the following steps.
- In step S10, a video data processing procedure is operated on the server.
- In step S20, the display device obtains relevant information by interacting with the VR device.
- In step S30, the display device requests the server to provide video data and receives the video data.
- In step S40, the display device renders the received video data.
- the video data obtained from the server is used to draw an image, i.e., the video data is played.
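- As a rough sketch of the client side of this flow (steps S20 to S40), the TypeScript below requests video data matching the device and hands it to a renderer. The endpoint path, query parameter, device fields and render callback are hypothetical illustrations, not part of the patent.

```typescript
// Hypothetical client flow for steps S20-S40: query the VR device info,
// request matching video data from the server, then render (play) it.
interface DeviceInfo { fovDegrees: number; ipdMeters: number; }

async function playbackFlow(
  serverUrl: string,                                  // assumed server base URL
  device: DeviceInfo,                                 // S20: info obtained from the VR device
  render: (data: ArrayBuffer, d: DeviceInfo) => void, // S40: rendering entry point
): Promise<void> {
  // S30: the display device requests the server to provide video data.
  const resp = await fetch(`${serverUrl}/video?fov=${device.fovDegrees}`);
  const videoData = await resp.arrayBuffer();
  render(videoData, device);
}
```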
- FIG. 3 is a flowchart diagram of a method for real-time rendering and displaying virtual reality (VR) using head-up display devices according to an embodiment of the present disclosure.
- the method implements playing the video data in binocular mode.
- the method includes the following steps.
- In step S100, the relevant parameters are obtained.
- the relevant parameters are calculated based on specification of a head-up display device and a screen size.
- the relevant parameters include parameters for field of view of left and right lenses, a camera matrix, a projection matrix, a model matrix and a center position of lens distortion.
- FIG. 4 is an example diagram of a head-up display device.
- the head-up display device includes a stand and left and right lenses on the stand, and human eyes obtain images from the left and right view areas through the left and right lenses. Because the left and right view areas provide images with disparity, the human brain, after obtaining the information with disparity, produces a three-dimensional sense.
- Different types of head-up devices have different specifications and parameters. Generally, the specifications and parameters can be obtained by querying websites or built-in parameter files, and the relevant parameters required in the rendering process can then be calculated from them.
- In step S200, a 3D model is built, and the original coordinate data of the 3D model is obtained.
- a suitable 3D model can be created in accordance with requirements.
- a polygonal sphere can be created as the 3D model and the original coordinate data can be obtained based on the polygonal sphere.
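- As an illustration of this step, the sketch below builds a polygonal sphere and its original coordinate data (vertex positions plus UV coordinates) in TypeScript. The band counts and the equirectangular UV layout are assumptions for the sketch, not values taken from the patent.

```typescript
// Minimal UV-sphere generator: vertex positions form the "original
// coordinate data" of the 3D model; UVs map an equirectangular video frame.
interface SphereMesh {
  positions: Float32Array; // x, y, z per vertex
  uvs: Float32Array;       // u, v per vertex
  indices: Uint16Array;    // triangle list
}

function createSphere(radius = 1, latBands = 32, lonBands = 64): SphereMesh {
  const positions: number[] = [];
  const uvs: number[] = [];
  const indices: number[] = [];
  for (let lat = 0; lat <= latBands; lat++) {
    const theta = (lat * Math.PI) / latBands;     // 0..pi, pole to pole
    for (let lon = 0; lon <= lonBands; lon++) {
      const phi = (lon * 2 * Math.PI) / lonBands; // 0..2pi around the axis
      positions.push(
        radius * Math.sin(theta) * Math.cos(phi),
        radius * Math.cos(theta),
        radius * Math.sin(theta) * Math.sin(phi),
      );
      uvs.push(lon / lonBands, lat / latBands);
    }
  }
  for (let lat = 0; lat < latBands; lat++) {
    for (let lon = 0; lon < lonBands; lon++) {
      const a = lat * (lonBands + 1) + lon;
      const b = a + lonBands + 1;
      indices.push(a, b, a + 1, b, b + 1, a + 1); // two triangles per quad
    }
  }
  return {
    positions: new Float32Array(positions),
    uvs: new Float32Array(uvs),
    indices: new Uint16Array(indices),
  };
}
```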
- In step S300, first coordinate data is obtained in accordance with the relevant parameters and the original coordinate data of the 3D model.
- In step S400, lens distortion is performed on the first coordinate data based on the center position of lens distortion to obtain second coordinate data.
- That is, in step S300, vector calculation is performed on the original coordinate data in accordance with the camera matrix, the projection matrix and the model matrix, and the calculated coordinate data is taken as the first coordinate data; in step S400, the first coordinate data is further distorted to obtain the second coordinate data.
- In step S500, the second coordinate data is rasterized to obtain pixel information.
- That is, the second coordinate data is processed into pixel information on a plane.
- In step S600, an image is drawn based on the VR video data and the pixel information.
- That is, the VR video data downloaded from the server is decoded, the decoded pixel values are assigned in accordance with the rasterized pixel information, and finally the image is drawn.
- the original coordinate data in the 3D model is lens-distorted, and then the pixel information is assigned to the distorted coordinate data, so as to achieve binocular-mode viewing effects. Because the lens distortion is performed while the 3D model is processed, the video and the binocular-mode rendering are implemented in one processing pass, which roughly doubles the rendering efficiency of the existing scheme.
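- The single-pass idea can be pictured as a vertex-stage warp. The sketch below, a WebGL vertex shader carried in a TypeScript string, applies a purely radial distortion immediately after the model-view-projection transform, so texturing the video and the binocular warp happen in one draw. The uniform names and the two-coefficient radial model are assumptions; the patent's full correction (equations (9) and (10) in the Description) also includes tangential terms, and equation (6) writes the matrix product in a different order.

```typescript
// Illustrative WebGL vertex shader: MVP transform followed by radial lens
// distortion about a per-eye distortion center.
const vertexShaderSrc = `
  attribute vec3 aPosition;
  attribute vec2 aUv;
  uniform mat4 uModel;      // mat4_model
  uniform mat4 uView;       // mat4_view (per eye: left/right view matrix)
  uniform mat4 uProjection; // mat4_projection
  uniform vec2 uLensCenter; // center of lens distortion, in NDC
  uniform vec2 uK;          // radial coefficients K1, K2 (assumed count)
  varying vec2 vUv;

  void main() {
    vec4 p = uProjection * uView * uModel * vec4(aPosition, 1.0);
    vec2 ndc = p.xy / p.w;           // perspective divide
    vec2 d = ndc - uLensCenter;
    float r2 = dot(d, d);
    ndc = uLensCenter + d * (1.0 + uK.x * r2 + uK.y * r2 * r2);
    gl_Position = vec4(ndc * p.w, p.zw); // re-apply w for clipping
    vUv = aUv;
  }
`;
```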
- Since the original coordinate data in the 3D model is lens-distorted in accordance with the relevant parameters obtained from information such as the specification of the head-up display device and the screen size, the lens distortion effect can be adjusted by adjusting the relevant parameters to achieve a better rendering effect.
- the above method further includes: obtaining real-time data of the gyroscope and performing data smoothing and corner prediction while the VR video data is played, to achieve anti-shake.
- the above method further includes adding a blackout mask.
- the blackout mask can be seen in FIG. 6; adding the blackout mask can improve the immersive effect of VR viewing.
- FIG. 5 is a specific flowchart diagram showing the step of obtaining relevant parameters mentioned in the method for real-time rendering and displaying virtual reality (VR) using head-up display devices described in FIG. 3.
- the method includes the following steps.
- In step S101, the parameters such as field of view are obtained according to the specification of the head-up display device and the screen size.
- In step S102, the eye distance parameter is obtained according to the specification of the head-up display device.
- In step S103, the model matrix is obtained.
- In step S104, the camera matrix is calculated.
- In step S105, the center position of lens distortion is calculated.
- In step S106, the projection matrix is calculated.
- the center position of lens distortion and the eye distance are illustrated in FIG. 4.
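- Read as code, FIG. 5 is a single derivation from the device specification and screen size to the matrices and the distortion center. In the sketch below the specification fields, the symmetric field of view, and the screen-center placement of the distortion center are all illustrative assumptions; the patent actually locates the center via equation (12).

```typescript
// Sketch of steps S101-S106: derive the rendering parameters from the
// head-up display specification and the screen size.
interface HmdSpec { fovDegrees: number; ipdMeters: number; } // assumed fields
interface ScreenSize { width: number; height: number; }
interface Fov { left: number; right: number; up: number; down: number; }

function identity4(): Float32Array {
  return new Float32Array([1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1]);
}

// Off-axis perspective matrix from per-edge fov angles (column-major).
function projectionFromFov(f: Fov, near: number, far: number): Float32Array {
  const l = -Math.tan(f.left) * near, r = Math.tan(f.right) * near;
  const b = -Math.tan(f.down) * near, t = Math.tan(f.up) * near;
  return new Float32Array([
    (2 * near) / (r - l), 0, 0, 0,
    0, (2 * near) / (t - b), 0, 0,
    (r + l) / (r - l), (t + b) / (t - b), -(far + near) / (far - near), -1,
    0, 0, (-2 * far * near) / (far - near), 0,
  ]);
}

function deriveParams(spec: HmdSpec, screen: ScreenSize) {
  // S101: field-of-view parameters from the device spec and screen size.
  const half = (spec.fovDegrees * Math.PI) / 360;
  const fov: Fov = { left: half, right: half, up: half, down: half };
  // S102: eye distance parameter.      S103: model matrix (identity).
  // S104: camera matrix (identity here; rotated by gyro data at run time).
  // S105: center of lens distortion (screen center assumed for the sketch).
  // S106: projection matrix from the fov parameters.
  return {
    fov,
    eyeIpd: spec.ipdMeters,
    model: identity4(),
    view: identity4(),
    projection: projectionFromFov(fov, 0.1, 1000),
    lensCenter: [screen.width / 2, screen.height / 2] as [number, number],
  };
}
```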
- Table 1 is a variable definition table.
- the first coordinate data can be calculated by equation (6);
- mat4_view represents the camera matrix, which can be generated directly in accordance with the rotation angles of a gyroscope; left_view_matrix and right_view_matrix are respectively the camera matrices for the left and right eyes; eye_ipd represents the eye distance parameter;
- fov_left, fov_right, fov_up, fov_down, far, near represent the parameters relevant to field of view in binocular mode;
- the model matrix mat4_model is set to be an identity matrix;
- P^MVP_(x,y,z) represents the first coordinate data;
- P^original_(x,y,z) represents the original coordinate data;
- mat4_model represents the model matrix;
- mat4_projection represents the projection matrix;
- the camera matrices left_view_matrix and right_view_matrix for the left and right eyes are respectively substituted into equation (6) in place of mat4_view to obtain the first coordinate data P^MVP_(x,y,z).
- For the above step S400, reference can be made to the calculation steps in the following example.
- the linear interpolation between two vectors can be performed in accordance with t by using the equation lerp.
- the coordinates (x_center^window_pixel, y_center^window_pixel) of the center position of lens distortion can be solved according to the projection matrix mat4_projection and the screen size width_window * height_window, where the coordinate (x_center^normal, y_center^normal) is a point on the normalized coordinate axis of [−1, 1]:

x_center^window_pixel = lerp(x_center^normal, −1, 1, 0, width_window)

y_center^window_pixel = lerp(y_center^normal, −1, 1, 0, height_window)   (12)
- ⁇ t fusion rotation angle based on time t k fusion weight constant ⁇ ⁇ is an angular velocity read by an accelerometer (the accelerometer is provided on the head-up display device) ⁇ angles read from the gyros ⁇ t difference between the output time moment and the previous time moment
- FIG. 7 is a schematic diagram of a system for real-time rendering and displaying virtual reality (VR) using head-up display devices, according to an embodiment of the present disclosure.
- the system includes a parameter calculating unit 701 , a model building unit 702 , a coordinate calculating unit 703 , a lens distortion unit 704 , a rasterization unit 705 , and an image drawing unit 706 .
- the parameter calculating unit 701 is configured to obtain relevant parameters including a camera matrix, a projection matrix, a model matrix and a center position of lens distortion.
- the model building unit 702 is configured to create a 3D model and obtain the original coordinate data of the 3D model.
- the 3D model can be created based on WebGL and initialized to obtain UV coordinates.
- the coordinate calculating unit 703 is configured to obtain first coordinate data according to the relevant parameters and the original coordinate data of the 3D model.
- the first coordinate data is obtained by performing calculation based on the relevant parameters and the original coordinate data of the 3D model.
- the lens distortion unit 704 is configured to perform lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data. That is, the first coordinate data is distorted according to the center positions of the left and right lenses to obtain the second coordinate data.
- the rasterization unit 705 is configured to rasterize the second coordinate data to obtain pixel information.
- the image drawing unit 706 is configured to draw an image based on the VR video data and the pixel information.
- the binocular-mode VR immersive viewing effect is achieved by performing lens distortion on the coordinate data of the 3D model. Because the lens distortion on the coordinate data is performed in the 3D model, video and immersive rendering can be realized in a single processing pass, thereby improving rendering efficiency.
Abstract
Disclosed are a method and a system for real-time rendering and displaying virtual reality (VR) using head-up display devices. The method comprises: obtaining relevant parameters including a camera matrix, a projection matrix, a model matrix and a center position of lens distortion; creating a 3D model and obtaining original coordinate data of the 3D model; obtaining first coordinate data in accordance with the relevant parameters and the original coordinate data of the 3D model; performing lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data; rasterizing the second coordinate data to obtain pixel information; and drawing an image in accordance with VR video data and the pixel information. According to the present disclosure, the lens distortion on the coordinate data is performed in the 3D model, so that video and immersive rendering can be realized in a single processing pass, thereby improving rendering efficiency.
Description
- This application claims the priority and benefit of U.S. provisional application 62/441,936, filed on Jan. 3, 2017, which is incorporated herein by reference in its entirety.
- The present disclosure relates to video processing technology, and more particularly, to a method and a system for real-time rendering and displaying virtual reality (VR) using head-up display devices.
- Virtual Reality (VR) is a computer simulation technology for creating and experiencing a virtual world. For example, a three-dimensional real-time image can be presented based on a technology which tracks a user's head, eyes or hand. For a network-based virtual reality technology, full-view video data is pre-stored on a server, and then transmitted to a display device, such as glasses. A video is displayed on the display device in accordance with a viewing angle of the user.
- However, when the display device displays the video data, high-resolution video data occupies a lot of computing resources, and as a result, the display device is required to have a high data processing capability. But currently, different types of display devices on the market vary greatly in performance. In order to be compatible with these display devices, a VR playback system should be optimized as much as possible in terms of software, so as to reduce resource consumption, improve processing efficiency and meanwhile avoid degrading users' viewing experience.
- In view of this, the present disclosure provides a method and a system for real-time rendering and displaying virtual reality (VR) using head-up display devices to solve the above problems.
- According to a first aspect of the present disclosure, there is provided a method for real-time rendering and displaying virtual reality (VR) using head-up display devices, comprising:
- obtaining relevant parameters including a camera matrix, a projection matrix, a model matrix and a center position of lens distortion;
- creating a 3D model and obtaining the original coordinate data of the 3D model;
- obtaining first coordinate data in accordance with the relevant parameters and the original coordinate data of the 3D model;
- performing lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data;
- rasterizing the second coordinate data to obtain pixel information; and
- drawing an image in accordance with a VR video data and the pixel information.
- Preferably, the step of obtaining relevant parameters comprises:
- obtaining parameters relevant to field of view in accordance with specification of a head-up display device and a screen size;
- calculating the center position of lens distortion in accordance with the parameters relevant to field of view; and
- calculating the projection matrix in accordance with the parameters relevant to field of view.
- Preferably, the step of obtaining relevant parameters comprises:
- obtaining an eye distance parameter based on specification of a head-up display device; and
- calculating the camera matrix in accordance with the eye distance parameter.
- Preferably, the camera matrix and the projection matrix are adjusted to achieve binocular-mode viewing effects.
- Preferably, the step of obtaining first coordinate data in accordance with the relevant parameters and the original coordinate data of the 3D model comprises:
- calculating the camera matrices for the left and right eyes in binocular mode by equations (1) to (4):

[Equations (1) to (4) are not reproduced in this text.]

- wherein left_view_matrix and right_view_matrix respectively represent the camera matrix for the left eye and the camera matrix for the right eye, mat4_view is the camera matrix, which can be generated directly in accordance with the rotation angles of a gyroscope, and eye_ipd represents the eye distance parameter;
- calculating the projection matrix mat4_projection in binocular mode by equation (5):

[Equation (5) is not reproduced in this text.]

- wherein fov_left, fov_right, fov_up, fov_down, far, near represent the parameters relevant to field of view in binocular mode;
- setting mat4_model to be an identity matrix; and
- calculating the first coordinate data P^MVP_(x,y,z) by equation (6):

P^MVP_(x,y,z) = mat4_model * mat4_view * mat4_projection * P^original_(x,y,z)   (6)

- wherein P^MVP_(x,y,z) represents the first coordinate data, P^original_(x,y,z) represents the original coordinate data, mat4_model represents the model matrix, and mat4_projection represents the projection matrix; the camera matrices left_view_matrix and right_view_matrix for the left and right eyes are respectively substituted into equation (6) in place of mat4_view to obtain the first coordinate data P^MVP_(x,y,z).
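- Since equations (1) to (4) are not reproduced above, the per-eye construction in the sketch below is an assumption: it offsets mat4_view by half the eye distance along the x axis for each eye, then evaluates equation (6) with the matrices multiplied in the order the patent writes them.

```typescript
// Multiply two 4x4 column-major matrices: returns a * b.
function mul4(a: Float32Array, b: Float32Array): Float32Array {
  const out = new Float32Array(16);
  for (let c = 0; c < 4; c++) {
    for (let r = 0; r < 4; r++) {
      let s = 0;
      for (let k = 0; k < 4; k++) s += a[k * 4 + r] * b[c * 4 + k];
      out[c * 4 + r] = s;
    }
  }
  return out;
}

// Assumed reading of equations (1)-(4): each eye's camera matrix is the
// head camera matrix translated by half the eye distance along x.
function eyeViewMatrix(view: Float32Array, eyeIpd: number, eye: 'left' | 'right'): Float32Array {
  const tx = (eye === 'left' ? -eyeIpd : eyeIpd) / 2;
  const t = new Float32Array([1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  tx, 0, 0, 1]);
  return mul4(t, view);
}

// Equation (6), with the product taken in the order written in the patent:
// P_MVP = mat4_model * mat4_view * mat4_projection * P_original.
function applyMvp(
  model: Float32Array, view: Float32Array, projection: Float32Array,
  p: [number, number, number],
): [number, number, number] {
  const m = mul4(mul4(model, view), projection);
  const [x, y, z] = p;
  const w = m[3] * x + m[7] * y + m[11] * z + m[15];
  return [
    (m[0] * x + m[4] * y + m[8] * z + m[12]) / w,
    (m[1] * x + m[5] * y + m[9] * z + m[13]) / w,
    (m[2] * x + m[6] * y + m[10] * z + m[14]) / w,
  ];
}
```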
- Preferably, the step of performing lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data comprises:
- obtaining distortion parameters in accordance with the following equations (7) and (8):

[Equations (7) and (8) are not reproduced in this text.]

- obtaining the corrected image field coordinates (x_u, y_u) as the second coordinate data in accordance with the distortion parameters by using equations (9) and (10), in which all terms containing P can be removed when tangential distortion correction is not performed:
x_u = x_d + (x_d − x_c)(K_1 r^2 + K_2 r^4 + …) + (P_1(r^2 + 2(x_d − x_c)^2) + 2 P_2(x_d − x_c)(y_d − y_c))(1 + P_3 r^2 + P_4 r^4 + …)   (9)

y_u = y_d + (y_d − y_c)(K_1 r^2 + K_2 r^4 + …) + (2 P_1(x_d − x_c)(y_d − y_c) + P_2(r^2 + 2(y_d − y_c)^2))(1 + P_3 r^2 + P_4 r^4 + …)   (10)

- wherein (x_d, y_d) are the distorted image field coordinates after lens projection, i.e., the first coordinate data; (x_u, y_u) are the corrected image field coordinates; (x_c, y_c) is the center position of lens distortion; K_n is the n-th radial distortion coefficient; P_n is the n-th tangential distortion coefficient; and r is the distance from a pixel to the optical axis.
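- The correction is mechanical to transcribe. The sketch below keeps two radial and two tangential coefficients and drops the higher-order terms behind the ellipses, including the trailing (1 + P_3 r^2 + …) factor; coefficient values would come from lens calibration and are not given in the text.

```typescript
// Equations (9) and (10), truncated after K2 and P2. With P1 = P2 = 0 the
// tangential terms vanish, matching the radial-only case mentioned above.
function undistort(
  xd: number, yd: number, // distorted coordinates (first coordinate data)
  xc: number, yc: number, // center position of lens distortion
  K1: number, K2: number, // radial distortion coefficients
  P1 = 0, P2 = 0,         // tangential distortion coefficients
): [number, number] {
  const dx = xd - xc, dy = yd - yc;
  const r2 = dx * dx + dy * dy; // squared radius used by the radial term
  const radial = K1 * r2 + K2 * r2 * r2;
  const xu = xd + dx * radial + P1 * (r2 + 2 * dx * dx) + 2 * P2 * dx * dy;
  const yu = yd + dy * radial + 2 * P1 * dx * dy + P2 * (r2 + 2 * dy * dy);
  return [xu, yu];
}
```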
- Preferably, the coordinates of the center position of lens distortion are obtained by the following steps:
- performing linear interpolation between two vectors based on t by using the following equation:

lerp(t, x_l, x_h, y_l, y_h) = y_l + (t − x_l)(y_h − y_l) / (x_h − x_l)   (11)

- wherein (x_l, y_l) and (x_h, y_h) are two coordinate points in a plane;
- calculating the coordinates (x_center^window_pixel, y_center^window_pixel) of the center position of lens distortion according to the projection matrix mat4_projection and the screen size width_window * height_window by using the following equations:

(x_center^normal, y_center^normal) = mat4_projection * [0 0 −1 0]^T

x_center^window_pixel = lerp(x_center^normal, −1, 1, 0, width_window)

y_center^window_pixel = lerp(y_center^normal, −1, 1, 0, height_window)   (12)

- wherein the coordinate (x_center^normal, y_center^normal) is a point on the normalized coordinate axis of [−1, 1].
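- Equations (11) and (12) in code, assuming a column-major mat4_projection as WebGL stores it; multiplying by [0 0 −1 0]^T simply reads out the negated third column of the matrix.

```typescript
// Equation (11): linearly remap t from the range [xl, xh] onto [yl, yh].
function lerp(t: number, xl: number, xh: number, yl: number, yh: number): number {
  return yl + ((t - xl) * (yh - yl)) / (xh - xl);
}

// Equation (12): project the direction [0, 0, -1, 0], then remap the
// normalized coordinates from [-1, 1] to window pixels.
function lensCenterPixels(projection: Float32Array, width: number, height: number): [number, number] {
  const xNormal = -projection[8]; // row 0 of the third column, negated
  const yNormal = -projection[9]; // row 1 of the third column, negated
  return [lerp(xNormal, -1, 1, 0, width), lerp(yNormal, -1, 1, 0, height)];
}
```

- For a symmetric projection the third column's x and y entries are zero, so the computed center falls at the middle of the window; an off-axis per-eye projection shifts it accordingly.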
- Preferably, the method further comprises: adding a blackout mask.
- Preferably, the method further comprises: acquiring real-time data from the gyroscope, and performing data smoothing and corner prediction while the VR video data is played, to achieve anti-shake.
- Preferably, the equation used for performing data smoothing is
θ_{t+1} = k(θ_t + ω Δt) + (1 − k) Ø   (13)

- where θ_t is the fusion rotation angle at time t, k is a fusion weight constant, ω is the angular velocity read by an accelerometer, Ø is the angle read from the gyroscope, and Δt is the difference between an output time moment and its previous time moment;
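- Equation (13) is a complementary filter that blends the integrated angular velocity with the directly read angle. A minimal transcription, with the value of k assumed:

```typescript
// Equation (13): theta_{t+1} = k * (theta_t + omega * dt) + (1 - k) * phi.
function smoothAngle(
  theta: number, // fusion rotation angle at time t
  omega: number, // angular velocity read by the accelerometer (rad/s)
  phi: number,   // angle read from the gyroscope (rad)
  dt: number,    // time since the previous output (s)
  k = 0.98,      // fusion weight constant (assumed value)
): number {
  return k * (theta + omega * dt) + (1 - k) * phi;
}
```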
- the equations used for corner prediction are:

[The corner-prediction equations are not reproduced in this text.]

- wherein θ_t is the fusion rotation angle at time t, angularSpeed is the angular velocity read by the accelerometer, predictionTimeS is a prediction time constant, and β is a rotation prediction threshold; the gyroscope and the accelerometer are provided on a head-up display device.
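- Since the corner-prediction equations are not reproduced in this text, the sketch below shows one common form consistent with the listed variables: extrapolate the fused angle by the angular speed over a fixed prediction time once the speed exceeds the threshold β. This is an assumption, not the patent's formula.

```typescript
// Assumed corner (rotation-angle) prediction: look ahead by predictionTimeS
// only when the head is actually rotating faster than the threshold beta.
function predictAngle(
  theta: number,          // fusion rotation angle at time t
  angularSpeed: number,   // angular velocity read by the accelerometer
  predictionTimeS = 0.03, // prediction time constant (assumed value)
  beta = 0.01,            // rotation prediction threshold (assumed value)
): number {
  if (Math.abs(angularSpeed) < beta) return theta; // below threshold: no prediction
  return theta + angularSpeed * predictionTimeS;
}
```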
- Preferably, the method further comprises: using relevant interfaces provided by OpenGL and WebGL to perform the corresponding steps.
- According to a second aspect of the disclosure, there is provided a system for real-time rendering and displaying virtual reality (VR) using head-up display devices, comprising:
- a parameter calculating unit configured to obtain relevant parameters including a camera matrix, a projection matrix, a model matrix and a center position of lens distortion;
- a model building unit configured to create a 3D model and obtain original coordinate data of the 3D model;
- a coordinate calculating unit configured to obtain first coordinate data in accordance with the relevant parameters and the original coordinate data of the 3D model;
- a lens distortion unit configured to perform lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data;
- a rasterization unit configured to rasterize the second coordinate data to obtain pixel information;
- an image drawing unit configured to draw an image based on the VR video data and the pixel information.
- According to the embodiment of the present disclosure, the binocular-mode VR immersive viewing effect is achieved by performing lens distortion on the coordinate data of the 3D model. Because the lens distortion is performed on the coordinate data of the 3D model itself, the video rendering and the immersive rendering can be realized in one processing pass, thereby improving rendering efficiency.
- The above and other objects, features and advantages of the present disclosure will become more apparent by describing the embodiments of the present disclosure with reference to the following drawings, in which:
-
FIG. 1 is a diagram illustrating an example network of a VR playback system; -
FIG. 2 is a flowchart diagram showing a method used in the VR playback system of FIG. 1 ; -
FIG. 3 is a flowchart diagram of a method for real-time rendering displaying virtual reality (VR) using head-up display devices according to an embodiment of the present disclosure; -
FIG. 4 is an example diagram of a head-up display device; -
FIG. 5 is a specific flowchart diagram showing the step of obtaining relevant parameters mentioned in the method for real-time rendering displaying virtual reality (VR) using head-up display devices described in FIG. 3 ; -
FIG. 6 is a schematic diagram of a parameter transfer process between a computer processor and a display chip; and -
FIG. 7 is a schematic diagram of a system for real-time rendering displaying virtual reality (VR) using head-up display devices, according to an embodiment of the present disclosure. - Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. In the drawings, like reference numerals denote like members. The figures are not drawn to scale, for the sake of clarity. Moreover, some well-known parts may not be shown.
-
FIG. 1 is a diagram illustrating an example network of a VR playback system. The VR playback system 10 includes a server 100 and a display device 120, which are coupled with each other through a network 110, and a VR device. For example, the server 100 may be a stand-alone computer server or a server cluster. The server 100 is used to store various video data and the applications that process the video data. For example, various daemons run on the server 100 in real time, so as to process the video data on the server 100 and to respond to various requests from VR devices and the display device 120. The network 110 may be one or more selected from the group consisting of the internet, a local area network, an internet of things, and the like. For example, the display device 120 may be any computing device having an independent display screen and processing capability. The display device 120 may be a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a palmtop computer, a personal digital assistant, a smart phone, an intelligent electrical apparatus, a game console, an iPad/iPhone, a video player, a DVD recorder/player, a television, or a home entertainment system. The display device 120 may store VR player software as a VR player. When the VR player is started, it requests and downloads video data from the server 100, and renders and plays the video data on the display device. In this example, the VR device 130 is a stand-alone head-up display device that can interact with the display device 120 and the server 100, to communicate the user's current information with the display device 120 and/or the server 100 through signaling. The user's current information includes, for example, parameters relevant to the user's field of view, the position of the user's helmet, and changes in the line of sight of the eyes. According to this information, the display device 120 can flexibly process the currently played video data. In some embodiments, when a user turns his head, the display device 120 determines that the core viewing region for the user has changed and starts to play video data with high resolution in the changed core viewing region. - In the above embodiment, the VR device 130 is a stand-alone head-up display device. However, those skilled in the art should understand that the VR device 130 is not limited thereto, and the VR device 130 may also be an all-in-one head-up display device. The all-in-one head-up display device itself has a display screen, so it is not necessary to connect it to an external display device. For example, if the all-in-one head-up display device is used as the VR device, the display device 120 may be omitted. In that case, the all-in-one head-up display device is configured to obtain video data from the server 100 and to perform the playback operation, and is also configured to detect changes in the user's current viewing angle and to adjust the playback operation accordingly. -
FIG. 2 is a flowchart diagram showing a method used in the VR playback system of FIG. 1 . The method includes the following steps. - In step S10, a video data processing procedure runs on the server.
- In step S20, the display device obtains relevant information by interacting with the VR device.
- In step S30, according to the relevant information, the display device requests the server to provide video data and receives the video data.
- In step S40, the display device renders the received video data.
- In this step, the video data obtained from the server is used to draw an image, i.e., the video data is played.
-
FIG. 3 is a flowchart diagram of a method for real-time rendering displaying virtual reality (VR) using head-up display devices according to an embodiment of the present disclosure. The method implements playing the video data in binocular mode. The method includes the following steps. - In step S100, relevant parameters are obtained.
- For example, the relevant parameters are calculated based on specification of a head-up display device and a screen size. The relevant parameters include parameters for field of view of left and right lenses, a camera matrix, a projection matrix, a model matrix and a center position of lens distortion. Referring to
FIG. 4 , FIG. 4 is an example diagram of a head-up display device. As shown in the figure, the head-up display device includes a stand and left and right lenses on the stand, and the human eyes obtain images from the left and right view areas through the left and right lenses. Because the left and right view areas provide slightly different images, the human mind, after combining the differing information, produces a three-dimensional sense. Different types of head-up display devices have different specifications and parameters; generally, these can be obtained by querying websites or built-in parameter files, and the relevant parameters required in the rendering process can then be calculated from the specifications and parameters. - In step S200, a 3D model is built, and the original coordinate data of the 3D model is obtained.
- In this step, a suitable 3D model can be created in accordance with requirements. For example, a polygonal sphere can be created as the 3D model and the original coordinate data can be obtained based on the polygonal sphere.
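- For illustration only, the following is a minimal sketch (not taken from the disclosure) of how such a polygonal sphere and its original coordinate data might be generated; the function and parameter names are hypothetical:

```typescript
// Hedged sketch (illustrative, not the patent's implementation): build a
// polygonal sphere, producing vertex positions, UV coordinates, and
// triangle indices that can serve as the original coordinate data.
function buildSphere(rows: number, cols: number, radius: number) {
  const positions: number[] = [];
  const uvs: number[] = [];
  const indices: number[] = [];
  for (let r = 0; r <= rows; r++) {
    const phi = (r / rows) * Math.PI;          // latitude angle, pole to pole
    for (let c = 0; c <= cols; c++) {
      const theta = (c / cols) * 2 * Math.PI;  // longitude angle around the axis
      positions.push(
        radius * Math.sin(phi) * Math.cos(theta),
        radius * Math.cos(phi),
        radius * Math.sin(phi) * Math.sin(theta),
      );
      uvs.push(c / cols, r / rows);
    }
  }
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      const i = r * (cols + 1) + c;            // two triangles per quad
      indices.push(i, i + cols + 1, i + 1, i + 1, i + cols + 1, i + cols + 2);
    }
  }
  return { positions, uvs, indices };
}
```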
- In step S300, first coordinate data is obtained in accordance with the relevant parameters and the original coordinate data of the 3D model.
- In step S400, lens distortion is performed on the first coordinate data based on the center position of lens distortion to obtain second coordinate data.
- In step S300, vector calculation on the original coordinate data is performed in accordance with the camera matrix, the projection matrix and the model matrix to obtain the calculated coordinate data as the first coordinate data, and in step S400, the first coordinate data is further distorted to obtain the second coordinate data.
- In step S500, the second coordinate data is rasterized to obtain pixel information.
- In this step, the second coordinate data is processed into pixel information on a plane.
- In step S600, an image is drawn based on a VR video data and the pixel information.
- In this step, the VR video data downloaded from the server is decoded to obtain the pixel values therein, the pixel values are assigned to the rasterized pixels in accordance with the pixel information obtained in step S500, and finally the image is drawn.
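- As an illustration of this step, the sketch below uploads the current frame of a decoded video element as a WebGL texture that the rasterized pixels can sample; this is a common WebGL pattern, given here as an assumption rather than the disclosure's exact implementation:

```typescript
// Hedged sketch: upload the current frame of an HTML <video> element as a
// WebGL texture so the drawn image can sample the decoded pixel values.
function uploadVideoFrame(gl: WebGLRenderingContext, video: HTMLVideoElement): WebGLTexture {
  const tex = gl.createTexture();
  if (!tex) throw new Error("failed to create texture");
  gl.bindTexture(gl.TEXTURE_2D, tex);
  // Video frames are usually not power-of-two sized: clamp and filter linearly.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
  return tex;
}
```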
- In this embodiment, the original coordinate data of the 3D model is lens-distorted and the pixel information is then assigned to the distorted coordinate data, so as to achieve binocular-mode viewing effects. Because the lens distortion is performed while the 3D model is being processed, the video rendering and the binocular-mode rendering are implemented in one pass, which is equivalent to doubling the rendering efficiency of the existing scheme. Further, because the original coordinate data of the 3D model is lens-distorted in accordance with the relevant parameters obtained from information such as the specification of the head-up display device and the screen size, the lens distortion effect can be adjusted by adjusting the relevant parameters to achieve a better rendering effect.
- In a preferred embodiment, in order to prevent a user from feeling dizzy during immersive viewing, the above method further includes: obtaining real-time data from the gyroscope and performing data smoothing and corner prediction while the VR video data is played to achieve anti-shake.
- In another preferred embodiment, the above method further includes adding a blackout mask. The blackout mask can be seen in
FIG. 6 ; adding the blackout mask can improve the immersive effect of VR viewing. - It should be noted that some steps described in the embodiments of the present disclosure may be implemented by calling relevant interfaces provided by OpenGL and/or WebGL. However, the corresponding functions of OpenGL and WebGL are mainly implemented by the display chip, while the calculation of the relevant parameters such as the projection matrix and the camera matrix is performed by the computer processor; thus, when the projection matrix and the camera matrix are transferred to OpenGL and/or WebGL, data transmission is required. The details can be understood with reference to
FIG. 6 . -
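- For illustration, a minimal sketch of that hand-off in WebGL terms: matrices computed on the CPU are transmitted to the display chip as shader uniforms (the uniform names here are hypothetical):

```typescript
// Hedged sketch: transmit CPU-computed matrices to the display chip as
// shader uniforms; uniformMatrix4fv performs the actual data transfer.
function setMatrices(
  gl: WebGLRenderingContext,
  program: WebGLProgram,
  model: Float32Array,       // mat4model, 16 floats, column-major
  view: Float32Array,        // mat4view (left_view_matrix or right_view_matrix)
  projection: Float32Array,  // mat4projection
): void {
  gl.useProgram(program);
  gl.uniformMatrix4fv(gl.getUniformLocation(program, "uModel"), false, model);
  gl.uniformMatrix4fv(gl.getUniformLocation(program, "uView"), false, view);
  gl.uniformMatrix4fv(gl.getUniformLocation(program, "uProjection"), false, projection);
}
```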
FIG. 5 is a specific flowchart diagram showing the step of obtaining relevant parameters mentioned in the method for real-time rendering displaying virtual reality (VR) using head-up display devices described in FIG. 3 . The method includes the following steps.
- In step S102, the eye distance parameter is obtained according to the specification of the head-up display device.
- In step S103, the model matrix is obtained.
- In step S104, the camera matrix is calculated.
- In step S105, the center position of lens distortion is calculated.
- In step S106, the projection matrix is calculated.
- The center position of lens distortion and eye distance can refer to
FIG. 4 . - To further explain the above steps, a specific calculation step is provided in the following example.
- Table 1 is a variable definition table.
-
TABLE 1 variable meaning Px, y, z original original coordinate of each point in the 3D model Px, y, z MVP calculated coordinate mat4model model matrix mat4view camera matrix mat4projection projection matrix - The first coordinate data can be calculated by the following equation:
- 1) The camera matrices for left and right eyes in binocular mode can be calculated by equations (1) to (4):
-
- Among them, mat4view represents a camera matrix, which can be generated directly in accordance with rotation angles of a gyro, left_view_matrix and right_view_matrix are respectively camera matrices for left and right eyes, eye_ipd represents the eye distance parameters;
- 2) the projection matrix mat4projection in binocular mode is calculated by using equation (5):
-
- where, fovleft, fovright, fovup, fovdown, far,near represent parameters relevant to field of view in binocular mode.
- 3) the model matrix mat4model is to be a unit matrix;
- 4) the first coordinate data Px,y,z MVP is calculated by using equation (6):
-
P x,y,z MVP=mat4model*mat4view*mat4projection *P x,y,z original (6) - Px,y,z MVP represents the first coordinate data, Px,y,z original represents the original coordinate data, mat4model represents the model matrix, and mat4projection represents the projection matrix, the camera matrices left_view_matrix and right_view_matrix for left and right eyes are respectively provided into the equation (6) instead of mat4view to obtain the first coordinate data Px,y,z MVP.
- The above step S400 can refer to the calculation steps in the following example.
-
TABLE 2
variable | meaning
---|---
(xd, yd) | distorted image field coordinates after lens projection
(xu, yu) | corrected image field coordinates (i.e., using an ideal pinhole camera)
(xc, yc) | distortion center coordinates (i.e., the center position of lens distortion according to the disclosure)
Kn | the nth radial distortion coefficient
Pn | the nth tangential distortion coefficient
r | the distance from a pixel to the optical axis
- 1) the distortion parameters are obtained based on the specification of the lenses of the head-up display device:
-
K1, K2   (7) - which are combined with an auxiliary equation (8) to obtain the distortion parameters and anti-distortion parameters,
-
- 2) lens distortion is performed based on the Brown model.
-
xu = xd + (xd − xc)(K1·r² + K2·r⁴ + …) + (P1(r² + 2(xd − xc)²) + 2P2(xd − xc)(yd − yc))(1 + P3·r² + P4·r⁴ + …)   (9)
-
yu = yd + (yd − yc)(K1·r² + K2·r⁴ + …) + (2P1(xd − xc)(yd − yc) + P2(r² + 2(yd − yc)²))(1 + P3·r² + P4·r⁴ + …)   (10)
- When tangential distortion correction is not performed, all terms containing P can be removed.
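- A minimal sketch of equations (9) and (10) with the tangential (P) terms removed, as the text permits; only the first two radial coefficients are used here:

```typescript
// Hedged sketch of equations (9)/(10), radial terms only: correct a distorted
// point (xd, yd) about the distortion center (xc, yc) with coefficients K1, K2.
function undistort(
  xd: number, yd: number,
  xc: number, yc: number,
  K1: number, K2: number,
): { xu: number; yu: number } {
  const dx = xd - xc;
  const dy = yd - yc;
  const r2 = dx * dx + dy * dy; // squared distance to the distortion center
  const radial = K1 * r2 + K2 * r2 * r2;
  return { xu: xd + dx * radial, yu: yd + dy * radial };
}
```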
- The coordinate of the center position of lens distortion can be solved by the following equations.
- For two coordinate points (xl, yl) and (xh, yh) in a plane, linear interpolation between two vectors can be performed in accordance with t by using the equation lerp:
-
lerp(t, xl, xh, yl, yh) = yl + ((t − xl)/(xh − xl))·(yh − yl)   (11)
- By using the following equations, the coordinate (x_center_window_pixel, y_center_window_pixel) of the center position of lens distortion can be solved according to the projection matrix mat4projection and the screen size widthwindow*heightwindow, where the coordinate (x_center_normal, y_center_normal) is a point in the space coordinate axis of [−1, 1].
-
(x_center_normal, y_center_normal) = mat4projection · [0, 0, −1, 0]^T
-
x_center_window_pixel = lerp(x_center_normal, −1, 1, 0, widthwindow)
-
y_center_window_pixel = lerp(y_center_normal, −1, 1, 0, heightwindow)   (12)
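- For illustration, a minimal sketch of the lerp remap and the center computation of equations (11) and (12); the column-major indexing below is an assumption about the matrix layout:

```typescript
// Hedged sketch of equations (11)/(12): lerp remaps t from [xl, xh] to
// [yl, yh]; the distortion center is [0, 0, -1, 0] pushed through the
// projection matrix, then mapped from [-1, 1] NDC to window pixels.
function lerp(t: number, xl: number, xh: number, yl: number, yh: number): number {
  return yl + ((t - xl) / (xh - xl)) * (yh - yl);
}

function distortionCenter(proj: Float32Array, width: number, height: number) {
  // Column-major mat4 times [0, 0, -1, 0]^T is the negated third column.
  const xNormal = -proj[8];
  const yNormal = -proj[9];
  return {
    x: lerp(xNormal, -1, 1, 0, width),
    y: lerp(yNormal, -1, 1, 0, height),
  };
}
```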
-
- The steps of data smoothing and corner prediction in the above embodiments can refer to the following description.
-
TABLE 3
variable | meaning
---|---
θt | fusion rotation angle based on time t
k | fusion weight constant
ω | angular velocity read by an accelerometer (the accelerometer is provided on the head-up display device)
Ø | angle read from the gyroscope
Δt | difference between the output time moment and the previous time moment
- The equation for data smoothing is:
-
θt+1 = k(θt + ω·Δt) + (1 − k)·Ø   (13)
-
TABLE 4
variable | meaning
---|---
θt | fusion rotation angle based on time t
angularSpeed | angular velocity read by an accelerometer (for example, the accelerometer provided on the head-up display device)
predictionTimeS | prediction time (a constant)
β | threshold value for rotation prediction
- The equation for corner prediction is
-
-
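- The corner-prediction equation itself is not reproduced above. The sketch below implements the complementary filter of equation (13) and, purely as an assumption consistent with Table 4's variables, a simple velocity-based look-ahead gated by the threshold β:

```typescript
// Hedged sketch: equation (13) as written, plus an ASSUMED prediction step
// (the patent's exact corner-prediction formula is not reproduced here).
function smooth(thetaT: number, omega: number, dt: number, phi: number, k: number): number {
  // theta_{t+1} = k * (theta_t + omega * dt) + (1 - k) * phi  -- equation (13)
  return k * (thetaT + omega * dt) + (1 - k) * phi;
}

function predictAngle(
  thetaT: number, angularSpeed: number,
  predictionTimeS: number, beta: number,
): number {
  // Assumed form: look ahead by predictionTimeS only when the angular
  // speed exceeds the rotation-prediction threshold beta.
  return Math.abs(angularSpeed) > beta
    ? thetaT + angularSpeed * predictionTimeS
    : thetaT;
}
```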
FIG. 7 is a schematic diagram of a system for real-time rendering displaying virtual reality (VR) using head-up display devices, according to an embodiment of the present disclosure. - The system includes a parameter calculating unit 701, a model building unit 702, a coordinate calculating unit 703, a lens distortion unit 704, a rasterization unit 705, and an image drawing unit 706. - The parameter calculating unit 701 is configured to obtain relevant parameters including a camera matrix, a projection matrix, a model matrix and a center position of lens distortion. - The model building unit 702 is configured to create a 3D model and obtain the original coordinate data of the 3D model. For example, the 3D model can be created based on WebGL and initialized to obtain UV coordinates. - The coordinate calculating unit 703 is configured to obtain the first coordinate data according to the relevant parameters and the original coordinate data of the 3D model. That is, the first coordinate data is obtained by performing calculation based on the relevant parameters and the original coordinate data of the 3D model. - The lens distortion unit 704 is configured to perform lens distortion on the first coordinate data based on the center position of lens distortion to obtain the second coordinate data. That is, the first coordinate data is distorted according to the center positions of the left and right lenses to obtain the second coordinate data. - The rasterization unit 705 is configured to rasterize the second coordinate data to obtain pixel information. - The image drawing unit 706 is configured to draw an image based on the VR video data and the pixel information.
- Although the embodiments of the present disclosure have been described above with reference to the preferred embodiments, it is not intended to limit the claims. Any modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the present disclosure, Therefore, the protection scope of the present disclosure should be based on the scope of the claims of the present disclosure.
- The foregoing descriptions of specific embodiments of the present disclosure have been presented, but are not intended to limit the disclosure to the precise forms disclosed. It will be readily apparent to one skilled in the art that many modifications and changes may be made in the present disclosure. Any modifications, equivalence, variations of the preferred embodiments can be made without departing from the doctrine and spirit of the present disclosure.
Claims (12)
1. A method for real-time rendering displaying virtual reality (VR) using head-up display devices, comprising:
obtaining relevant parameters including a camera matrix, a projection matrix, a model matrix and a center position of lens distortion;
creating a 3D model and obtaining original coordinate data of the 3D model;
obtaining first coordinate data in accordance with the relevant parameters and the original coordinate data of the 3D model;
performing lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data;
rasterizing the second coordinate data to obtain pixel information; and
drawing an image in accordance with VR video data and the pixel information.
2. The method according to claim 1 , wherein the step of obtaining relevant parameters comprises:
obtaining parameters relevant to field of view in accordance with specification of a head-up display device and a screen size;
calculating the center position of lens distortion in accordance with the parameters relevant to field of view; and
calculating the projection matrix in accordance with the parameters relevant to field of view.
3. The method according to claim 1 , wherein the step of obtaining relevant parameters comprises:
obtaining an eye distance parameter based on specification of a head-up display device; and
calculating the camera matrix in accordance with the eye distance parameter.
4. The method according to claim 1 , wherein the camera matrix and the projection matrix are adjusted to achieve binocular-mode viewing effects.
5. The method according to claim 1 , wherein the step of obtaining first coordinate data in accordance with the relevant parameters and the original coordinate data of the 3D model comprises:
calculating camera matrices for left and right eyes in binocular mode by equations (1) to (4):
wherein, left_view_matrix and right_view_matrix represent respectively a camera matrix for left eye and a camera matrix for right eye, mat4_view is the camera matrix which can be generated directly in accordance with rotation angles of a gyro, and eye_ipd represents the eye distance parameter;
calculating the projection matrix mat4projection in binocular mode by equation (5),
wherein fovleft, fovright, fovup, fovdown, far, near represent the parameters relevant to field of view;
setting mat4model to be an identity matrix;
calculating the first coordinate data Px,y,z MVP by equation (6),
Px,y,z MVP = mat4model * mat4view * mat4projection * Px,y,z original   (6)
wherein Px,y,z MVP represents the first coordinate data, Px,y,z original represents the original coordinate data, mat4model represents the model matrix, and mat4projection represents the projection matrix; the camera matrices left_view_matrix and right_view_matrix for the left and right eyes are respectively substituted into equation (6) in place of mat4view to obtain the first coordinate data Px,y,z MVP.
6. The method according to claim 1 , wherein the step of performing lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data comprises:
obtaining distortion parameters in accordance with following equations (7) and (8):
obtaining corrected image field coordinates (xu, yu) as the second coordinate data in accordance with the distortion parameters by using equations (9) and (10), in which all terms containing P can be removed when tangential distortion correction is not performed,
xu = xd + (xd − xc)(K1·r² + K2·r⁴ + …) + (P1(r² + 2(xd − xc)²) + 2P2(xd − xc)(yd − yc))(1 + P3·r² + P4·r⁴ + …)   (9)
yu = yd + (yd − yc)(K1·r² + K2·r⁴ + …) + (2P1(xd − xc)(yd − yc) + P2(r² + 2(yd − yc)²))(1 + P3·r² + P4·r⁴ + …)   (10)
wherein (xd, yd) is the distorted image field coordinates after lens projection, i.e., the first coordinate data, (xu, yu) is the corrected image field coordinates, (xc, yc) is the center position of lens distortion, Kn is the nth radial distortion coefficient, Pn is the nth tangential distortion coefficient, and r is the distance from a pixel to the optical axis.
7. The method according to claim 1 , wherein the coordinate of the center position of lens distortion is obtained by the following steps,
performing linear interpolation between two vectors based on t using following equation:
wherein, (xl,yl) and (xh,yh) are two coordinate points in a plane;
calculating the coordinate (x_center_window_pixel, y_center_window_pixel) of the center position of lens distortion according to the projection matrix mat4projection and the screen size widthwindow*heightwindow by using the following equations:
(x_center_normal, y_center_normal) = mat4projection · [0, 0, −1, 0]^T
x_center_window_pixel = lerp(x_center_normal, −1, 1, 0, widthwindow)
y_center_window_pixel = lerp(y_center_normal, −1, 1, 0, heightwindow)   (12)
wherein the coordinate (x_center_normal, y_center_normal) is a point in the space coordinate axis of [−1, 1].
8. The method according to claim 1 , further comprising: adding a blackout mask.
9. The method according to claim 1 , further comprising: acquiring real-time data from a gyroscope, and performing data smoothing and corner prediction while the VR video data is played to achieve anti-shake.
10. The method according to claim 1 , wherein the equation used for performing data smoothing is
θt+1 = k(θt + ω·Δt) + (1 − k)·Ø   (13)
wherein θt is a fusion rotation angle based on time t, k is a fusion weight constant, ω is an angular velocity read by an accelerometer, Ø is an angle read from the gyroscope, and Δt is the difference between an output time moment and its previous time moment;
the equation used for corner prediction is:
wherein θt is a fusion rotation angle based on time t, angularSpeed is an angular velocity read by the accelerometer, predictionTimeS is a prediction time constant, and β is a rotation prediction threshold; the gyroscope and the accelerometer are provided on a head-up display device.
11. The method according to claim 1 , further comprising: using relevant interfaces provided by OpenGL and WebGL to complete corresponding steps.
12. A system for real-time rendering displaying virtual reality (VR) using head-up display devices, comprising:
a parameter calculating unit configured to obtain relevant parameters including a camera matrix, a projection matrix, a model matrix and a center position of lens distortion;
a model building unit configured to create a 3D model and obtain original coordinate data of the 3D model;
a coordinate calculating unit configured to obtain first coordinate data in accordance with the relevant parameters and the original coordinate data of the 3D model;
a lens distortion unit configured to perform lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data;
a rasterization unit configured to rasterize the second coordinate data to obtain pixel information;
an image drawing unit configured to draw an image based on VR video data and the pixel information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/860,471 US20180192022A1 (en) | 2017-01-03 | 2018-01-02 | Method and System for Real-time Rendering Displaying Virtual Reality (VR) On Mobile Using Head-Up Display Devices |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762441936P | 2017-01-03 | 2017-01-03 | |
US15/860,471 US20180192022A1 (en) | 2017-01-03 | 2018-01-02 | Method and System for Real-time Rendering Displaying Virtual Reality (VR) On Mobile Using Head-Up Display Devices |
Publications (1)
Publication Number | Publication Date |
---|---|
US20180192022A1 true US20180192022A1 (en) | 2018-07-05 |
Family
ID=62711388
Family Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/860,471 Abandoned US20180192022A1 (en) | 2017-01-03 | 2018-01-02 | Method and System for Real-time Rendering Displaying Virtual Reality (VR) On Mobile Using Head-Up Display Devices |
US15/860,449 Expired - Fee Related US10334238B2 (en) | 2017-01-03 | 2018-01-02 | Method and system for real-time rendering displaying high resolution virtual reality (VR) video |
US15/860,392 Abandoned US20180192044A1 (en) | 2017-01-03 | 2018-01-02 | Method and System for Providing A Viewport Division Scheme for Virtual Reality (VR) Video Streaming |
US15/860,430 Abandoned US20180191868A1 (en) | 2017-01-03 | 2018-01-02 | Method and System for Downloading Multiple Resolutions Bitrate for Virtual Reality (VR) Video Streaming Optimization |
US15/860,358 Abandoned US20180192063A1 (en) | 2017-01-03 | 2018-01-02 | Method and System for Virtual Reality (VR) Video Transcode By Extracting Residual From Different Resolutions |
US15/860,494 Abandoned US20180189980A1 (en) | 2017-01-03 | 2018-01-02 | Method and System for Providing Virtual Reality (VR) Video Transcoding and Broadcasting |
Family Applications After (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/860,449 Expired - Fee Related US10334238B2 (en) | 2017-01-03 | 2018-01-02 | Method and system for real-time rendering displaying high resolution virtual reality (VR) video |
US15/860,392 Abandoned US20180192044A1 (en) | 2017-01-03 | 2018-01-02 | Method and System for Providing A Viewport Division Scheme for Virtual Reality (VR) Video Streaming |
US15/860,430 Abandoned US20180191868A1 (en) | 2017-01-03 | 2018-01-02 | Method and System for Downloading Multiple Resolutions Bitrate for Virtual Reality (VR) Video Streaming Optimization |
US15/860,358 Abandoned US20180192063A1 (en) | 2017-01-03 | 2018-01-02 | Method and System for Virtual Reality (VR) Video Transcode By Extracting Residual From Different Resolutions |
US15/860,494 Abandoned US20180189980A1 (en) | 2017-01-03 | 2018-01-02 | Method and System for Providing Virtual Reality (VR) Video Transcoding and Broadcasting |
Country Status (2)
Country | Link |
---|---|
US (6) | US20180192022A1 (en) |
CN (6) | CN108366293A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110336994A (en) * | 2019-07-04 | 2019-10-15 | 上海索倍信息科技有限公司 | A kind of naked eye 3D display system |
US11436787B2 (en) * | 2018-03-27 | 2022-09-06 | Beijing Boe Optoelectronics Technology Co., Ltd. | Rendering method, computer product and display apparatus |
US20240031676A1 (en) * | 2021-12-02 | 2024-01-25 | Fotonation Limited | Method And System For Camera Motion Blur Reduction |
Families Citing this family (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10291910B2 (en) * | 2016-02-12 | 2019-05-14 | Gopro, Inc. | Systems and methods for spatially adaptive video encoding |
US10331862B2 (en) * | 2017-04-20 | 2019-06-25 | Cisco Technology, Inc. | Viewport decryption |
US11232532B2 (en) * | 2018-05-30 | 2022-01-25 | Sony Interactive Entertainment LLC | Multi-server cloud virtual reality (VR) streaming |
US10623791B2 (en) | 2018-06-01 | 2020-04-14 | At&T Intellectual Property I, L.P. | Field of view prediction in live panoramic video streaming |
US10812774B2 (en) | 2018-06-06 | 2020-10-20 | At&T Intellectual Property I, L.P. | Methods and devices for adapting the rate of video content streaming |
US10616621B2 (en) * | 2018-06-29 | 2020-04-07 | At&T Intellectual Property I, L.P. | Methods and devices for determining multipath routing for panoramic video content |
US11019361B2 (en) | 2018-08-13 | 2021-05-25 | At&T Intellectual Property I, L.P. | Methods, systems and devices for adjusting panoramic view of a camera for capturing video content |
CN109343518B (en) * | 2018-09-03 | 2021-07-02 | 浙江大丰实业股份有限公司 | On-spot drive platform of universal ride |
US11128869B1 (en) * | 2018-10-22 | 2021-09-21 | Bitmovin, Inc. | Video encoding based on customized bitrate table |
CN109375369B (en) * | 2018-11-23 | 2021-05-18 | 国网天津市电力公司 | Distortion preprocessing method in VR (virtual reality) large-screen cinema mode |
CN111510777B (en) * | 2019-01-30 | 2021-11-23 | 上海哔哩哔哩科技有限公司 | Method and device for measuring network speed, computer equipment and readable storage medium |
CN111669666A (en) * | 2019-03-08 | 2020-09-15 | 北京京东尚科信息技术有限公司 | Method, device and system for simulating reality |
CN111866485A (en) * | 2019-04-25 | 2020-10-30 | 中国移动通信有限公司研究院 | Stereoscopic picture projection and transmission method, device and computer readable storage medium |
CN110381331A (en) * | 2019-07-23 | 2019-10-25 | 深圳市道通智能航空技术有限公司 | A kind of image processing method, device, equipment of taking photo by plane and storage medium |
CN110490962B (en) * | 2019-08-20 | 2023-09-15 | 武汉邦拓信息科技有限公司 | Remote rendering method based on video stream |
CN110544425A (en) * | 2019-09-13 | 2019-12-06 | 广州城市职业学院 | ancient building VR display system |
CN111489428B (en) * | 2020-04-20 | 2023-06-30 | 北京字节跳动网络技术有限公司 | Image generation method, device, electronic equipment and computer readable storage medium |
US11245911B1 (en) * | 2020-05-12 | 2022-02-08 | Whirlwind 3D, LLC | Video encoder/decoder (codec) for real-time applications and size/b and width reduction |
CN111754614B (en) * | 2020-06-30 | 2024-07-02 | 平安国际智慧城市科技股份有限公司 | VR-based video rendering method and device, electronic equipment and storage medium |
CN112468806B (en) * | 2020-11-12 | 2022-07-26 | 中山大学 | Panoramic video transmission optimization method for cloud VR platform |
CN114286142B (en) * | 2021-01-18 | 2023-03-28 | 海信视像科技股份有限公司 | Virtual reality equipment and VR scene screen capturing method |
CN113347402A (en) * | 2021-06-28 | 2021-09-03 | 筑友建筑装饰装修工程有限公司 | Improved method, device and storage medium for rendering immersive content based on Unity |
CN114466220A (en) * | 2022-01-29 | 2022-05-10 | 维沃移动通信有限公司 | Video downloading method and electronic equipment |
CN115002519A (en) * | 2022-05-31 | 2022-09-02 | 北京势也网络技术有限公司 | Method for playing 8K panoramic video file in low-bandwidth network |
CN115396731A (en) * | 2022-08-10 | 2022-11-25 | 北京势也网络技术有限公司 | Panoramic video playing method and device, electronic equipment and readable storage medium |
CN116880723B (en) * | 2023-09-08 | 2023-11-17 | 江西格如灵科技股份有限公司 | 3D scene display method and system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104240281A (en) * | 2014-08-28 | 2014-12-24 | 东华大学 | Virtual reality head-mounted device based on Unity3D engine |
US20160381256A1 (en) * | 2015-06-25 | 2016-12-29 | EchoPixel, Inc. | Dynamic Minimally Invasive Surgical-Aware Assistant |
US20170289214A1 (en) * | 2016-04-04 | 2017-10-05 | Hanwha Techwin Co., Ltd. | Method and apparatus for playing media stream on web browser |
Family Cites Families (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3196889B2 (en) * | 1996-09-05 | 2001-08-06 | 株式会社アルファ | Three-dimensional image processing method and computer-readable recording medium storing a program for causing a computer to execute the three-dimensional image processing method |
TWI262725B (en) * | 2005-06-30 | 2006-09-21 | Cheertek Inc | Video decoding apparatus and digital audio and video display system capable of controlling presentation of subtitles and method thereof |
US8897370B1 (en) * | 2009-11-30 | 2014-11-25 | Google Inc. | Bitrate video transcoding based on video coding complexity estimation |
US8862763B2 (en) * | 2011-03-30 | 2014-10-14 | Verizon Patent And Licensing Inc. | Downloading video using excess bandwidth |
US8907968B2 (en) * | 2011-03-31 | 2014-12-09 | Panasonic Corporation | Image rendering device, image rendering method, and image rendering program for rendering stereoscopic panoramic images |
US8810598B2 (en) * | 2011-04-08 | 2014-08-19 | Nant Holdings Ip, Llc | Interference based augmented reality hosting platforms |
GB2501929B (en) * | 2012-05-11 | 2015-06-24 | Sony Comp Entertainment Europe | Apparatus and method for augmented reality |
WO2014025319A1 (en) * | 2012-08-08 | 2014-02-13 | National University Of Singapore | System and method for enabling user control of live video stream(s) |
US9129429B2 (en) * | 2012-10-24 | 2015-09-08 | Exelis, Inc. | Augmented reality on wireless mobile devices |
GB2509953B (en) * | 2013-01-18 | 2015-05-20 | Canon Kk | Method of displaying a region of interest in a video stream |
US9196199B2 (en) * | 2013-02-12 | 2015-11-24 | Pixtronix, Inc. | Display having staggered display element arrangement |
CN103702139B (en) * | 2013-12-13 | 2017-02-01 | 华中科技大学 | Video-on-demand system based on scalable coding under mobile environment |
US9398250B2 (en) * | 2014-01-06 | 2016-07-19 | Arun Sobti & Associates, Llc | System and apparatus for smart devices based conferencing |
CN105025351B (en) * | 2014-04-30 | 2018-06-29 | 深圳Tcl新技术有限公司 | The method and device of DST PLAYER buffering |
JP6337614B2 (en) * | 2014-05-23 | 2018-06-06 | セイコーエプソン株式会社 | Control device, robot, and control method |
US20150346812A1 (en) * | 2014-05-29 | 2015-12-03 | Nextvr Inc. | Methods and apparatus for receiving content and/or playing back content |
CN104268922B (en) * | 2014-09-03 | 2017-06-06 | 广州博冠信息科技有限公司 | A kind of image rendering method and image rendering device |
US10812546B2 (en) * | 2014-12-24 | 2020-10-20 | Intel IP Corporation | Link-aware streaming adaptation |
CN104616243B (en) * | 2015-01-20 | 2018-02-27 | 北京道和汇通科技发展有限公司 | A kind of efficient GPU 3 D videos fusion method for drafting |
US20160261908A1 (en) * | 2015-03-05 | 2016-09-08 | Htc Corporation | Media streaming system and control method thereof |
CN104735464A (en) * | 2015-03-31 | 2015-06-24 | 华为技术有限公司 | Panorama video interactive transmission method, server and client end |
CN104717507A (en) * | 2015-03-31 | 2015-06-17 | 北京奇艺世纪科技有限公司 | Video transcoding method and device |
US10083363B2 (en) * | 2015-05-26 | 2018-09-25 | Nbcuniversal Media, Llc | System and method for customizing content for a user |
US10102666B2 (en) * | 2015-06-12 | 2018-10-16 | Google Llc | Electronic display stabilization for head mounted display |
US10674185B2 (en) * | 2015-10-08 | 2020-06-02 | Koninklijke Kpn N.V. | Enhancing a region of interest in video frames of a video stream |
CN106919248A (en) * | 2015-12-26 | 2017-07-04 | 华为技术有限公司 | It is applied to the content transmission method and equipment of virtual reality |
CN105916022A (en) * | 2015-12-28 | 2016-08-31 | 乐视致新电子科技(天津)有限公司 | Video image processing method and apparatus based on virtual reality technology |
CN105455285B (en) * | 2015-12-31 | 2019-02-12 | 北京小鸟看看科技有限公司 | A kind of virtual implementing helmet adaptation method |
US10313417B2 (en) * | 2016-04-18 | 2019-06-04 | Qualcomm Incorporated | Methods and systems for auto-zoom based adaptive video streaming |
CN105898565A (en) * | 2016-04-28 | 2016-08-24 | 乐视控股(北京)有限公司 | Video processing method and device |
US9721393B1 (en) * | 2016-04-29 | 2017-08-01 | Immersive Enterprises, LLC | Method for processing and delivering virtual reality content to a user |
CN106060570B (en) * | 2016-06-30 | 2019-06-14 | 北京奇艺世纪科技有限公司 | A kind of full-view video image plays, coding method and device |
CN106060515B (en) * | 2016-07-14 | 2018-11-06 | 腾讯科技(深圳)有限公司 | Panorama pushing method for media files and device |
CN106231317A (en) * | 2016-09-29 | 2016-12-14 | 三星电子(中国)研发中心 | Video processing, coding/decoding method and device, VR terminal, audio/video player system |
US10595069B2 (en) * | 2016-12-05 | 2020-03-17 | Adobe Inc. | Prioritizing tile-based virtual reality video streaming using adaptive rate allocation |
US20180295375A1 (en) * | 2017-04-05 | 2018-10-11 | Lyrical Labs Video Compression Technology, LLC | Video processing and encoding |
CN107087212B (en) * | 2017-05-09 | 2019-10-29 | 杭州码全信息科技有限公司 | Interactive panoramic video transcoding and playback method and system based on spatial scalable coding |
-
2018
- 2018-01-02 US US15/860,471 patent/US20180192022A1/en not_active Abandoned
- 2018-01-02 US US15/860,449 patent/US10334238B2/en not_active Expired - Fee Related
- 2018-01-02 US US15/860,392 patent/US20180192044A1/en not_active Abandoned
- 2018-01-02 US US15/860,430 patent/US20180191868A1/en not_active Abandoned
- 2018-01-02 US US15/860,358 patent/US20180192063A1/en not_active Abandoned
- 2018-01-02 US US15/860,494 patent/US20180189980A1/en not_active Abandoned
- 2018-01-03 CN CN201810005392.9A patent/CN108366293A/en active Pending
- 2018-01-03 CN CN201810005019.3A patent/CN108419142A/en active Pending
- 2018-01-03 CN CN201810005787.9A patent/CN108366272A/en active Pending
- 2018-01-03 CN CN201810005786.4A patent/CN108377381A/en active Pending
- 2018-01-03 CN CN201810004987.2A patent/CN108419093A/en active Pending
- 2018-01-03 CN CN201810005788.3A patent/CN108391103A/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104240281A (en) * | 2014-08-28 | 2014-12-24 | 东华大学 | Virtual reality head-mounted device based on Unity3D engine |
US20160381256A1 (en) * | 2015-06-25 | 2016-12-29 | EchoPixel, Inc. | Dynamic Minimally Invasive Surgical-Aware Assistant |
US20170289214A1 (en) * | 2016-04-04 | 2017-10-05 | Hanwha Techwin Co., Ltd. | Method and apparatus for playing media stream on web browser |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11436787B2 (en) * | 2018-03-27 | 2022-09-06 | Beijing Boe Optoelectronics Technology Co., Ltd. | Rendering method, computer product and display apparatus |
CN110336994A (en) * | 2019-07-04 | 2019-10-15 | 上海索倍信息科技有限公司 | A kind of naked eye 3D display system |
US20240031676A1 (en) * | 2021-12-02 | 2024-01-25 | Fotonation Limited | Method And System For Camera Motion Blur Reduction |
Also Published As
Publication number | Publication date |
---|---|
US20180192063A1 (en) | 2018-07-05 |
CN108366272A (en) | 2018-08-03 |
US20180191868A1 (en) | 2018-07-05 |
CN108419142A (en) | 2018-08-17 |
CN108377381A (en) | 2018-08-07 |
CN108391103A (en) | 2018-08-10 |
US20180192044A1 (en) | 2018-07-05 |
CN108419093A (en) | 2018-08-17 |
US20180192026A1 (en) | 2018-07-05 |
CN108366293A (en) | 2018-08-03 |
US20180189980A1 (en) | 2018-07-05 |
US10334238B2 (en) | 2019-06-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20180192022A1 (en) | Method and System for Real-time Rendering Displaying Virtual Reality (VR) On Mobile Using Head-Up Display Devices | |
US11632537B2 (en) | Method and apparatus for obtaining binocular panoramic image, and storage medium | |
US10861215B2 (en) | Asynchronous time and space warp with determination of region of interest | |
US9241155B2 (en) | 3-D rendering for a rotated viewer | |
US10257492B2 (en) | Image encoding and display | |
US20050219239A1 (en) | Method and apparatus for processing three-dimensional images | |
CN107908278B (en) | Virtual reality VR interface generation method and device | |
JP2008257127A (en) | Image display device and image display method | |
WO2017086244A1 (en) | Image processing device, information processing device, and image processing method | |
US20210382313A1 (en) | Image generation appratus, head-mounted display, content processing system, and image display method | |
CN108153417A (en) | Frame compensation method and the head-mounted display apparatus using this method | |
US11187895B2 (en) | Content generation apparatus and method | |
JP2018147504A (en) | Display control method and program for causing computer to execute the display control method | |
JP7429515B2 (en) | Image processing device, head-mounted display, and image display method | |
US11187914B2 (en) | Mirror-based scene cameras | |
JP2002300612A (en) | Image generating device, program, and information storage medium | |
KR101773929B1 (en) | System for processing video with wide viewing angle, methods for transmitting and displaying vide with wide viewing angle and computer programs for the same | |
US11863902B2 (en) | Techniques for enabling high fidelity magnification of video | |
US20230222754A1 (en) | Interactive video playback techniques to enable high fidelity magnification | |
US20220232201A1 (en) | Image generation system and method | |
JP7365183B2 (en) | Image generation device, head mounted display, content processing system, and image display method | |
WO2024004134A1 (en) | Image transmission device and image transmission method | |
KR102179810B1 (en) | Method and program for playing virtual reality image | |
CN117452637A (en) | Head mounted display and image display method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: BLACK SAILS TECHNOLOGY INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WANG, ZHUO;TANG, YONGTAO;ZHAO, RUOXI;AND OTHERS;REEL/FRAME:044519/0103 Effective date: 20180102 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |