US20180192022A1 - Method and System for Real-time Rendering Displaying Virtual Reality (VR) On Mobile Using Head-Up Display Devices - Google Patents

Method and System for Real-time Rendering Displaying Virtual Reality (VR) On Mobile Using Head-Up Display Devices

Info

Publication number
US20180192022A1
Authority
US
United States
Prior art keywords
coordinate data
matrix
model
fov
view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/860,471
Inventor
Zhuo Wang
Yongtao Tang
Ruoxi Zhao
Haoyan Zu
Chia-Chi Chang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Black Sails Technology Inc
Original Assignee
Black Sails Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Black Sails Technology Inc filed Critical Black Sails Technology Inc
Priority to US15/860,471 priority Critical patent/US20180192022A1/en
Assigned to Black Sails Technology Inc. reassignment Black Sails Technology Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHANG, CHIA-CHI, TANG, YONGTAO, WANG, ZHUO, ZHAO, RUOXI, ZU, HAOYAN
Publication of US20180192022A1 publication Critical patent/US20180192022A1/en
Abandoned legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/398Synchronisation thereof; Control thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor
    • H04N13/0018
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/001Model-based coding, e.g. wire frame
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/08Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0876Network utilisation, e.g. volume of load or congestion level
    • H04L43/0888Throughput
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/75Media network packet handling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/75Media network packet handling
    • H04L65/752Media network packet handling adapting media to network capabilities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/131Protocols for games, networked simulations or virtual reality
    • H04N13/0029
    • H04N13/0055
    • H04N13/0275
    • H04N13/044
    • H04N13/0484
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/111Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N13/117Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/122Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/139Format conversion, e.g. of frame-rate or size
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/161Encoding, multiplexing or demultiplexing different image signal components
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/189Recording image signals; Reproducing recorded image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/378Image reproducers using viewer tracking for tracking rotational head movements around an axis perpendicular to the screen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/383Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/40Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/233Processing of audio elementary streams
    • H04N21/2335Processing of audio elementary streams involving reformatting operations of audio signals, e.g. by converting from one coding standard to another
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234345Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234363Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the spatial resolution, e.g. for clients with a lower screen resolution
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234381Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2387Stream processing in response to a playback request from an end-user, e.g. for trick-play
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/239Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N21/2393Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests involving handling client requests
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/24Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N21/2405Monitoring of the internal components or processes of the server, e.g. server load
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2662Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/4104Peripherals receiving signals from specially adapted client devices
    • H04N21/4122Peripherals receiving signals from specially adapted client devices additional display device, e.g. video projector
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440236Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by media transcoding, e.g. video is transformed into a slideshow of still pictures, audio is converted into text
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g 3D video
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor
    • H04N9/3185Geometric adjustment, e.g. keystone or convergence
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/79Processing of colour television signals in connection with recording
    • H04N9/87Regeneration of colour television signals
    • H04N9/8715Regeneration of colour television signals involving the mixing of the reproduced video signal with a non-recorded signal, e.g. a text signal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2215/00Indexing scheme for image rendering
    • G06T2215/16Using real world measurements to influence rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting

Definitions

  • the present disclosure relates to video processing technology, and more particularly, to a method and a system for real-time rendering displaying virtual reality (VR) using head-up display devices.
  • Virtual Reality is a computer simulation technology for creating and experiencing a virtual world. For example, a three-dimensional real-time image can be presented based on a technology which tracks a user's head, eyes or hand.
  • In a network-based virtual reality technology, full-view video data is pre-stored on a server and then transmitted to a display device, such as VR glasses. A video is displayed on the display device in accordance with the viewing angle of the user.
  • a VR playback system should be optimized as much as possible in terms of software, so as to reduce resource consumption and improve processing efficiency while avoiding degradation of the users' viewing experience.
  • the present disclosure provides a method and a system for real-time rendering displaying virtual reality (VR) using head-up display devices to solve the above problems.
  • a method for real-time rendering displaying virtual reality (VR) using head-up display devices comprising:
  • obtaining relevant parameters including a camera matrix, a projection matrix, a model matrix and a center position of lens distortion;
  • the step of obtaining relevant parameters comprises:
  • the step of obtaining relevant parameters comprises:
  • the camera matrix and the projection matrix are adjusted to achieve binocular-mode viewing effects.
  • the step of obtaining first coordinate data in accordance with the relevant parameters and the original coordinate data of the 3D model comprises:
  • left_view_matrix and right_view_matrix respectively represent the camera matrix for the left eye and the camera matrix for the right eye
  • mat4_view is the camera matrix, which can be generated directly in accordance with the rotation angles of a gyroscope
  • eye_ipd represents the eye distance parameter
  • fov_left, fov_right, fov_up, fov_down, far and near represent the parameters relevant to the field of view in binocular mode
  • P^MVP_{x,y,z} represents the first coordinate data
  • P^original_{x,y,z} represents the original coordinate data
  • mat4_model represents the model matrix
  • mat4_projection represents the projection matrix
  • the camera matrices left_view_matrix and right_view_matrix for the left and right eyes are respectively substituted into equation (6) in place of mat4_view to obtain the first coordinate data P^MVP_{x,y,z}
  • the step of performing lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data comprises:
  • (x_d, y_d) are the distorted image field coordinates after lens projection, i.e., the first coordinate data
  • (x_u, y_u) are the corrected image field coordinates
  • (x_c, y_c) is the center position of lens distortion
  • K_n is the nth radial distortion coefficient
  • P_n is the nth tangential distortion coefficient
  • r is the distance from a pixel to the optical axis.
  • the coordinates of the center position of lens distortion are obtained by the following steps,
  • x_center^window_pixel = lerp(x_center^normal, −1, 1, 0, width_window)
  • y_center^window_pixel = lerp(y_center^normal, −1, 1, 0, height_window)    (12)
  • the method further comprises: adding a blackout mask.
  • the method further comprises: acquiring real-time data from a gyroscope, and performing data smoothing and corner prediction while the VR video data is played to achieve an anti-shake effect.
  • the equation used for performing data smoothing is
  • ⁇ t is a fusion rotation angle based on time t
  • k is a fusion weight constant
  • is an angular velocity read by an accelerometer
  • is an angle read from the gyros
  • ⁇ t is a difference between an output time moment and its previous time moment
  • ⁇ t is a fusion rotation angle based on time t
  • angularSpeed is an angular velocity read by the accelerometer
  • predictionTimeS is a prediction time constant
  • is a rotation prediction threshold
  • the gyroscope and the accelerometer are provided on a head-up display device.
  • the method further comprising: using relevant interfaces provided by OpenGL and WebGL to complete corresponding steps.
  • a system for real-time rendering displaying virtual reality (VR) using head-up display devices comprising:
  • a parameter calculating unit configured to obtain relevant parameters including a camera matrix, a projection matrix, a model matrix and a center position of lens distortion;
  • a model building unit configured to create a 3D model and obtain original coordinate data of the 3D model
  • a coordinate calculating unit configured to obtain first coordinate data in accordance with the relevant parameters and the original coordinate data of the 3D model
  • a lens distortion unit configured to perform lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data
  • a rasterization unit configured to rasterize the second coordinate data to obtain pixel information
  • an image drawing unit configured to draw an image based on a VR video data and the pixel information.
  • the binocular-mode VR immersive viewing effect is achieved by performing lens distortion on the coordinate data of the 3D model. Because the lens distortion on the coordinate data is performed in the 3D model, video and immersive rendering can be realized in one processing, thereby improving rendering efficiency.
  • FIG. 1 is a diagram illustrating an example network of a VR playback system
  • FIG. 2 is a flowchart diagram showing a method used in the VR playback system of FIG. 1 ;
  • FIG. 3 is a flowchart diagram of a method for real-time rendering displaying virtual reality (VR) using head-up display devices according to an embodiment of the present disclosure
  • FIG. 4 is an example diagram of a head-up display device
  • FIG. 5 is a specific flowchart diagram showing the step of obtaining relevant parameters mentioned in the method for real-time rendering displaying virtual reality (VR) using head-up display devices described in FIG. 3 ;
  • FIG. 6 is a schematic diagram of a parameter transfer process between a computer processor and a display chip.
  • FIG. 7 is a schematic diagram of a system for real-time rendering displaying virtual reality (VR) using head-up display devices, according to an embodiment of the present disclosure.
  • FIG. 1 is a diagram illustrating an example network of a VR playback system.
  • the VR playback system 10 includes a server 100 and a display device 120 , which are coupled with each other through a network 110 , and a VR device 130 .
  • the server 100 may be a stand-alone computer server or a server cluster.
  • the server 100 is used to store various video data and to store various applications that process these video data.
  • various daemons run on the server 100 in real time, so as to process various video data in the server 100 and to respond to various requests from VR devices and the display device 120 .
  • the network 110 may be a selected one or selected ones from the group consisting of an internet, a local area network, an internet of things, and the like.
  • the display device 120 may be any computing device having an independent display screen and processing capability.
  • the display device 120 may be a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a palmtop computer, a personal digital assistant, a smart phone, an intelligent electrical apparatus, a game console, an iPad/iPhone, a video player, a DVD recorder/player, a television, or a home entertainment system.
  • the display device 120 may store VR player software as a VR player. When the VR player is started, it requests and downloads various video data from the server 100 , and renders and plays the video data in the display device.
  • the VR device 130 is a stand-alone head-up display device that can interact with the display device 120 and the server 100 , to communicate the user's current information with the display device 120 and/or the server 100 through signaling.
  • the user's current information includes, for example, parameters relevant to the user's field of view, the position of the user's helmet, and changes in the user's line of sight. According to this information, the display device 120 can flexibly process the currently played video data. In some embodiments, when a user turns his head, the display device 120 determines that the core viewing region for the user has changed and starts to play high-resolution video data in the changed core viewing region.
  • the VR device 130 is a stand-alone head-up display device.
  • the VR device 130 is not limited thereto, and the VR device 130 may also be an all-in-one head-up display device.
  • the all-in-one head-up display device itself has a display screen, so that it is not necessary to connect the all-in-one head-up display device with the external display device.
  • the display device 120 may be omitted.
  • the all-in-one head-up display device is configured to obtain video data from the server 100 and to perform playback operation, and the all-in-one head-up display device is also configured to detect a user's current viewing angle changing information and to adjust the playback operation according to the viewing angle changing information.
  • FIG. 2 is a flowchart diagram showing a method used in the VR playback system of FIG. 1 . The method includes the following steps.
  • In step S 10 , a video data processing procedure is run on the server.
  • In step S 20 , the display device obtains relevant information by interacting with the VR device.
  • In step S 30 , the display device requests the server to provide video data and receives the video data.
  • In step S 40 , the display device renders the received video data.
  • the video data obtained from the server is used to draw an image, i.e., the video data is played.
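  • As a hedged illustration of the client-side flow of FIG. 2 (steps S 20 to S 40 ), the sketch below shows one way a display device might query the VR device for pose information, request video data from the server, and hand both to a renderer. All names (VrDevicePose, renderFrame, the /segment path) are assumptions for illustration, not elements of the disclosure.

```typescript
// Hypothetical client-side playback step for FIG. 2; not taken from the patent.
interface VrDevicePose {
  yaw: number;   // head rotation angles reported by the head-up display device
  pitch: number;
  roll: number;
}

async function playbackStep(
  serverUrl: string,
  readPose: () => VrDevicePose,                                   // S20: interact with the VR device
  renderFrame: (video: ArrayBuffer, pose: VrDevicePose) => void,  // S40: render the received data
): Promise<void> {
  const response = await fetch(`${serverUrl}/segment`);           // S30: request video data
  const video = await response.arrayBuffer();
  renderFrame(video, readPose());
}
```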
  • FIG. 3 is a flowchart diagram of a method for real-time rendering displaying virtual reality (VR) using head-up display devices according to an embodiment of the present disclosure.
  • the method implements playing the video data in binocular mode.
  • the method includes following steps.
  • In step S 100 , relevant parameters are obtained.
  • the relevant parameters are calculated based on the specification of a head-up display device and the screen size.
  • the relevant parameters include parameters for field of view of left and right lenses, a camera matrix, a projection matrix, a model matrix and a center position of lens distortion.
  • FIG. 4 is an example diagram of a head-up display device.
  • the head-up display device includes a stand and left and right lenses on the stand, and human eyes obtain images from the left and right view areas through the left and right lenses. Because the left and right view areas provide slightly different images, the human mind, after obtaining this differing information, produces a three-dimensional sense.
  • Different types of head-up display devices have different specifications and parameters. Generally, the specifications and parameters can be obtained by querying websites or built-in parameter files, and then the relevant parameters required in the rendering process can be calculated in accordance with them.
  • In step S 200 , a 3D model is built, and the original coordinate data of the 3D model is obtained.
  • a suitable 3D model can be created in accordance with requirements.
  • a polygonal sphere can be created as the 3D model and the original coordinate data can be obtained based on the polygonal sphere.
  • In step S 300 , first coordinate data is obtained in accordance with the relevant parameters and the original coordinate data of the 3D model.
  • In step S 400 , lens distortion is performed on the first coordinate data based on the center position of lens distortion to obtain second coordinate data.
  • In step S 300 , vector calculation is performed on the original coordinate data in accordance with the camera matrix, the projection matrix and the model matrix to obtain the calculated coordinate data as the first coordinate data, and in step S 400 , the first coordinate data is further distorted to obtain the second coordinate data.
  • In step S 500 , the second coordinate data is rasterized to obtain pixel information.
  • the second coordinate data is processed into pixel information on a plane.
  • In step S 600 , an image is drawn based on the VR video data and the pixel information.
  • the VR video data downloaded from the server is decoded to obtain the pixel values therein, the pixel values are assigned in accordance with the rasterized pixel information, and finally the image is drawn.
  • the original coordinate data in the 3D model is lens distorted, and the pixel information is then assigned to the distorted coordinate data, so as to achieve binocular-mode viewing effects. Because the lens distortion is performed while the 3D model is processed, the video rendering and the binocular-mode rendering are implemented in one pass, which is roughly equivalent to doubling the rendering efficiency of the existing scheme.
  • Because the original coordinate data in the 3D model is lens-distorted in accordance with the relevant parameters, which are obtained from information such as the specification of the head-up display device and the screen size, the lens distortion effect can be adjusted by adjusting these parameters to achieve a better rendering effect.
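  • As a minimal sketch (not part of the disclosure) of how steps S 100 to S 600 fit together in a single pass, the outline below treats each step as one call; the Pipeline interface and all member names are assumptions for illustration.

```typescript
// Hypothetical single-pass outline of steps S100-S600; all names are illustrative.
interface Vec3 { x: number; y: number; z: number; }

interface Pipeline {
  computeParameters(): { center: { left: [number, number]; right: [number, number] } }; // S100
  buildSphereModel(): Vec3[];                                                           // S200
  applyMvp(vertices: Vec3[], eye: "left" | "right"): Vec3[];                            // S300
  applyLensDistortion(vertices: Vec3[], center: [number, number]): Vec3[];              // S400
  rasterize(vertices: Vec3[]): Uint8ClampedArray;                                       // S500
  drawImage(pixels: Uint8ClampedArray, eye: "left" | "right"): void;                    // S600
}

function renderBinocularFrame(p: Pipeline): void {
  const params = p.computeParameters();            // S100: FOV, matrices, distortion centers
  const sphere = p.buildSphereModel();             // S200: original coordinate data
  for (const eye of ["left", "right"] as const) {
    const first = p.applyMvp(sphere, eye);                            // S300: first coordinate data
    const second = p.applyLensDistortion(first, params.center[eye]);  // S400: second coordinate data
    const pixels = p.rasterize(second);                               // S500: pixel information
    p.drawImage(pixels, eye);                                         // S600: draw from decoded VR video
  }
}
```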
  • the above method further includes: obtaining real-time data from the gyroscope and performing data smoothing and corner prediction while the VR video data is played to achieve an anti-shake effect.
  • the above method further includes adding a blackout mask.
  • the blackout mask can be seen in FIG. 6 ; adding the blackout mask can improve the immersive effect of VR viewing.
  • FIG. 5 is a specific flowchart diagram showing the step of obtaining relevant parameters mentioned in the method for real-time rendering displaying virtual reality (VR) using head-up display devices described in FIG. 3 .
  • the method includes following steps.
  • In step S 101 , the parameters such as the field of view are obtained according to the specification of the head-up display device and the screen size.
  • In step S 102 , the eye distance parameter is obtained according to the specification of the head-up display device.
  • In step S 103 , the model matrix is obtained.
  • In step S 104 , the camera matrix is calculated.
  • In step S 105 , the center position of lens distortion is calculated.
  • In step S 106 , the projection matrix is calculated.
  • The center position of lens distortion and the eye distance are illustrated in FIG. 4 .
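  • As a hedged sketch of steps S 101 to S 103 , the snippet below derives the field-of-view, eye distance and model-matrix parameters from a device specification; the HmdSpec field names are assumptions, and steps S 104 to S 106 follow from equations (1) to (5) and (12) below.

```typescript
// Hypothetical parameter derivation for steps S101-S103; field names are illustrative.
interface HmdSpec {
  fovLeft: number; fovRight: number; fovUp: number; fovDown: number; // per-lens angles (radians)
  interpupillaryDistance: number;                                    // eye distance parameter (eye_ipd)
}

interface RenderParams {
  fov: { left: number; right: number; up: number; down: number };
  eyeIpd: number;
  modelMatrix: Float32Array; // S103: the model matrix is simply an identity matrix
}

function obtainRelevantParameters(spec: HmdSpec): RenderParams {
  return {
    fov: { left: spec.fovLeft, right: spec.fovRight, up: spec.fovUp, down: spec.fovDown }, // S101
    eyeIpd: spec.interpupillaryDistance,                                                   // S102
    modelMatrix: new Float32Array([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]),       // S103
  };
}
```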
  • Table 1 is a variable definition table.
  • the first coordinate data can be calculated by the following equations:
  • mat4_view represents a camera matrix, which can be generated directly in accordance with the rotation angles of a gyroscope; left_view_matrix and right_view_matrix are respectively the camera matrices for the left and right eyes; eye_ipd represents the eye distance parameter;
  • fov_left, fov_right, fov_up, fov_down, far and near represent parameters relevant to the field of view in binocular mode.
  • the model matrix mat4_model is set to be an identity matrix
  • P^MVP_{x,y,z} represents the first coordinate data
  • P^original_{x,y,z} represents the original coordinate data
  • mat4_model represents the model matrix
  • mat4_projection represents the projection matrix
  • the camera matrices left_view_matrix and right_view_matrix for the left and right eyes are respectively substituted into equation (6) in place of mat4_view to obtain the first coordinate data P^MVP_{x,y,z}.
  • the above step S 400 can refer to the calculation steps in the following example.
  • the linear interpolation between two vectors can be performed in accordance with t by using the equation lerp.
  • the coordinates (x_center^window_pixel, y_center^window_pixel) of the center position of lens distortion can be solved according to the projection matrix mat4_projection and the screen size width_window * height_window, where (x_center^normal, y_center^normal) is a point on the normalized coordinate axis [−1, 1].
  • x_center^window_pixel = lerp(x_center^normal, −1, 1, 0, width_window)
  • y_center^window_pixel = lerp(y_center^normal, −1, 1, 0, height_window)    (12)
  • ⁇ t fusion rotation angle based on time t k fusion weight constant ⁇ ⁇ is an angular velocity read by an accelerometer (the accelerometer is provided on the head-up display device) ⁇ angles read from the gyros ⁇ t difference between the output time moment and the previous time moment
  • FIG. 7 is a schematic diagram of a system for real-time rendering displaying virtual reality (VR) using head-up display devices, according to an embodiment of the present disclosure.
  • the system includes a parameter calculating unit 701 , a model building unit 702 , a coordinate calculating unit 703 , a lens distortion unit 704 , a rasterization unit 705 , and an image drawing unit 706 .
  • the parameter calculating unit 701 is configured to obtain relevant parameters including a camera matrix, a projection matrix, a model matrix and a center position of lens distortion.
  • the model building unit 702 is configured to create a 3D model and obtain original coordinate data of the 3D model.
  • the 3D model can be created based on WebGL and initialized to obtain UV coordinates.
  • the coordinate calculating unit 703 is configured to obtain first coordinate data according to the relevant parameters and the original coordinate data of the 3D model.
  • the first coordinate data is obtained by performing calculation based on the relevant parameters and the original coordinate data of the 3D model.
  • the lens distortion unit 704 is configured to perform lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data. That is, the first coordinate data is distorted according to the center positions of the left and right lenses to obtain the second coordinate data.
  • the rasterization unit 705 is configured to rasterize the second coordinate data to obtain pixel information.
  • the image drawing unit 706 is configured to draw an image based on the VR video data and the pixel information.
  • the binocular-mode VR immersive viewing effect is achieved by performing lens distortion on the coordinate data of the 3D model. Because the lens distortion on the coordinate data is performed in the 3D model, video and immersive rendering can be realized in one processing, thereby improving rendering efficiency.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Environmental & Geological Engineering (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Image Processing (AREA)

Abstract

Disclosed are a method and a system for real-time rendering displaying virtual reality (VR) using head-up display devices. The method comprises: obtaining relevant parameters including a camera matrix, a projection matrix, a model matrix and a center position of lens distortion; creating a 3D model and obtaining original coordinate data of the 3D model; obtaining first coordinate data in accordance with the relevant parameters and the original coordinate data of the 3D model; performing lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data; rasterizing the second coordinate data to obtain pixel information; and drawing an image in accordance with VR video data and the pixel information. According to the present disclosure, the lens distortion on the coordinate data is performed in the 3D model, so that video and immersive rendering can be realized in one processing pass, thereby improving rendering efficiency.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application claims the priority and benefit of U.S. provisional application 62/441,936, filed on Jan. 3, 2017, which is incorporated herein by reference in its entirety.
  • BACKGROUND OF THE DISCLOSURE
  • Field of the Disclosure
  • The present disclosure relates to video processing technology, and more particularly, to a method and a system for real-time rendering displaying virtual reality (VR) using head-up display devices.
  • Background of the Disclosure
  • Virtual Reality (VR) is a computer simulation technology for creating and experiencing a virtual world. For example, a three-dimensional real-time image can be presented based on a technology which tracks a user's head, eyes or hand. For a network-based virtual reality technology, full-view video data is pre-stored on a server, and then transmitted to a display device, such as glasses. A video is displayed on the display device in accordance with a viewing angle of the user.
  • However, when the display device displays the video data, high-resolution video data occupies a lot of computing resources, and as a result, the display device is required to have a high data processing capability. But currently, different types of display devices on the market vary greatly in performance. In order to be compatible with these display devices, a VR playback system should be optimized as much as possible in terms of software, so as to reduce resource consumption and improve processing efficiency while avoiding degradation of the users' viewing experience.
  • SUMMARY OF THE DISCLOSURE
  • In view of this, the present disclosure provides a method and a system for real-time rendering displaying virtual reality (VR) using head-up display devices to solve the above problems.
  • According to a first aspect of the present disclosure, there is provided a method for real-time rendering displaying virtual reality (VR) using head-up display devices, comprising:
  • obtaining relevant parameters including a camera matrix, a projection matrix, a model matrix and a center position of lens distortion;
  • creating a 3D model and obtaining original coordinate data of the 3D model;
  • obtaining first coordinate data in accordance with the relevant parameters and the original coordinate data of the 3D model;
  • performing lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data;
  • rasterizing the second coordinate data to obtain pixel information; and
  • drawing an image in accordance with VR video data and the pixel information.
  • Preferably, the step of obtaining relevant parameters comprises:
  • obtaining parameters relevant to field of view in accordance with specification of a head-up display device and a screen size;
  • calculating the center position of lens distortion in accordance with the parameters relevant to field of view; and
  • calculating the projection matrix in accordance with the parameters relevant to field of view.
  • Preferably, the step of obtaining relevant parameters comprises:
  • obtaining an eye distance parameter based on specification of a head-up display device; and
  • calculating the camera matrix in accordance with the eye distance parameter.
  • Preferably, the camera matrix and the projection matrix are adjusted to achieve binocular-mode viewing effects.
  • Preferably, the step of obtaining first coordinate data in accordance with the relevant parameters and the original coordinate data of the 3D model comprises:
  • calculating the camera matrices for the left and right eyes in binocular mode by equations (1) to (4):
  • half_eye_ipd = eye_ipd / 2    (1)
  • translate(X, Y, Z) = [ [1, 0, 0, X], [0, 1, 0, Y], [0, 0, 1, Z], [0, 0, 0, 1] ]    (2)
  • left_view_matrix = translate(−half_eye_ipd, 0, 0) * mat4_view    (3)
  • right_view_matrix = translate(half_eye_ipd, 0, 0) * mat4_view    (4)
  • wherein left_view_matrix and right_view_matrix respectively represent the camera matrix for the left eye and the camera matrix for the right eye, mat4_view is the camera matrix which can be generated directly in accordance with the rotation angles of a gyroscope, and eye_ipd represents the eye distance parameter;
  • calculating the projection matrix mat4_projection in binocular mode by equation (5):
  • mat4_projection = [ [2 / (tan(fov_left) + tan(fov_right)), 0, −(tan(fov_left) − tan(fov_right)) / (tan(fov_left) + tan(fov_right)), 0], [0, 2 / (tan(fov_up) + tan(fov_down)), −(tan(fov_up) − tan(fov_down)) / (tan(fov_up) + tan(fov_down)), 0], [0, 0, far / (near − far), far * near / (near − far)], [0, 0, −1, 0] ]    (5)
  • wherein fov_left, fov_right, fov_up, fov_down, far and near represent the parameters relevant to the field of view in binocular mode;
  • setting mat4_model to be an identity matrix;
  • calculating the first coordinate data P^MVP_{x,y,z} by equation (6):
  • P^MVP_{x,y,z} = mat4_model * mat4_view * mat4_projection * P^original_{x,y,z}    (6)
  • wherein P^MVP_{x,y,z} represents the first coordinate data, P^original_{x,y,z} represents the original coordinate data, mat4_model represents the model matrix, and mat4_projection represents the projection matrix; the camera matrices left_view_matrix and right_view_matrix for the left and right eyes are respectively substituted into equation (6) in place of mat4_view to obtain the first coordinate data P^MVP_{x,y,z}.
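  • As a minimal sketch (an illustration under stated assumptions, not the disclosure's implementation), equations (1) to (4) and (6) might be computed as follows with column-major 4×4 matrices; the helper names multiply4, translate, eyeViewMatrices and firstCoordinate are invented for this example, and the multiplication order simply follows equation (6) as written.

```typescript
// Sketch of equations (1)-(4) and (6); matrices are column-major 4x4 Float32Arrays.
type Mat4 = Float32Array; // 16 entries, column-major

// General 4x4 matrix product: out = a * b.
function multiply4(a: Mat4, b: Mat4): Mat4 {
  const out = new Float32Array(16);
  for (let c = 0; c < 4; c++)
    for (let r = 0; r < 4; r++)
      for (let k = 0; k < 4; k++)
        out[c * 4 + r] += a[k * 4 + r] * b[c * 4 + k];
  return out;
}

// Equation (2): a translation matrix.
function translate(x: number, y: number, z: number): Mat4 {
  return new Float32Array([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, x, y, z, 1]);
}

// Equations (1), (3), (4): per-eye camera matrices from the gyroscope-derived mat4_view.
function eyeViewMatrices(mat4View: Mat4, eyeIpd: number): { left: Mat4; right: Mat4 } {
  const halfEyeIpd = eyeIpd / 2;                               // equation (1)
  return {
    left: multiply4(translate(-halfEyeIpd, 0, 0), mat4View),   // equation (3)
    right: multiply4(translate(halfEyeIpd, 0, 0), mat4View),   // equation (4)
  };
}

// Equation (6): first coordinate data for one vertex, in the order the equation is written.
function firstCoordinate(
  mat4Model: Mat4, mat4View: Mat4, mat4Projection: Mat4,
  original: [number, number, number, number],
): Float32Array {
  const mvp = multiply4(multiply4(mat4Model, mat4View), mat4Projection);
  const out = new Float32Array(4);
  for (let r = 0; r < 4; r++)
    for (let k = 0; k < 4; k++)
      out[r] += mvp[k * 4 + r] * original[k];
  return out;
}
```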
  • Preferably, the step of performing lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data comprises:
  • obtaining distortion parameters in accordance with the following equations (7) and (8):
  • K_1, K_2    (7)
  • f(K_1, K_2) = { K_1^{−1} = −K_1 ; K_2^{−1} = 3 K_1^2 − K_2 }    (8)
  • obtaining corrected image field coordinates (x_u, y_u) as the second coordinate data in accordance with the distortion parameters by using equations (9) and (10), in which all terms containing P_n can be removed when tangential distortion correction is not performed:

  • x_u = x_d + (x_d − x_c)(K_1 r^2 + K_2 r^4 + ...) + (P_1(r^2 + 2(x_d − x_c)^2) + 2 P_2(x_d − x_c)(y_d − y_c))(1 + P_3 r^2 + P_4 r^4 + ...)    (9)

  • y_u = y_d + (y_d − y_c)(K_1 r^2 + K_2 r^4 + ...) + (2 P_1(x_d − x_c)(y_d − y_c) + P_2(r^2 + 2(y_d − y_c)^2))(1 + P_3 r^2 + P_4 r^4 + ...)    (10)
  • wherein (x_d, y_d) are the distorted image field coordinates after lens projection, i.e., the first coordinate data; (x_u, y_u) are the corrected image field coordinates; (x_c, y_c) is the center position of lens distortion; K_n is the nth radial distortion coefficient; P_n is the nth tangential distortion coefficient; and r is the distance from a pixel to the optical axis.
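  • As a hedged sketch (not the disclosure's code), the radial part of equations (9) and (10), with the tangential P_n terms dropped as the clause above permits, can be applied to one coordinate as follows:

```typescript
// Radial-only lens distortion per equations (9) and (10), tangential terms omitted.
// (xd, yd): first coordinate data; (cx, cy): center position of lens distortion;
// k1, k2: radial distortion coefficients K_1, K_2.
function radialDistort(
  xd: number, yd: number, cx: number, cy: number, k1: number, k2: number,
): [number, number] {
  const dx = xd - cx;
  const dy = yd - cy;
  const r2 = dx * dx + dy * dy;          // r^2, with r the distance to the optical axis
  const scale = k1 * r2 + k2 * r2 * r2;  // K_1 r^2 + K_2 r^4
  return [xd + dx * scale, yd + dy * scale]; // equations (9) and (10), radial terms only
}
```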
  • Preferably, the coordinates of the center position of lens distortion are obtained by the following steps,
  • performing linear interpolation between two vectors based on t using the following equation:
  • lerp(t, x_l, x_h, y_l, y_h) = y_l + (t − x_l)(y_h − y_l) / (x_h − x_l)    (11)
  • wherein (x_l, y_l) and (x_h, y_h) are two coordinate points in a plane;
  • calculating the coordinates (x_center^window_pixel, y_center^window_pixel) of the center position of lens distortion according to the projection matrix mat4_projection and the screen size width_window * height_window by using the following equations:

  • (x_center^normal, y_center^normal) = mat4_projection * [0, 0, −1, 0]^T

  • x_center^window_pixel = lerp(x_center^normal, −1, 1, 0, width_window)

  • y_center^window_pixel = lerp(y_center^normal, −1, 1, 0, height_window)    (12)
  • wherein the coordinates (x_center^normal, y_center^normal) are a point on the normalized coordinate axis [−1, 1].
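  • A brief sketch of equations (11) and (12) (the function names and the xCenterNormal/yCenterNormal inputs are assumptions, not the disclosure's code):

```typescript
// Equation (11): linear interpolation between (xl, yl) and (xh, yh) evaluated at t.
function lerp(t: number, xl: number, xh: number, yl: number, yh: number): number {
  return yl + ((t - xl) * (yh - yl)) / (xh - xl);
}

// Equation (12): map the projected distortion center from the normalized [-1, 1]
// axis to window pixel coordinates. xCenterNormal/yCenterNormal are assumed to be
// the relevant components of mat4_projection * [0, 0, -1, 0].
function distortionCenterPixels(
  xCenterNormal: number, yCenterNormal: number,
  widthWindow: number, heightWindow: number,
): [number, number] {
  return [
    lerp(xCenterNormal, -1, 1, 0, widthWindow),
    lerp(yCenterNormal, -1, 1, 0, heightWindow),
  ];
}
```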
  • Preferably, the method further comprises: adding a blackout mask.
  • Preferably, the method further comprises: acquiring real-time data from a gyroscope, and performing data smoothing and corner prediction while the VR video data is played to achieve an anti-shake effect.
  • Preferably, the equation used for performing data smoothing is

  • $\theta_{t+1} = k(\theta_t + \omega \Delta t) + (1 - k)\phi$  (13)
  • wherein $\theta_t$ is a fusion rotation angle based on time t, $k$ is a fusion weight constant, $\omega$ is an angular velocity read by an accelerometer, $\phi$ is an angle read from the gyroscope, and $\Delta t$ is the difference between an output time moment and its previous time moment;
  • equations used for corner prediction are:
  • $\theta_\Delta = \begin{cases} \mathrm{angularSpeed} \cdot \mathrm{predictionTimeS}, & \mathrm{angularSpeed} \geq \beta \\ 0, & \mathrm{angularSpeed} \in [0, \beta] \\ \mathrm{null}, & \text{otherwise} \end{cases}$  (14)
  • $\theta_{t+1} = \theta_t + \theta_\Delta$  (15)
  • wherein $\theta_t$ is a fusion rotation angle based on time t, angularSpeed is an angular velocity read by the accelerometer, predictionTimeS is a prediction time constant, and $\beta$ is a rotation prediction threshold; the gyroscope and the accelerometer are provided on a head-up display device.
  • Preferably, the method further comprises using relevant interfaces provided by OpenGL and WebGL to complete the corresponding steps.
  • According to a second aspect of the disclosure, there is provided a system for real-time rendering displaying virtual reality (VR) using head-up display devices, comprising:
  • a parameter calculating unit configured to obtain relevant parameters including a camera matrix, a projection matrix, a model matrix and a center position of lens distortion;
  • a model building unit configured to create a 3D model and obtain original coordinate data of the 3D model;
  • a coordinate calculating unit configured to obtain first coordinate data in accordance with the relevant parameters and the original coordinate data of the 3D model;
  • a lens distortion unit configured to perform lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data;
  • a rasterization unit configured to rasterize the second coordinate data to obtain pixel information;
  • an image drawing unit configured to draw an image based on a VR video data and the pixel information.
  • According to the embodiment of the present disclosure, the binocular-mode VR immersive viewing effect is achieved by performing lens distortion on the coordinate data of the 3D model. Because the lens distortion is performed on the coordinate data of the 3D model itself, video rendering and immersive rendering can be completed in a single processing pass, thereby improving rendering efficiency.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The above and other objects, features and advantages of the present disclosure will become more apparent by describing the embodiments of the present disclosure with reference to the following drawings, in which:
  • FIG. 1 is a diagram illustrating an example network of a VR playback system;
  • FIG. 2 is a flowchart diagram showing a method used in the VR playback system of FIG. 1;
  • FIG. 3 is a flowchart diagram of a method for real-time rendering displaying virtual reality (VR) using head-up display devices according to an embodiment of the present disclosure;
  • FIG. 4 is an example diagram of a head-up display device;
  • FIG. 5 is a specific flowchart diagram showing the step of obtaining relevant parameters mentioned in the method for real-time rendering displaying virtual reality (VR) using head-up display devices described in FIG. 3;
  • FIG. 6 is a schematic diagram of a parameter transfer process between a computer processor and a display chip; and
  • FIG. 7 is a schematic diagram of a system for real-time rendering displaying virtual reality (VR) using head-up display devices, according to an embodiment of the present disclosure.
  • DETAILED DESCRIPTION OF THE DISCLOSURE
  • Exemplary embodiments of the present disclosure will be described in more details below with reference to the accompanying drawings. In the drawings, like reference numerals denote like members. The figures are not drawn to scale, for the sake of clarity. Moreover, some well-known parts may not be shown.
  • FIG. 1 is a diagram illustrating an example network of a VR playback system. The VR playback system 10 includes a server 100, a display device 120 coupled with the server 100 through a network 110, and a VR device 130. For example, the server 100 may be a stand-alone computer server or a server cluster. The server 100 stores various video data as well as the applications that process the video data. For example, various daemons run on the server 100 in real time to process the video data and to respond to requests from VR devices and the display device 120. The network 110 may be one or more selected from the group consisting of the Internet, a local area network, an Internet of things, and the like. The display device 120 may be any computing device having an independent display screen and processing capability, for example, a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a palmtop computer, a personal digital assistant, a smart phone, an intelligent electrical apparatus, a game console, an iPad/iPhone, a video player, a DVD recorder/player, a television, or a home entertainment system. The display device 120 may store VR player software as a VR player. When the VR player is started, it requests and downloads video data from the server 100, and renders and plays the video data on the display device. In this example, the VR device 130 is a stand-alone head-up display device that interacts with the display device 120 and the server 100 to communicate the user's current information through signaling, such as parameters relevant to the user's field of view, the position of the user's helmet, and changes of the user's line of sight. According to this information, the display device 120 can flexibly process the currently played video data. In some embodiments, when a user turns his head, the display device 120 determines that the core viewing region for the user has changed and starts to play high-resolution video data in the changed core viewing region.
  • In the above embodiment, the VR device 130 is a stand-alone head-up display device. However, those skilled in the art should understand that the VR device 130 is not limited thereto, and the VR device 130 may also be an all-in-one head-up display device. The all-in-one head-up display device itself has a display screen, so that it is not necessary to connect the all-in-one head-up display device with the external display device. For example, in this example, if the all-in-one head-up display device is used as the VR device, the display device 120 may be omitted. At this point, the all-in-one head-up display device is configured to obtain video data from the server 100 and to perform playback operation, and the all-in-one head-up display device is also configured to detect a user's current viewing angle changing information and to adjust the playback operation according to the viewing angle changing information.
  • FIG. 2 is a flowchart diagram showing a method used in the VR playback system of FIG. 1. The method includes the following steps.
  • In step S10, a video data processing procedure is operated on the server.
  • In step S20, the display device obtains relevant information by interacting with the VR device.
  • In step S30, according to the relevant information, the display device requests the server to provide video data and receives the video data.
  • In step S40, the display device renders the received video data.
  • In this step, the video data obtained from the server is used to draw an image, i.e., the video data is played.
  • FIG. 3 is a flowchart diagram of a method for real-time rendering displaying virtual reality (VR) using head-up display devices according to an embodiment of the present disclosure. The method implements playing the video data in binocular mode. The method includes following steps.
  • In step S100, relevant parameters are obtained.
  • For example, the relevant parameters are calculated based on the specification of a head-up display device and a screen size. The relevant parameters include parameters for the field of view of the left and right lenses, a camera matrix, a projection matrix, a model matrix and a center position of lens distortion. Referring to FIG. 4, FIG. 4 is an example diagram of a head-up display device. As shown in the figure, the head-up display device includes a stand and left and right lenses on the stand, and the human eyes obtain images from the left and right view areas through the left and right lenses. Because the left and right view areas present slightly different images, the human mind, after fusing the differing information, produces a three-dimensional sense. Different types of head-up display devices have different specifications and parameters; generally, the specifications and parameters can be obtained by querying vendor websites or built-in parameter files, and the relevant parameters required in the rendering process can then be calculated from them.
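  • By way of illustration only, the following Python sketch shows how such parameters might be held once read from a device specification; the numeric values and variable names are assumptions made for the example, not values taken from any particular headset.

```python
import math

# Hypothetical headset specification (illustrative values only; a real device would
# supply these through its built-in parameter file or the vendor's documentation).
spec = {
    "fov_left": 45.0, "fov_right": 45.0,   # per-lens field of view, in degrees
    "fov_up": 45.0, "fov_down": 45.0,
    "eye_ipd": 0.064,                      # eye distance parameter, in metres
}
screen_width, screen_height = 2560, 1440   # screen size, in pixels

# The angles are converted to radians before they enter the projection matrix of equation (5).
fov_left, fov_right, fov_up, fov_down = (
    math.radians(spec[k]) for k in ("fov_left", "fov_right", "fov_up", "fov_down")
)
```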
  • In step S200, a 3D model is built, and the original coordinate data of the 3D model is obtained.
  • In this step, a suitable 3D model can be created in accordance with requirements. For example, a polygonal sphere can be created as the 3D model and the original coordinate data can be obtained based on the polygonal sphere.
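  • As a minimal sketch of this step, assuming the polygonal sphere is a latitude/longitude (UV) tessellation, the original coordinate data could be generated as follows; the vertex counts are arbitrary example values.

```python
import numpy as np

def sphere_vertices(n_lat=32, n_lon=64, radius=1.0):
    """Return (N, 3) vertex positions of a polygonal (UV) sphere used as the 3D model."""
    lat = np.linspace(0.0, np.pi, n_lat)          # polar angle
    lon = np.linspace(0.0, 2.0 * np.pi, n_lon)    # azimuth angle
    lat, lon = np.meshgrid(lat, lon, indexing="ij")
    x = radius * np.sin(lat) * np.cos(lon)
    y = radius * np.cos(lat)
    z = radius * np.sin(lat) * np.sin(lon)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

original_coords = sphere_vertices()               # the "original coordinate data" of the model
```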
  • In step S300, first coordinate data is obtained in accordance with the relevant parameters and the original coordinate data of the 3D model.
  • In step S400, lens distortion is performed on the first coordinate data based on the center position of lens distortion to obtain second coordinate data.
  • In step S300, vector calculation on the original coordinate data is performed in accordance with the camera matrix, the projection matrix and the model matrix to obtain the calculated coordinate data as the first coordinate data, and in step S400, the first coordinate data is further distorted to obtain the second coordinate data.
  • In step S500, the second coordinate data is rasterized to obtain pixel information.
  • In this step, the second coordinate data is processed into pixel information on a plane.
  • In step S600, an image is drawn based on a VR video data and the pixel information.
  • In this step, the VR video data downloaded from the server is decoded to obtain its pixel data, the decoded pixel data is assigned to the pixel information obtained by rasterization, and finally the image is drawn.
  • In the embodiment, the original coordinate data of the 3D model is lens-distorted and the pixel information is then assigned to the distorted coordinate data, so as to achieve the binocular-mode viewing effect. Because the lens distortion is performed while the 3D model is being processed, the video rendering and the binocular-mode rendering are implemented in a single pass, which is equivalent to doubling the rendering efficiency of the existing scheme. Further, because the original coordinate data of the 3D model is lens-distorted in accordance with the relevant parameters obtained from information such as the specification of the head-up display device and the screen size, the lens distortion effect can be adjusted by adjusting the relevant parameters to achieve a better rendering effect.
  • In a preferred embodiment, in order to prevent a user from becoming dizzy during immersive viewing, the above method further includes: obtaining real-time data from the gyroscope and performing data smoothing and corner prediction while the VR video data is played, so as to achieve anti-shake.
  • In another preferred embodiment, the above method further includes adding a blackout mask. The blackout mask can be seen in FIG. 6; adding the blackout mask can improve the immersive effect of VR viewing.
  • It should be noted that some steps described in the embodiments of the present disclosure may be implemented by calling relevant interfaces provided by OpenGL and/or WebGL. However, the corresponding functions of OpenGL and WebGL are mainly implemented by the display chip, while the calculation of relevant parameters such as the projection matrix and the camera matrix is performed by the computer processor; thus, when the projection matrix and the camera matrix are transferred to OpenGL and/or WebGL, data transmission is required. The details can be understood with reference to FIG. 6.
  • FIG. 5 is a specific flowchart diagram showing the step of obtaining relevant parameters mentioned in the method for real-time rendering displaying virtual reality (VR) using head-up display devices described in FIG. 3. The method includes following steps.
  • In step S101, the parameters such as field of view and the like are obtained according to the specification of the head-up display device and the screen size.
  • In step S102, the eye distance parameter is obtained according to the specification of the head-up display device.
  • In step S103, the model matrix is obtained.
  • In step S104, the camera matrix is calculated.
  • In step S105, the center position of lens distortion is calculated.
  • In step S106, the projection matrix is calculated.
  • The center position of lens distortion and the eye distance are illustrated in FIG. 4.
  • To further explain the above steps, a specific calculation step is provided in the following example.
  • Table 1 is a variable definition table.
  • TABLE 1
    variable: meaning
    $P^{original}_{x,y,z}$: original coordinate of each point in the 3D model
    $P^{MVP}_{x,y,z}$: calculated coordinate
    $\mathrm{mat4}_{model}$: model matrix
    $\mathrm{mat4}_{view}$: camera matrix
    $\mathrm{mat4}_{projection}$: projection matrix
  • The first coordinate data can be calculated by the following equations:
  • 1) The camera matrices for left and right eyes in binocular mode can be calculated by equations (1) to (4):
  • $\mathrm{half\_eye\_ipd} = \dfrac{\mathrm{eye\_ipd}}{2}$  (1)
  • $\mathrm{translate}(X, Y, Z) = \begin{bmatrix} 1 & 0 & 0 & X \\ 0 & 1 & 0 & Y \\ 0 & 0 & 1 & Z \\ 0 & 0 & 0 & 1 \end{bmatrix}$  (2)
  • $\mathrm{left\_view\_matrix} = \mathrm{translate}(-\mathrm{half\_eye\_ipd}, 0, 0) * \mathrm{mat4}_{view}$  (3)
  • $\mathrm{right\_view\_matrix} = \mathrm{translate}(\mathrm{half\_eye\_ipd}, 0, 0) * \mathrm{mat4}_{view}$  (4)
  • Among them, $\mathrm{mat4}_{view}$ represents the camera matrix, which can be generated directly from the rotation angles of a gyro; left_view_matrix and right_view_matrix are respectively the camera matrices for the left and right eyes; and eye_ipd represents the eye distance parameter;
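  • A minimal NumPy sketch of equations (1) to (4) is given below for illustration; the function names and the identity matrix standing in for $\mathrm{mat4}_{view}$ are assumptions of the example, since in practice $\mathrm{mat4}_{view}$ comes from the gyro rotation angles.

```python
import numpy as np

def translate(x, y, z):
    """4x4 translation matrix of equation (2)."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

def eye_view_matrices(mat4_view, eye_ipd):
    """Equations (1), (3) and (4): offset the shared camera matrix by half the eye distance."""
    half_eye_ipd = eye_ipd / 2.0
    left_view_matrix = translate(-half_eye_ipd, 0.0, 0.0) @ mat4_view
    right_view_matrix = translate(half_eye_ipd, 0.0, 0.0) @ mat4_view
    return left_view_matrix, right_view_matrix

# Identity stands in for the gyro-derived camera matrix in this example.
left_view_matrix, right_view_matrix = eye_view_matrices(np.eye(4), eye_ipd=0.064)
```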
  • 2) the projection matrix mat4projection in binocular mode is calculated by using equation (5):
  • $\mathrm{mat4}_{projection} = \begin{bmatrix} \dfrac{2}{\tan(fov_{left})+\tan(fov_{right})} & 0 & -\dfrac{\tan(fov_{left})-\tan(fov_{right})}{\tan(fov_{left})+\tan(fov_{right})} & 0 \\ 0 & \dfrac{2}{\tan(fov_{up})+\tan(fov_{down})} & -\dfrac{\tan(fov_{up})-\tan(fov_{down})}{\tan(fov_{up})+\tan(fov_{down})} & 0 \\ 0 & 0 & \dfrac{far}{near-far} & \dfrac{far \cdot near}{near-far} \\ 0 & 0 & -1 & 0 \end{bmatrix}$  (5)
  • where $fov_{left}$, $fov_{right}$, $fov_{up}$, $fov_{down}$, $far$ and $near$ represent the parameters relevant to field of view in binocular mode.
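  • The following sketch builds the asymmetric-FOV projection matrix in the layout of equation (5); the near and far values are arbitrary example numbers, and the angles are assumed to be in radians.

```python
import numpy as np

def fov_projection(fov_left, fov_right, fov_up, fov_down, near, far):
    """Projection matrix following the layout of equation (5); all angles in radians."""
    tl, tr = np.tan(fov_left), np.tan(fov_right)
    tu, td = np.tan(fov_up), np.tan(fov_down)
    m = np.zeros((4, 4))
    m[0, 0] = 2.0 / (tl + tr)
    m[0, 2] = -(tl - tr) / (tl + tr)
    m[1, 1] = 2.0 / (tu + td)
    m[1, 2] = -(tu - td) / (tu + td)
    m[2, 2] = far / (near - far)
    m[2, 3] = far * near / (near - far)
    m[3, 2] = -1.0
    return m

mat4_projection = fov_projection(*([np.radians(45.0)] * 4), near=0.1, far=100.0)
```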
  • 3) the model matrix $\mathrm{mat4}_{model}$ is set to be an identity matrix;
  • 4) the first coordinate data $P^{MVP}_{x,y,z}$ is calculated by using equation (6):

  • $P^{MVP}_{x,y,z} = \mathrm{mat4}_{model} * \mathrm{mat4}_{view} * \mathrm{mat4}_{projection} * P^{original}_{x,y,z}$  (6)
  • $P^{MVP}_{x,y,z}$ represents the first coordinate data, $P^{original}_{x,y,z}$ represents the original coordinate data, $\mathrm{mat4}_{model}$ represents the model matrix, and $\mathrm{mat4}_{projection}$ represents the projection matrix; the camera matrices left_view_matrix and right_view_matrix for the left and right eyes are respectively substituted into equation (6) in place of $\mathrm{mat4}_{view}$ to obtain the first coordinate data $P^{MVP}_{x,y,z}$.
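  • For illustration, a sketch of equation (6) applied to an array of original coordinates is shown below; the multiplication order is kept exactly as written in the text, the demo points are random stand-ins for the sphere vertices, and identity matrices stand in for the model, camera and projection matrices.

```python
import numpy as np

def apply_mvp(points_xyz, mat4_model, view_matrix, mat4_projection):
    """Equation (6): transform each original coordinate into first coordinate data."""
    pts = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])   # homogeneous coordinates
    mvp = mat4_model @ view_matrix @ mat4_projection               # order exactly as written in (6)
    return pts @ mvp.T                                             # homogeneous result per point

# The left and right camera matrices are substituted for mat4_view in turn,
# giving one set of first coordinate data per eye.
demo_points = np.random.rand(8, 3)
first_coords_left = apply_mvp(demo_points, np.eye(4), np.eye(4), np.eye(4))
```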
  • The above step S400 can refer to the calculation steps in the following example.
  • TABLE 2
    variable: meaning
    $(x_d, y_d)$: distorted image field coordinates after lens projection
    $(x_u, y_u)$: corrected image field coordinates (i.e., using an ideal pinhole camera)
    $(x_c, y_c)$: distortion center coordinates (i.e., the center position of lens distortion according to the disclosure)
    $K_n$: the nth radial distortion coefficient
    $P_n$: the nth tangential distortion coefficient
    $r$: the distance from a pixel to the optical axis
  • 1) the distortion parameters are obtained based on the specification of the lenses of the head-up display device:
  • $K_1, K_2$  (7)
  • together with the auxiliary equation (8), the distortion parameters and the anti-distortion parameters are obtained,
  • $f(K_1, K_2) = \begin{cases} K_1^{-1} = -K_1 \\ K_2^{-1} = 3K_1^2 - K_2 \end{cases}$  (8)
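  • A one-line sketch of the auxiliary equation (8) follows; the example coefficient values are hypothetical, since real values depend on the lens specification.

```python
def inverse_distortion(k1, k2):
    """Auxiliary equation (8): approximate anti-distortion coefficients from K1 and K2."""
    k1_inv = -k1
    k2_inv = 3.0 * k1 ** 2 - k2
    return k1_inv, k2_inv

K1, K2 = 0.22, 0.24                    # hypothetical lens distortion parameters
K1_inv, K2_inv = inverse_distortion(K1, K2)
```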
  • 2) lens distortion is performed based on the Brown model.

  • $x_u = x_d + (x_d - x_c)(K_1 r^2 + K_2 r^4 + \cdots) + \big(P_1(r^2 + 2(x_d - x_c)^2) + 2P_2(x_d - x_c)(y_d - y_c)\big)\big(1 + P_3 r^2 + P_4 r^4 + \cdots\big)$  (9)

  • $y_u = y_d + (y_d - y_c)(K_1 r^2 + K_2 r^4 + \cdots) + \big(2P_1(x_d - x_c)(y_d - y_c) + P_2(r^2 + 2(y_d - y_c)^2)\big)\big(1 + P_3 r^2 + P_4 r^4 + \cdots\big)$  (10)
  • When tangential distortion correction is not performed, all terms containing $P_n$ can be removed.
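  • The sketch below applies equations (9) and (10) with the tangential terms dropped, i.e. radial-only correction about the distortion center; the coefficient values and the sample coordinate are illustrative assumptions.

```python
import numpy as np

def brown_radial(xd, yd, xc, yc, k1, k2):
    """Equations (9)-(10) without the tangential (P) terms: radial-only correction."""
    dx, dy = xd - xc, yd - yc
    r2 = dx * dx + dy * dy            # squared distance from the distortion center
    scale = k1 * r2 + k2 * r2 * r2    # K1*r^2 + K2*r^4
    xu = xd + dx * scale
    yu = yd + dy * scale
    return xu, yu

# Correct one distorted coordinate about a distortion center at the origin.
xu, yu = brown_radial(0.5, 0.3, 0.0, 0.0, k1=0.22, k2=0.24)
```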
  • The coordinate of the center position of lens distortion can be solved by the following equations.
  • For two coordinate points $(x_l, y_l)$ and $(x_h, y_h)$ in a plane, linear interpolation between the two can be performed in accordance with t by using the lerp function:
  • $\mathrm{lerp}(t, x_l, x_h, y_l, y_h) = y_l + (t - x_l)\,\dfrac{y_h - y_l}{x_h - x_l}$  (11)
  • By using the following equations, the coordinate $(x_{center}^{window\_pixel}, y_{center}^{window\_pixel})$ of the center position of lens distortion can be solved according to the projection matrix $\mathrm{mat4}_{projection}$ and the screen size $width_{window} \times height_{window}$, where the coordinate $(x_{center}^{normal}, y_{center}^{normal})$ is a point in the normalized coordinate space of $[-1, 1]$.

  • $(x_{center}^{normal},\, y_{center}^{normal}) = \mathrm{mat4}_{projection} * [0\;\; 0\; {-1}\;\; 0]$
  • $x_{center}^{window\_pixel} = \mathrm{lerp}(x_{center}^{normal}, -1, 1, 0, width_{window})$
  • $y_{center}^{window\_pixel} = \mathrm{lerp}(y_{center}^{normal}, -1, 1, 0, height_{window})$  (12)
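  • A compact sketch of equations (11) and (12) follows; it assumes the projection matrix is available as a 4x4 NumPy array, for example the one built in the earlier sketch, and uses an identity matrix as a stand-in.

```python
import numpy as np

def lerp(t, xl, xh, yl, yh):
    """Equation (11): linearly map t from the range [xl, xh] to the range [yl, yh]."""
    return yl + (t - xl) * (yh - yl) / (xh - xl)

def distortion_center_pixels(mat4_projection, width_window, height_window):
    """Equation (12): transform [0, 0, -1, 0] by the projection matrix and map to window pixels."""
    center = mat4_projection @ np.array([0.0, 0.0, -1.0, 0.0])
    x_pix = lerp(center[0], -1.0, 1.0, 0.0, width_window)
    y_pix = lerp(center[1], -1.0, 1.0, 0.0, height_window)
    return x_pix, y_pix

x_c, y_c = distortion_center_pixels(np.eye(4), 2560, 1440)   # identity as a stand-in projection
```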
  • The steps of data smoothing and corner prediction in the above embodiments can refer to the following description.
  • TABLE 3
    variable: meaning
    $\theta_t$: fusion rotation angle based on time t
    $k$: fusion weight constant
    $\omega$: angular velocity read by an accelerometer (the accelerometer is provided on the head-up display device)
    $\phi$: angle read from the gyroscope
    $\Delta t$: difference between the output time moment and the previous time moment
  • The equation for data smoothing is:

  • $\theta_{t+1} = k(\theta_t + \omega \Delta t) + (1 - k)\phi$  (13)
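  • A minimal sketch of equation (13) is given below; the fusion weight value is an assumption made for the example, not a value prescribed by the disclosure.

```python
def smooth_angle(theta_t, omega, dt, phi, k=0.98):
    """Equation (13): fuse the integrated angular rate with the angle read from the sensor."""
    return k * (theta_t + omega * dt) + (1.0 - k) * phi

# theta_t: previous fused angle, omega: angular velocity reading,
# dt: time since the previous output, phi: angle read from the sensor.
theta_next = smooth_angle(theta_t=10.0, omega=2.0, dt=0.016, phi=10.5)
```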
  • TABLE 4
    variable: meaning
    $\theta_t$: fusion rotation angle based on time t
    angularSpeed: angular velocity read by an accelerometer (for example, the accelerometer is provided on the head-up display device)
    predictionTimeS: prediction time (a constant)
    $\beta$: threshold value for rotation prediction
  • The equations for corner prediction are:
  • $\theta_\Delta = \begin{cases} \mathrm{angularSpeed} \cdot \mathrm{predictionTimeS}, & \mathrm{angularSpeed} \geq \beta \\ 0, & \mathrm{angularSpeed} \in [0, \beta] \\ \mathrm{null}, & \text{otherwise} \end{cases}$  (14)
  • $\theta_{t+1} = \theta_t + \theta_\Delta$  (15)
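  • The sketch below implements equations (14) and (15); reading the "null" branch as "leave the angle unchanged" is an interpretation made for the example, as are the sample threshold and prediction-time values.

```python
def predict_angle(theta_t, angular_speed, prediction_time_s, beta):
    """Equations (14)-(15): extrapolate the fused angle when the angular speed reaches the threshold."""
    if angular_speed >= beta:
        theta_delta = angular_speed * prediction_time_s
    elif 0.0 <= angular_speed < beta:
        theta_delta = 0.0
    else:
        return theta_t                 # "null" case: no prediction is applied
    return theta_t + theta_delta

theta_next = predict_angle(theta_t=10.0, angular_speed=3.0, prediction_time_s=0.05, beta=1.0)
```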
  • FIG. 7 is a schematic diagram of a system for real-time rendering displaying virtual reality (VR) using head-up display devices, according to an embodiment of the present disclosure.
  • The system includes a parameter calculating unit 701, a model building unit 702, a coordinate calculating unit 703, a lens distortion unit 704, a rasterization unit 705, and an image drawing unit 706.
  • The parameter calculating unit 701 is configured to obtain relevant parameters including a camera matrix, a projection matrix, a model matrix and a center position of lens distortion.
  • The model building unit 702 is configured to create a 3D model and obtain an original coordinate data of the 3D model. For example, the 3D model can be created based on WebGL and initialized to obtain UV coordinates.
  • The coordinate calculating unit 703 is configured to obtain first coordinate data according to the relevant parameters and the original coordinate data of the 3D model. The first coordinate data is obtained by performing calculation based on the relevant parameters and the original coordinate data of the 3D model.
  • The lens distortion unit 704 is configured to perform lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data. That is, the first coordinate data is distorted according to the center positions of the left and right lenses to obtain the second coordinate data.
  • The rasterization unit 705 is configured to rasterize the second coordinate data to obtain pixel information.
  • The image drawing unit 706 is configured to draw an image based on the VR video data and the pixel information.
  • According to the embodiment of the present disclosure, the binocular-mode VR immersive viewing effect is achieved by performing lens distortion on the coordinate data of the 3D model. Because the lens distortion is performed on the coordinate data of the 3D model itself, video rendering and immersive rendering can be completed in a single processing pass, thereby improving rendering efficiency.
  • Although the embodiments of the present disclosure have been described above with reference to the preferred embodiments, they are not intended to limit the claims. Modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the present disclosure. Therefore, the protection scope of the present disclosure should be determined by the scope of the claims of the present disclosure.
  • The foregoing descriptions of specific embodiments of the present disclosure have been presented, but are not intended to limit the disclosure to the precise forms disclosed. It will be readily apparent to one skilled in the art that many modifications and changes may be made in the present disclosure. Any modifications, equivalents, or variations of the preferred embodiments can be made without departing from the doctrine and spirit of the present disclosure.

Claims (12)

1. A method for real-time rendering displaying virtual reality (VR) using head-up display devices, comprising:
obtaining relevant parameters including a camera matrix, a projection matrix, a model matrix and a center position of lens distortion;
creating a 3D model and obtaining an original coordinate data of the 3D model;
obtaining first coordinate data in accordance with the relevant parameters and the original coordinate data of the 3D model;
performing lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data;
rasterizing the second coordinate data to obtain pixel information; and
drawing an image in accordance with a VR video data and the pixel information.
2. The method according to claim 1, wherein the step of obtaining relevant parameters comprises:
obtaining parameters relevant to field of view in accordance with specification of a head-up display device and a screen size;
calculating the center position of lens distortion in accordance with the parameters relevant to field of view; and
calculating the projection matrix in accordance with the parameters relevant to field of view.
3. The method according to claim 1, wherein the step of obtaining relevant parameters comprises:
obtaining an eye distance parameter based on specification of a head-up display device; and
calculating the camera matrix in accordance with the eye distance parameter.
4. The method according to claim 1, wherein the camera matrix and the projection matrix are adjusted to achieve binocular-mode viewing effects.
5. The method according to claim 1, wherein the step of obtaining first coordinate data in accordance with the relevant parameters and the original coordinate data of the 3D model comprising:
calculating out camera matrices for left and right eyes in binocular mode by equations (1) to (4):
$\mathrm{half\_eye\_ipd} = \dfrac{\mathrm{eye\_ipd}}{2}$  (1)
$\mathrm{translate}(X, Y, Z) = \begin{bmatrix} 1 & 0 & 0 & X \\ 0 & 1 & 0 & Y \\ 0 & 0 & 1 & Z \\ 0 & 0 & 0 & 1 \end{bmatrix}$  (2)
$\mathrm{left\_view\_matrix} = \mathrm{translate}(-\mathrm{half\_eye\_ipd}, 0, 0) * \mathrm{mat4}_{view}$  (3)
$\mathrm{right\_view\_matrix} = \mathrm{translate}(\mathrm{half\_eye\_ipd}, 0, 0) * \mathrm{mat4}_{view}$  (4)
wherein, left_view_matrix and right_view_matrix represent respectively a camera matrix for left eye and a camera matrix for right eye, mat4_view is the camera matrix which can be generated directly in accordance with rotation angles of a gyro, and eye_ipd represents the eye distance parameter;
calculating out the projection matrix mat4projection in binocular mode by equation (5),
$\mathrm{mat4}_{projection} = \begin{bmatrix} \dfrac{2}{\tan(fov_{left})+\tan(fov_{right})} & 0 & -\dfrac{\tan(fov_{left})-\tan(fov_{right})}{\tan(fov_{left})+\tan(fov_{right})} & 0 \\ 0 & \dfrac{2}{\tan(fov_{up})+\tan(fov_{down})} & -\dfrac{\tan(fov_{up})-\tan(fov_{down})}{\tan(fov_{up})+\tan(fov_{down})} & 0 \\ 0 & 0 & \dfrac{far}{near-far} & \dfrac{far \cdot near}{near-far} \\ 0 & 0 & -1 & 0 \end{bmatrix}$  (5)
wherein $fov_{left}$, $fov_{right}$, $fov_{up}$, $fov_{down}$, $far$ and $near$ represent the parameters relevant to field of view;
setting mat4model to be an identity matrix;
calculating out the first coordinate data $P^{MVP}_{x,y,z}$ by equation (6),
$P^{MVP}_{x,y,z} = \mathrm{mat4}_{model} * \mathrm{mat4}_{view} * \mathrm{mat4}_{projection} * P^{original}_{x,y,z}$  (6)
wherein $P^{MVP}_{x,y,z}$ represents the first coordinate data, $P^{original}_{x,y,z}$ represents the original coordinate data, $\mathrm{mat4}_{model}$ represents the model matrix, and $\mathrm{mat4}_{projection}$ represents the projection matrix; the camera matrices left_view_matrix and right_view_matrix for the left and right eyes are respectively substituted into equation (6) in place of $\mathrm{mat4}_{view}$ to obtain the first coordinate data $P^{MVP}_{x,y,z}$.
6. The method according to claim 1, wherein the step of performing lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data comprises:
obtaining distortion parameters in accordance with following equations (7) and (8):
$K_1, K_2$  (7)
$f(K_1, K_2) = \begin{cases} K_1^{-1} = -K_1 \\ K_2^{-1} = 3K_1^2 - K_2 \end{cases}$  (8)
obtaining corrected image field coordinates $(x_u, y_u)$ as the second coordinate data in accordance with the distortion parameters by using equations (9) and (10), in which all terms containing $P_n$ can be removed when tangential distortion correction is not performed,

$x_u = x_d + (x_d - x_c)(K_1 r^2 + K_2 r^4 + \cdots) + \big(P_1(r^2 + 2(x_d - x_c)^2) + 2P_2(x_d - x_c)(y_d - y_c)\big)\big(1 + P_3 r^2 + P_4 r^4 + \cdots\big)$  (9)

$y_u = y_d + (y_d - y_c)(K_1 r^2 + K_2 r^4 + \cdots) + \big(2P_1(x_d - x_c)(y_d - y_c) + P_2(r^2 + 2(y_d - y_c)^2)\big)\big(1 + P_3 r^2 + P_4 r^4 + \cdots\big)$  (10)
wherein $(x_d, y_d)$ is the distorted image field coordinates after lens projection, i.e., the first coordinate data, $(x_u, y_u)$ is the corrected image field coordinates, $(x_c, y_c)$ is the center position of lens distortion, $K_n$ is a nth radial distortion coefficient, $P_n$ is a nth tangential distortion coefficient, and $r$ is a distance from pixels to the optical axis.
8. The method according to claim 1, wherein the coordinate of the center position of lens distortion is obtained by the following steps,
performing linear interpolation between two vectors based on t using following equation:
$\mathrm{lerp}(t, x_l, x_h, y_l, y_h) = y_l + (t - x_l)\,\dfrac{y_h - y_l}{x_h - x_l}$  (11)
wherein, (xl,yl) and (xh,yh) are two coordinate points in a plane;
calculating the coordinate $(x_{center}^{window\_pixel}, y_{center}^{window\_pixel})$ of the center position of lens distortion according to the projection matrix $\mathrm{mat4}_{projection}$ and the screen size $width_{window} \times height_{window}$ by using the following equations:

$(x_{center}^{normal},\, y_{center}^{normal}) = \mathrm{mat4}_{projection} * [0\;\; 0\; {-1}\;\; 0]$
$x_{center}^{window\_pixel} = \mathrm{lerp}(x_{center}^{normal}, -1, 1, 0, width_{window})$
$y_{center}^{window\_pixel} = \mathrm{lerp}(y_{center}^{normal}, -1, 1, 0, height_{window})$  (12)
wherein the coordinate $(x_{center}^{normal}, y_{center}^{normal})$ is a point in the normalized coordinate space of $[-1, 1]$.
8. The method according to claim 1, further comprising: adding a blackout mask.
9. The method according to claim 1, further comprising: acquiring real-time data from a gyroscope, and performing data smoothing and corner prediction while the VR video data is played to achieve anti-shake.
10. The method according to claim 1, wherein the equation used for performing data smoothing is

$\theta_{t+1} = k(\theta_t + \omega \Delta t) + (1 - k)\phi$  (13)
where $\theta_t$ is a fusion rotation angle based on time t, $k$ is a fusion weight constant, $\omega$ is an angular velocity read by an accelerometer, $\phi$ is an angle read from the gyroscope, and $\Delta t$ is a difference between an output time moment and its previous time moment;
equations used for corner prediction are:
$\theta_\Delta = \begin{cases} \mathrm{angularSpeed} \cdot \mathrm{predictionTimeS}, & \mathrm{angularSpeed} \geq \beta \\ 0, & \mathrm{angularSpeed} \in [0, \beta] \\ \mathrm{null}, & \text{otherwise} \end{cases}$  (14)
$\theta_{t+1} = \theta_t + \theta_\Delta$  (15)
wherein $\theta_t$ is a fusion rotation angle based on time t, angularSpeed is an angular velocity read by the accelerometer, predictionTimeS is a prediction time constant, and $\beta$ is a rotation prediction threshold; the gyroscope and the accelerometer are provided on a head-up display device.
11. The method according to claim 1, further comprising: using relevant interfaces provided by OpenGL and WebGL to complete corresponding steps.
12. A system for real-time rendering displaying virtual reality (VR) using head-up display devices, comprising:
a parameter calculating unit configured to obtain relevant parameters including a camera matrix, a projection matrix, a model matrix and a center position of lens distortion;
a model building unit configured to create a 3D model and obtain original coordinate data of the 3D model;
a coordinate calculating unit configured to obtain first coordinate data in accordance with the relevant parameters and the original coordinate data of the 3D model;
a lens distortion unit configured to perform lens distortion on the first coordinate data based on the center position of lens distortion to obtain second coordinate data;
a rasterization unit configured to rasterize the second coordinate data to obtain pixel information;
an image drawing unit configured to draw an image based on a VR video data and the pixel information.
US15/860,471 2017-01-03 2018-01-02 Method and System for Real-time Rendering Displaying Virtual Reality (VR) On Mobile Using Head-Up Display Devices Abandoned US20180192022A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/860,471 US20180192022A1 (en) 2017-01-03 2018-01-02 Method and System for Real-time Rendering Displaying Virtual Reality (VR) On Mobile Using Head-Up Display Devices

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762441936P 2017-01-03 2017-01-03
US15/860,471 US20180192022A1 (en) 2017-01-03 2018-01-02 Method and System for Real-time Rendering Displaying Virtual Reality (VR) On Mobile Using Head-Up Display Devices

Publications (1)

Publication Number Publication Date
US20180192022A1 true US20180192022A1 (en) 2018-07-05

Family

ID=62711388

Family Applications (6)

Application Number Title Priority Date Filing Date
US15/860,471 Abandoned US20180192022A1 (en) 2017-01-03 2018-01-02 Method and System for Real-time Rendering Displaying Virtual Reality (VR) On Mobile Using Head-Up Display Devices
US15/860,449 Expired - Fee Related US10334238B2 (en) 2017-01-03 2018-01-02 Method and system for real-time rendering displaying high resolution virtual reality (VR) video
US15/860,392 Abandoned US20180192044A1 (en) 2017-01-03 2018-01-02 Method and System for Providing A Viewport Division Scheme for Virtual Reality (VR) Video Streaming
US15/860,430 Abandoned US20180191868A1 (en) 2017-01-03 2018-01-02 Method and System for Downloading Multiple Resolutions Bitrate for Virtual Reality (VR) Video Streaming Optimization
US15/860,358 Abandoned US20180192063A1 (en) 2017-01-03 2018-01-02 Method and System for Virtual Reality (VR) Video Transcode By Extracting Residual From Different Resolutions
US15/860,494 Abandoned US20180189980A1 (en) 2017-01-03 2018-01-02 Method and System for Providing Virtual Reality (VR) Video Transcoding and Broadcasting

Family Applications After (5)

Application Number Title Priority Date Filing Date
US15/860,449 Expired - Fee Related US10334238B2 (en) 2017-01-03 2018-01-02 Method and system for real-time rendering displaying high resolution virtual reality (VR) video
US15/860,392 Abandoned US20180192044A1 (en) 2017-01-03 2018-01-02 Method and System for Providing A Viewport Division Scheme for Virtual Reality (VR) Video Streaming
US15/860,430 Abandoned US20180191868A1 (en) 2017-01-03 2018-01-02 Method and System for Downloading Multiple Resolutions Bitrate for Virtual Reality (VR) Video Streaming Optimization
US15/860,358 Abandoned US20180192063A1 (en) 2017-01-03 2018-01-02 Method and System for Virtual Reality (VR) Video Transcode By Extracting Residual From Different Resolutions
US15/860,494 Abandoned US20180189980A1 (en) 2017-01-03 2018-01-02 Method and System for Providing Virtual Reality (VR) Video Transcoding and Broadcasting

Country Status (2)

Country Link
US (6) US20180192022A1 (en)
CN (6) CN108366293A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110336994A (en) * 2019-07-04 2019-10-15 上海索倍信息科技有限公司 A kind of naked eye 3D display system
US11436787B2 (en) * 2018-03-27 2022-09-06 Beijing Boe Optoelectronics Technology Co., Ltd. Rendering method, computer product and display apparatus
US20240031676A1 (en) * 2021-12-02 2024-01-25 Fotonation Limited Method And System For Camera Motion Blur Reduction

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10291910B2 (en) * 2016-02-12 2019-05-14 Gopro, Inc. Systems and methods for spatially adaptive video encoding
US10331862B2 (en) * 2017-04-20 2019-06-25 Cisco Technology, Inc. Viewport decryption
US11232532B2 (en) * 2018-05-30 2022-01-25 Sony Interactive Entertainment LLC Multi-server cloud virtual reality (VR) streaming
US10623791B2 (en) 2018-06-01 2020-04-14 At&T Intellectual Property I, L.P. Field of view prediction in live panoramic video streaming
US10812774B2 (en) 2018-06-06 2020-10-20 At&T Intellectual Property I, L.P. Methods and devices for adapting the rate of video content streaming
US10616621B2 (en) * 2018-06-29 2020-04-07 At&T Intellectual Property I, L.P. Methods and devices for determining multipath routing for panoramic video content
US11019361B2 (en) 2018-08-13 2021-05-25 At&T Intellectual Property I, L.P. Methods, systems and devices for adjusting panoramic view of a camera for capturing video content
CN109343518B (en) * 2018-09-03 2021-07-02 浙江大丰实业股份有限公司 On-spot drive platform of universal ride
US11128869B1 (en) * 2018-10-22 2021-09-21 Bitmovin, Inc. Video encoding based on customized bitrate table
CN109375369B (en) * 2018-11-23 2021-05-18 国网天津市电力公司 Distortion preprocessing method in VR (virtual reality) large-screen cinema mode
CN111510777B (en) * 2019-01-30 2021-11-23 上海哔哩哔哩科技有限公司 Method and device for measuring network speed, computer equipment and readable storage medium
CN111669666A (en) * 2019-03-08 2020-09-15 北京京东尚科信息技术有限公司 Method, device and system for simulating reality
CN111866485A (en) * 2019-04-25 2020-10-30 中国移动通信有限公司研究院 Stereoscopic picture projection and transmission method, device and computer readable storage medium
CN110381331A (en) * 2019-07-23 2019-10-25 深圳市道通智能航空技术有限公司 A kind of image processing method, device, equipment of taking photo by plane and storage medium
CN110490962B (en) * 2019-08-20 2023-09-15 武汉邦拓信息科技有限公司 Remote rendering method based on video stream
CN110544425A (en) * 2019-09-13 2019-12-06 广州城市职业学院 ancient building VR display system
CN111489428B (en) * 2020-04-20 2023-06-30 北京字节跳动网络技术有限公司 Image generation method, device, electronic equipment and computer readable storage medium
US11245911B1 (en) * 2020-05-12 2022-02-08 Whirlwind 3D, LLC Video encoder/decoder (codec) for real-time applications and size/b and width reduction
CN111754614B (en) * 2020-06-30 2024-07-02 平安国际智慧城市科技股份有限公司 VR-based video rendering method and device, electronic equipment and storage medium
CN112468806B (en) * 2020-11-12 2022-07-26 中山大学 Panoramic video transmission optimization method for cloud VR platform
CN114286142B (en) * 2021-01-18 2023-03-28 海信视像科技股份有限公司 Virtual reality equipment and VR scene screen capturing method
CN113347402A (en) * 2021-06-28 2021-09-03 筑友建筑装饰装修工程有限公司 Improved method, device and storage medium for rendering immersive content based on Unity
CN114466220A (en) * 2022-01-29 2022-05-10 维沃移动通信有限公司 Video downloading method and electronic equipment
CN115002519A (en) * 2022-05-31 2022-09-02 北京势也网络技术有限公司 Method for playing 8K panoramic video file in low-bandwidth network
CN115396731A (en) * 2022-08-10 2022-11-25 北京势也网络技术有限公司 Panoramic video playing method and device, electronic equipment and readable storage medium
CN116880723B (en) * 2023-09-08 2023-11-17 江西格如灵科技股份有限公司 3D scene display method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104240281A (en) * 2014-08-28 2014-12-24 东华大学 Virtual reality head-mounted device based on Unity3D engine
US20160381256A1 (en) * 2015-06-25 2016-12-29 EchoPixel, Inc. Dynamic Minimally Invasive Surgical-Aware Assistant
US20170289214A1 (en) * 2016-04-04 2017-10-05 Hanwha Techwin Co., Ltd. Method and apparatus for playing media stream on web browser

Family Cites Families (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3196889B2 (en) * 1996-09-05 2001-08-06 株式会社アルファ Three-dimensional image processing method and computer-readable recording medium storing a program for causing a computer to execute the three-dimensional image processing method
TWI262725B (en) * 2005-06-30 2006-09-21 Cheertek Inc Video decoding apparatus and digital audio and video display system capable of controlling presentation of subtitles and method thereof
US8897370B1 (en) * 2009-11-30 2014-11-25 Google Inc. Bitrate video transcoding based on video coding complexity estimation
US8862763B2 (en) * 2011-03-30 2014-10-14 Verizon Patent And Licensing Inc. Downloading video using excess bandwidth
US8907968B2 (en) * 2011-03-31 2014-12-09 Panasonic Corporation Image rendering device, image rendering method, and image rendering program for rendering stereoscopic panoramic images
US8810598B2 (en) * 2011-04-08 2014-08-19 Nant Holdings Ip, Llc Interference based augmented reality hosting platforms
GB2501929B (en) * 2012-05-11 2015-06-24 Sony Comp Entertainment Europe Apparatus and method for augmented reality
WO2014025319A1 (en) * 2012-08-08 2014-02-13 National University Of Singapore System and method for enabling user control of live video stream(s)
US9129429B2 (en) * 2012-10-24 2015-09-08 Exelis, Inc. Augmented reality on wireless mobile devices
GB2509953B (en) * 2013-01-18 2015-05-20 Canon Kk Method of displaying a region of interest in a video stream
US9196199B2 (en) * 2013-02-12 2015-11-24 Pixtronix, Inc. Display having staggered display element arrangement
CN103702139B (en) * 2013-12-13 2017-02-01 华中科技大学 Video-on-demand system based on scalable coding under mobile environment
US9398250B2 (en) * 2014-01-06 2016-07-19 Arun Sobti & Associates, Llc System and apparatus for smart devices based conferencing
CN105025351B (en) * 2014-04-30 2018-06-29 深圳Tcl新技术有限公司 The method and device of DST PLAYER buffering
JP6337614B2 (en) * 2014-05-23 2018-06-06 セイコーエプソン株式会社 Control device, robot, and control method
US20150346812A1 (en) * 2014-05-29 2015-12-03 Nextvr Inc. Methods and apparatus for receiving content and/or playing back content
CN104268922B (en) * 2014-09-03 2017-06-06 广州博冠信息科技有限公司 A kind of image rendering method and image rendering device
US10812546B2 (en) * 2014-12-24 2020-10-20 Intel IP Corporation Link-aware streaming adaptation
CN104616243B (en) * 2015-01-20 2018-02-27 北京道和汇通科技发展有限公司 A kind of efficient GPU 3 D videos fusion method for drafting
US20160261908A1 (en) * 2015-03-05 2016-09-08 Htc Corporation Media streaming system and control method thereof
CN104735464A (en) * 2015-03-31 2015-06-24 华为技术有限公司 Panorama video interactive transmission method, server and client end
CN104717507A (en) * 2015-03-31 2015-06-17 北京奇艺世纪科技有限公司 Video transcoding method and device
US10083363B2 (en) * 2015-05-26 2018-09-25 Nbcuniversal Media, Llc System and method for customizing content for a user
US10102666B2 (en) * 2015-06-12 2018-10-16 Google Llc Electronic display stabilization for head mounted display
US10674185B2 (en) * 2015-10-08 2020-06-02 Koninklijke Kpn N.V. Enhancing a region of interest in video frames of a video stream
CN106919248A (en) * 2015-12-26 2017-07-04 华为技术有限公司 It is applied to the content transmission method and equipment of virtual reality
CN105916022A (en) * 2015-12-28 2016-08-31 乐视致新电子科技(天津)有限公司 Video image processing method and apparatus based on virtual reality technology
CN105455285B (en) * 2015-12-31 2019-02-12 北京小鸟看看科技有限公司 A kind of virtual implementing helmet adaptation method
US10313417B2 (en) * 2016-04-18 2019-06-04 Qualcomm Incorporated Methods and systems for auto-zoom based adaptive video streaming
CN105898565A (en) * 2016-04-28 2016-08-24 乐视控股(北京)有限公司 Video processing method and device
US9721393B1 (en) * 2016-04-29 2017-08-01 Immersive Enterprises, LLC Method for processing and delivering virtual reality content to a user
CN106060570B (en) * 2016-06-30 2019-06-14 北京奇艺世纪科技有限公司 A kind of full-view video image plays, coding method and device
CN106060515B (en) * 2016-07-14 2018-11-06 腾讯科技(深圳)有限公司 Panorama pushing method for media files and device
CN106231317A (en) * 2016-09-29 2016-12-14 三星电子(中国)研发中心 Video processing, coding/decoding method and device, VR terminal, audio/video player system
US10595069B2 (en) * 2016-12-05 2020-03-17 Adobe Inc. Prioritizing tile-based virtual reality video streaming using adaptive rate allocation
US20180295375A1 (en) * 2017-04-05 2018-10-11 Lyrical Labs Video Compression Technology, LLC Video processing and encoding
CN107087212B (en) * 2017-05-09 2019-10-29 杭州码全信息科技有限公司 Interactive panoramic video transcoding and playback method and system based on spatial scalable coding


Also Published As

Publication number Publication date
US20180192063A1 (en) 2018-07-05
CN108366272A (en) 2018-08-03
US20180191868A1 (en) 2018-07-05
CN108419142A (en) 2018-08-17
CN108377381A (en) 2018-08-07
CN108391103A (en) 2018-08-10
US20180192044A1 (en) 2018-07-05
CN108419093A (en) 2018-08-17
US20180192026A1 (en) 2018-07-05
CN108366293A (en) 2018-08-03
US20180189980A1 (en) 2018-07-05
US10334238B2 (en) 2019-06-25

Similar Documents

Publication Publication Date Title
US20180192022A1 (en) Method and System for Real-time Rendering Displaying Virtual Reality (VR) On Mobile Using Head-Up Display Devices
US11632537B2 (en) Method and apparatus for obtaining binocular panoramic image, and storage medium
US10861215B2 (en) Asynchronous time and space warp with determination of region of interest
US9241155B2 (en) 3-D rendering for a rotated viewer
US10257492B2 (en) Image encoding and display
US20050219239A1 (en) Method and apparatus for processing three-dimensional images
CN107908278B (en) Virtual reality VR interface generation method and device
JP2008257127A (en) Image display device and image display method
WO2017086244A1 (en) Image processing device, information processing device, and image processing method
US20210382313A1 (en) Image generation appratus, head-mounted display, content processing system, and image display method
CN108153417A (en) Frame compensation method and the head-mounted display apparatus using this method
US11187895B2 (en) Content generation apparatus and method
JP2018147504A (en) Display control method and program for causing computer to execute the display control method
JP7429515B2 (en) Image processing device, head-mounted display, and image display method
US11187914B2 (en) Mirror-based scene cameras
JP2002300612A (en) Image generating device, program, and information storage medium
KR101773929B1 (en) System for processing video with wide viewing angle, methods for transmitting and displaying vide with wide viewing angle and computer programs for the same
US11863902B2 (en) Techniques for enabling high fidelity magnification of video
US20230222754A1 (en) Interactive video playback techniques to enable high fidelity magnification
US20220232201A1 (en) Image generation system and method
JP7365183B2 (en) Image generation device, head mounted display, content processing system, and image display method
WO2024004134A1 (en) Image transmission device and image transmission method
KR102179810B1 (en) Method and program for playing virtual reality image
CN117452637A (en) Head mounted display and image display method

Legal Events

Date Code Title Description
AS Assignment

Owner name: BLACK SAILS TECHNOLOGY INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WANG, ZHUO;TANG, YONGTAO;ZHAO, RUOXI;AND OTHERS;REEL/FRAME:044519/0103

Effective date: 20180102

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION