WO2019033903A1 - Graphics rendering method and apparatus for virtual reality - Google Patents

Graphics rendering method and apparatus for virtual reality

Info

Publication number
WO2019033903A1
WO2019033903A1 (application PCT/CN2018/096858 / CN2018096858W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
resolution
human eye
angle
gaze point
Prior art date
Application number
PCT/CN2018/096858
Other languages
English (en)
French (fr)
Inventor
戴天荣
张信
蔡磊
Original Assignee
歌尔股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 歌尔股份有限公司
Priority to US16/638,734 (US10859840B2)
Publication of WO2019033903A1

Classifications

    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T15/005 - 3D [Three Dimensional] image rendering; general purpose rendering architectures
    • G02B27/017 - Head-up displays; head mounted
    • G02B27/0172 - Head mounted, characterised by optical features
    • G02B27/0093 - Optical systems or apparatus with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G06F3/012 - Head tracking input arrangements for interaction between user and computer
    • G06F3/013 - Eye tracking input arrangements for interaction between user and computer
    • G06T15/205 - Perspective computation; image-based rendering
    • G06T19/006 - Mixed reality
    • G06T3/18 - Image warping, e.g. rearranging pixels individually
    • G06T3/4007 - Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • G06T3/4053 - Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T5/70 - Denoising; smoothing
    • G02B2027/0187 - Display position adjusting means slaved to motion of at least a part of the body of the user, e.g. head, eye
    • G06T2200/04 - Indexing scheme involving 3D image data
    • G06T2207/20182 - Noise reduction or smoothing in the temporal domain; spatio-temporal filtering

Definitions

  • the present application relates to the field of virtual reality technologies, and in particular, to a graphics rendering method and apparatus for virtual reality.
  • the present application provides a graphics rendering method and apparatus for virtual reality, which can reduce the computational cost of graphics rendering and increase the graphics rendering output frame rate.
  • the present application provides a graphics rendering method for virtual reality, including:
  • rendering, according to the acquired spatial position information and direction information of the head-mounted virtual reality device, a first image corresponding to the spatial position information and direction information at a first angular resolution, where the first angular resolution is smaller than the angular resolution of the display screen, the first image is an overall image corresponding to the field of view of the virtual reality device, and the angular resolution is the number of pixels per degree of field angle within the field of view;
  • rendering, according to the acquired position information of the human eye gaze point on the display screen, a second image corresponding to the gaze point position at a second angular resolution, where the second angular resolution is equal to the angular resolution of the display screen and the second image is a partial image around the gaze point position;
  • the first image and the second image are combined into a third image.
  • the first angular resolution is obtained by multiplying the angular resolution of the display screen by a preset percentage, and the preset percentage is determined according to the balance point between the required rendering quality and the rendering workload.
  • the second angular resolution is the display-screen angular resolution of the virtual reality device, and the resolution of the partial image is obtained by multiplying the preset horizontal and vertical field angles of the partial image by the second angular resolution.
  • the preset horizontal and vertical field angles of the partial image are determined according to the image range over which the gaze-point rendering effect is required.
  • generating, according to the acquired position information of the human eye gaze point on the display screen, the second image corresponding to the gaze point position at the second angular resolution includes:
  • determining the gaze direction information according to the gaze point position information combined with the direction information of the head-mounted virtual reality device;
  • generating, at the second angular resolution, a second image corresponding to the gaze point position information and the gaze direction information.
  • synthesizing the first image and the second image into a third image comprises:
  • reconstructing the resolution of the first image to the same resolution as the display screen by interpolation; overlaying the second image, according to the gaze point position information, onto the corresponding position in the reconstructed first image to obtain the third image; and
  • performing smooth fusion processing on the splicing boundary of the third image.
  • reconstructing the low resolution of the first image to the same high resolution as the display screen includes:
  • performing a color-space transform on the low-resolution first image to obtain its YCbCr image, where Y is the nonlinear luma component, Cb is the blue-difference chroma component and Cr is the red-difference chroma component, and reconstructing the Cb and Cr components by interpolation;
  • constructing a database for training, namely high-resolution image patches Xh and low-resolution image patches Xl, combined into a database X; generating a dictionary D from X by sparse coding and splitting it into a high-resolution dictionary Dh and a low-resolution dictionary Dl; solving for the sparse coefficients using Dl and the feature image of the 2x-upsampled low-resolution first image; and solving, from the sparse coefficients and Dh, for the 3x-upsampled image, i.e. the Y component;
  • combining Y, Cb and Cr into a YCbCr image, converting it into an RGB image and storing it, which yields the reconstructed high-resolution first image.
  • performing smooth fusion processing on the splicing boundary of the third image includes:
  • smoothing the YUV color-coded data at the splicing boundary into the YUV color-coded data of the whole image with the weighted smoothing formula Y = Y1*(1-d) + Y2*d, where Y1 and Y2 are the values of the YUV color-coded data of the adjacent images at the splicing boundary, Y is the value of the overlapped YUV color-coded data at the splicing boundary, and d is the weight;
  • the YUV color-coded data away from the splicing boundary are copied directly into the YUV color-coded data of the whole image for transition processing.
  • the application also provides a virtual reality graphics rendering device, including:
  • a first image generating module configured to render, according to the acquired spatial position information and direction information of the head-mounted virtual reality device, a first image corresponding to the spatial position information and direction information at a first angular resolution,
  • where the first angular resolution is smaller than the angular resolution of the display screen, the first image is an overall image corresponding to the field of view of the virtual reality device, and the angular resolution is the number of pixels per degree of field angle within the field of view;
  • a second image generating module configured to render, according to the acquired position information of the human eye gaze point on the display screen, a second image corresponding to the gaze point position at a second angular resolution, the second angular resolution being equal to the angular resolution of the display screen and the second image being a partial image around the gaze point position;
  • a third image generating module configured to synthesize the first image and the second image into a third image.
  • the first angular resolution is obtained by multiplying the angular resolution of the display screen by a preset percentage, and the preset percentage is determined according to the balance point between the required rendering quality and the rendering workload.
  • the second angular resolution is the display-screen angular resolution of the virtual reality device, and the resolution of the partial image is obtained by multiplying the preset horizontal and vertical field angles of the partial image by the second angular resolution.
  • the preset horizontal and vertical field angles of the partial image are determined according to the image range over which the gaze-point rendering effect is required.
  • the present application further provides a graphics rendering apparatus for virtual reality, comprising a graphics processor and a memory, wherein the memory is configured to store a program supporting the above graphics rendering apparatus in executing the above graphics rendering method,
  • and the graphics processor is configured to execute the program stored in the memory.
  • the program includes one or more computer instructions, wherein the one or more computer instructions are to be invoked and executed by the graphics processor.
  • the present application further provides a computer storage medium for storing the computer software instructions used by the above graphics rendering apparatus, the computer software instructions including the program involved in executing the above graphics rendering method by the virtual reality graphics rendering apparatus.
  • According to the embodiments of the present application, a first image corresponding to the acquired spatial position information and direction information of the head-mounted virtual reality device is rendered at a first angular resolution, where the first angular resolution is smaller than the angular resolution of the display screen and the first image is an overall image corresponding to the FOV of the virtual reality device; a second image corresponding to the gaze point position is rendered at a second angular resolution equal to the angular resolution of the display screen, the second image being a partial image around the gaze point position; and the first image and the second image are synthesized into a third image.
  • When the virtual reality picture is rendered by the GPU, the overall image is generated at a lower angular resolution while the partial image around the gaze point is rendered at the same angular resolution as the display to generate a locally sharp image; the overall image is then fused with the locally sharp image into one final image that is sent to the virtual reality display for display, which effectively reduces the computational load of the GPU and improves image rendering efficiency.
  • FIG. 1 is a schematic flowchart of a graphics rendering method for virtual reality according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of the field angles used in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of the image synthesis used in an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a graphics rendering apparatus for virtual reality according to an embodiment of the present application.
  • Virtual reality refers to a way of observing the world in which virtualization technology is added to the user's senses: the real world is simulated by science and technology and the simulation is superimposed onto it to be perceived by the user, achieving a sensory experience beyond reality.
  • The user moves the head wearing the virtual reality (VR) headset; sensors, for example an IMU nine-axis sensor and a spatial position sensor, generate the spatial position and direction information of the head and pass the head position and direction information to the VR game engine; the VR game engine updates the game state and, according to the game state and the latest head position and direction information, submits graphics rendering instructions to the graphics processing unit (GPU) through a graphics application programming interface (API); the GPU completes rendering and outputs two 2D images corresponding to the left and right eyes, which are sent to the VR display for display.
  • Each frame of monocular rendering outputs one 2D image whose resolution and field of view (FOV) are the same as the resolution and FOV of the VR device's display; because the resolution is high and VR also demands high rendering precision, the rendering workload is very large.
  • the technical problem solved by the present application is how to reduce the graphics rendering workload of the graphics processing unit (GPU) of a VR device.
  • the VR device includes a head mounted VR device.
  • FIG. 1 is a schematic flowchart of a graphics rendering method for virtual reality according to an embodiment of the present application, as shown in FIG. 1.
  • spatial location information and direction information of the head mounted VR device can be obtained through spatial positioning technology.
  • 3DOF positioning outputs only the headset direction information (pitch, yaw, roll); in that case a fixed position is used as the headset's spatial position information during rendering.
  • 6DOF positioning outputs spatial position information (x, y, z) in addition to direction information.
  • the spatial positioning technology may be any technology in the prior art that can obtain the spatial location information and the direction information of the head mounted VR device, which is not limited in this application.
  • 3DOF information is obtained by a 9-axis sensor.
  • the spatial position information in 6DOF can be obtained from a laser/infrared-camera-based outside-in scheme or from a computer-vision-based inside-out scheme (SLAM).
  • For example, a camera is mounted on the user's head-mounted VR device so that the images captured by the camera move as the head moves, and the objects, feature points or object edges in the images move accordingly; by analyzing the images of the surrounding objects continuously collected by the camera and extracting the continuously changing position information in them, the spatial position information and direction information of the head-mounted VR device can be obtained.
  • Of the image projected onto the retina of the human eye, only the central portion can be resolved clearly; this portion is usually called the resolving field of view, spanning a field angle of about 8 to 15 degrees.
  • The portion between field angles of 15 and 30 degrees is called the effective field of view: the user can perceive the presence and motion of objects there without turning the head, but resolving power is already reduced.
  • The peripheral portion beyond a field angle of 30 degrees is called the induced field of view: the user can only sense the presence of an object without seeing clearly what it is.
  • Based on these characteristics of the human eye, eye tracking technology is used to obtain the position information of the human eye gaze point on the display screen.
  • For example, the user's current "gaze direction" can be acquired with various electronic/optical detection means: certain eye structures and features whose relative positions remain unchanged while the eyeball rotates serve as references, gaze-change parameters are extracted between the position-varying features and these invariant features, and the gaze direction is then obtained through a geometric model or mapping model.
  • Video tracking systems based on eye-video analysis generally use the pupil-corneal reflection method, obtaining the gaze direction by tracking the relative positions of the pupil center and the corneal reflection.
  • The corneal reflection is a virtual image formed by a light source (usually a near-infrared source) reflecting off the corneal surface, while the pupil in the eye image captured by the camera is a virtual image formed after refraction through the cornea.
  • The captured eye image is processed to extract the pupil center and the spot position information and to derive the planar gaze-direction parameters, and an established mathematical model then converts this planar information into human eye spatial gaze data.
  • the gaze mapping relationship may be preset in the head-mounted VR device.
  • the gaze mapping relationship is the mapping between the human eye spatial gaze data and the coordinates of the left and right pixel pairs of the image display source on the head-mounted VR device (also called the gaze-to-screen coordinate mapping).
  • Obtaining the position information of the human eye gaze point on the display screen specifically works as follows:
  • the gaze tracking system records the human eye spatial gaze data when the user fixates on a target object. Specifically, while the user views the external environment through the head-mounted VR device, the gaze tracking system tracks the changes of the user's gaze in real time; when the user fixates on a target, the system computes the user's spatial gaze data at that moment and, from the transmitted spatial gaze data and the gaze mapping relationship, obtains the coordinate position data of the corresponding image-display-source pixel pair, i.e. the gaze point position information.
  • the first angular resolution adopted in the embodiments of the present application is smaller than the angular resolution of the display screen; the first image is an overall image corresponding to the field of view of the virtual reality device, and the angular resolution is
  • the number of pixels per degree of field angle within the field of view. Assuming the display resolution of the VR device is 1200*1080 per eye, with a horizontal FOV of 100 degrees and a vertical FOV of 90 degrees, the horizontal angular resolution is 1200/100 = 12 pixels/degree and the vertical angular resolution is 1080/90 = 12 pixels/degree.
  • the corresponding rendering-workload threshold may be determined from the output-frame-rate threshold, and the corresponding rendering-quality threshold from the rendering-workload threshold; these two thresholds constitute the balance point between rendering quality and rendering workload.
  • Assuming the first angular resolution is 1/3 of the original resolution, i.e. 400x360, the spatial position and direction information of the VR device, the 100-degree horizontal FOV, the 90-degree vertical FOV and the 400x360 resolution are input to the GPU rendering program, and the GPU generates the first image.
  • The first image is therefore an overall low-resolution image, rendered at normal rendering precision but at a lower resolution, which greatly reduces the rendering workload.
  • The basic movements of the human eye gaze point are fixation and saccade; a fixation generally has to last more than 100 ms for the object to be seen clearly.
  • In an eye tracking system the camera frame rate is usually above 60 Hz, and capturing each frame and running the algorithm usually takes less than 30 ms. Therefore, when determining the FOV of the partial image, usually only the visual interference of the splicing seam within the effective and induced fields of view during fixation is considered, and the rotation speed of the human eye is disregarded.
  • The region of the second image is first determined according to the gaze point position information, the second image being a partial image around the gaze point position; the corresponding second image is then
  • rendered at the second angular resolution according to that region.
  • the second angular resolution is the display-screen angular resolution of the virtual reality device, and the resolution of the partial image is obtained by multiplying the preset horizontal and vertical field angles of the partial image by the second angular resolution;
  • if gaze point rendering is not used, the whole picture is quite sharp, but the computation and power consumption of the GPU are considerable. If gaze point rendering is used, only the region the eyeball fixates is sharp while other regions are relatively blurred, so the GPU workload drops substantially.
  • A sharp picture at the fixation location means that the eye tracking system supplies the gaze point: rendering quality is higher in the partial image around the gaze point and falls off gradually elsewhere, so the horizontal and vertical field angles of the partial image can be determined from the image range over which the gaze-point rendering effect is required.
  • Because the second image is a local part of the overall picture, its rendering FOV is smaller than the original overall FOV, for example 40 degrees horizontally and 36 degrees vertically.
  • New direction information is computed from the gaze point position information and the head direction information.
  • That is, the gaze direction information is determined from the gaze point position information combined with the direction information of the head-mounted virtual reality device;
  • a second image corresponding to the gaze point position information and the gaze direction information is then generated at the second angular resolution from the head position information and the gaze direction information.
  • In typical VR graphics rendering, the head direction information submitted to the GPU is the direction corresponding to the gaze point when the eye looks straight ahead.
  • In that case the gaze point lies at the intersection of the optical axis of the VR lens with the display plane,
  • and the position of this intersection point O on the display plane is known information.
  • Let V1 be the vector from the eye to the intersection O of the optical axis and the display plane (i.e. the head direction information); V1 is perpendicular to the display plane. Let the gaze point be F (known), and let V2 = F - O be the vector from the intersection point O on the display plane to the gaze point F.
  • The vector from the eye to the gaze point F is then V3 = V1 + V2,
  • and V3 is the sought gaze direction information.
  • the specific implementation includes:
  • reconstructing the resolution of the first image to the same resolution as the display screen by interpolation or a similar method, i.e. performing high-resolution reconstruction of the first image;
  • overlaying the second image, according to the gaze point position information, onto the corresponding position in the reconstructed first image to obtain the third image. FIG. 3 is a schematic diagram of the image synthesis used in this embodiment; as shown in FIG. 3, the locally sharp image is the second image and the overall low-definition image is the first image.
  • the splicing boundary of the third image is subjected to smooth fusion processing, for example, the boundary region of the third image is subjected to smooth fusion processing by low-pass filtering or the like.
  • In an optional implementation, the specific process of reconstructing the first image back to its original high resolution by interpolation and related methods includes:
  • performing a color-space transform on the low-resolution first image to obtain its YCbCr image, where Y is the nonlinear luma component, Cb is the blue-difference chroma component and Cr is the red-difference chroma component, and reconstructing the Cb and Cr components by interpolation (the interpolation method may be bicubic interpolation);
  • Xh and Xl are each normalized, spliced into one data set X and normalized again; training is then performed with the Sparse Coding method, and the resulting dictionary D is finally split into Dh and Dl.
  • The corresponding feature images can be solved with filters, and 6x6 sampling yields four groups of sampled data; during sampling, the overlapping portion for lIm corresponding to the given low-resolution color image is overlap, while the overlapping portion of the image patches of fIm's feature image is 2*overlap; these data are then used to obtain the sparse coefficients α of the corresponding image patches.
  • After Xh is solved from α and Dh, the mean of fIm is added back to obtain the final high-resolution image patches; for the boundary, the given low-resolution color image is 3x-upsampled by interpolation to obtain the final boundary image patches;
  • the final high-resolution image patches and boundary image patches are combined into the Y component, taking the average over overlapping patches.
  • Y, Cb and Cr are then combined into a YCbCr image, which is converted into an RGB image and stored, yielding the reconstructed high-resolution first image.
  • The high-resolution reconstruction method of this embodiment can therefore construct an approximate high-resolution image directly from its own low-resolution image, without a high-resolution image library, then build sampling patches and obtain the corresponding training dictionaries; with the dictionary Dl of the low-resolution image trained in this way, the corresponding sparse coefficients are found through sparse representation theory, and the coefficients are finally reused with the dictionary Dh of the high-resolution image to reconstruct the high-resolution image.
  • performing smooth fusion processing on the splicing boundary of the third image specifically includes:
  • smoothing the YUV color-coded data at the splicing boundary of the third image into the YUV color-coded data of the whole image using the weighted smoothing formula Y = Y1*(1-d) + Y2*d, where Y1 and Y2 are the values of the YUV color-coded data of the adjacent images at the splicing boundary, Y is the value of the overlapped YUV color-coded data at the splicing boundary, and d is the weight;
  • the YUV color-coded data away from the splicing boundary are copied directly into the YUV color-coded data of the whole image for transition processing.
  • The technical solution of the present application can implement gaze point rendering outside the GPU program; that is, even when the GPU program does not support gaze point rendering, the output image required for the gaze point rendering described in the
  • embodiments (the third image) can still be produced in a plug-in of the 3D engine.
  • According to the embodiments of the present application, a first image corresponding to the acquired spatial position information and direction information of the head-mounted virtual reality device is rendered at a first angular resolution, where the first angular resolution is smaller than the angular resolution of the display screen and the first image is an overall image corresponding to the FOV of the virtual reality device; a second image corresponding to the gaze point position is rendered at a second angular resolution equal to the angular resolution of the display screen, the second image being a partial image around the gaze point position; and the first image and the second image are synthesized into a third image.
  • When the virtual reality picture is rendered by the GPU, the overall image is generated at a lower angular resolution while the partial image around the gaze point is rendered at the same angular resolution as the display to generate a locally sharp image; the overall image is then fused with the locally sharp image into one final image that is sent to the virtual reality display for display, which effectively reduces the computational load of the GPU and improves image rendering efficiency.
  • FIG. 4 is a schematic structural diagram of a graphics rendering apparatus for virtual reality according to an embodiment of the present application. As shown in FIG. 4, the apparatus includes:
  • a first image generating module configured to render, according to the acquired spatial position information and direction information of the head-mounted virtual reality device, a first image corresponding to the spatial position information and direction information at a first angular resolution, where the first angular resolution is smaller than the angular resolution of the display screen, the first image is an overall image corresponding to the field of view of the virtual reality device, and the angular resolution is the number of pixels per degree of field angle within the field of view;
  • a second image generating module configured to render, according to the acquired position information of the human eye gaze point on the display screen, a second image corresponding to the gaze point position at a second angular resolution, the second angular resolution being equal to the angular resolution of the display screen and the second image being a partial image around the gaze point position;
  • a third image generating module configured to synthesize the first image and the second image into a third image.
  • the first angular resolution is obtained by multiplying the preset percentage by the angular resolution of the display screen, and the preset percentage is determined according to a balance point between the required graphics rendering quality and the graphics rendering computation amount.
  • the second angular resolution is the display-screen angular resolution of the virtual reality device, and the resolution of the partial image is obtained by multiplying the preset horizontal and vertical field angles of the partial image by the second angular resolution.
  • the local image horizontal field angle and the vertical field angle are determined according to the image range of the desired gaze point rendering effect.
  • the second image generating module is specifically configured to:
  • Determining the direction information of the human eye gaze point according to the position information of the human eye gaze point and the direction information of the head mounted virtual reality device;
  • the second image corresponding to the human eye fixation point position information and the human eye fixation point direction information is generated according to the second angle resolution according to the human eye fixation point position information and the human eye fixation point direction information.
  • the third image generating module is specifically configured to:
  • reconstruct the resolution of the first image to the same resolution as the display screen (for the specific implementation, refer to the related description in the embodiment of FIG. 1 above); overlay the second image, according to the gaze point position information, onto the corresponding position in the reconstructed first image to obtain the third image;
  • and perform smooth fusion processing on the splicing boundary of the third image (for the specific implementation, refer to the related description in the embodiment of FIG. 1 above).
  • the apparatus may render, according to the acquired spatial position information and direction information of the head-mounted virtual reality device, a first image corresponding to the spatial position information and direction information at the first angular resolution,
  • where the first angular resolution is smaller than the angular resolution of the display screen, and the first image is an overall image corresponding to the FOV of the virtual reality device;
  • render, according to the acquired position information of the human eye gaze point on the display screen, a second image corresponding to the gaze point position at the second angular resolution, where the second angular resolution is equal to the angular resolution of the display screen and the second image is a partial image around the gaze point position;
  • the first image and the second image are combined into a third image.
  • When the virtual reality picture is rendered by the GPU, the overall image is generated at a lower angular resolution while the partial image around the gaze point is rendered at the same angular resolution as the display to generate a locally sharp image; the overall
  • image is then fused with the locally sharp image into one final image that is sent to the virtual reality display for display, which effectively reduces the computational load of the GPU and improves image rendering efficiency.
  • the structure of the above graphics rendering apparatus for virtual reality includes a graphics processor and a memory, where the memory is configured to store a program supporting the apparatus in executing the graphics rendering method for virtual reality in the embodiment shown in FIG. 1, and the graphics processor is configured to execute the program stored in the memory.
  • the program includes one or more computer instructions, wherein the one or more computer instructions are for execution by the graphics processor.
  • an embodiment of the present application further provides a computer storage medium for storing the computer software instructions used by the graphics rendering apparatus for virtual reality, the computer software instructions including the program involved in
  • executing the above graphics rendering method by the graphics rendering apparatus.
  • embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in that memory produce an article of manufacture including an instruction apparatus,
  • which implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing;
  • the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in a computer readable medium, such as read only memory (ROM) or flash memory.
  • Memory is an example of a computer readable medium.
  • Computer readable media includes both permanent and non-persistent, removable and non-removable media.
  • Information storage can be implemented by any method or technology.
  • the information can be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device.
  • As defined herein, computer readable media do not include transitory media such as modulated data signals and carrier waves.
  • embodiments of the present application can be provided as a method, system, or computer program product.
  • the present application can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment in combination of software and hardware.
  • the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application discloses a graphics rendering method and apparatus for virtual reality. According to the acquired spatial position information and direction information of a head-mounted virtual reality device, a first image corresponding to the spatial position information and direction information is rendered at a first angular resolution; according to the acquired position information of the human eye gaze point on the display screen, a second image corresponding to the gaze point position is rendered at a second angular resolution; and the first image and the second image are combined into a third image. Since the first angular resolution and second angular resolution employed when rendering the virtual reality picture in the present application are relatively low angular resolutions, the computational load of the GPU can be effectively reduced and image rendering efficiency improved.

Description

Graphics rendering method and apparatus for virtual reality
Cross-Reference
This application claims the benefit of Chinese Patent Application No. 201710691256.5, filed on August 14, 2017 and entitled "Graphics rendering method and apparatus for virtual reality", which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of virtual reality technologies, and in particular to a graphics rendering method and apparatus for virtual reality.
Background
Existing virtual reality (VR) devices are required to support ever higher display resolutions, from 1K and 2K to 4K, and in the future even 8K and 16K screen resolutions, in order to eliminate the screen-door effect of the display and make the displayed virtual environment look more realistic. On the other hand, ever higher display resolutions pose an ever greater challenge to the 3D graphics rendering capability of the VR device's graphics processing unit (GPU). When GPU performance is insufficient, a compromise must be made between rendering quality and rendering output frame rate: either the image-quality requirements are lowered or the output frame rate is reduced. For VR applications the output frame rate is a key metric, and an insufficient output frame rate makes users dizzy. Furthermore, even when GPU performance is sufficient, the heavy computational load of rendering causes the VR device to draw a large current and to heat continuously.
This places high demands on the active and passive heat-dissipation design of VR products, so effectively reducing the GPU's rendering workload has become an important direction for solving the above problems.
Summary
To solve the above problems, the present application provides a graphics rendering method and apparatus for virtual reality that can reduce the computational cost of graphics rendering and increase the graphics rendering output frame rate.
The present application provides a graphics rendering method for virtual reality, including:
rendering, according to the acquired spatial position information and direction information of a head-mounted virtual reality device, a first image corresponding to the spatial position information and direction information at a first angular resolution, where the first angular resolution is smaller than the angular resolution of the display screen, the first image is an overall image corresponding to the field of view of the virtual reality device, and the angular resolution is the number of pixels per degree of field angle within the field of view;
rendering, according to the acquired position information of the human eye gaze point on the display screen, a second image corresponding to the gaze point position at a second angular resolution, where the second angular resolution is equal to the angular resolution of the display screen and the second image is a partial image around the gaze point position;
combining the first image and the second image into a third image.
Optionally, the first angular resolution is obtained by multiplying the angular resolution of the display screen by a preset percentage, and the preset percentage is determined according to the balance point between the required rendering quality and the rendering workload.
Optionally, the second angular resolution is the display-screen angular resolution of the virtual reality device; the resolution of the partial image is obtained by multiplying the preset horizontal and vertical field angles of the partial image by the second angular resolution, and the preset horizontal and vertical field angles of the partial image are determined according to the image range over which the gaze-point rendering effect is required.
Optionally, generating, according to the acquired gaze point position information on the display screen, the second image corresponding to the gaze point position at the second angular resolution includes:
determining the gaze direction information according to the gaze point position information combined with the direction information of the head-mounted virtual reality device;
generating, at the second angular resolution, a second image corresponding to the gaze point position information and the gaze direction information.
Optionally, combining the first image and the second image into a third image includes:
reconstructing the resolution of the first image to the same resolution as the display screen by interpolation;
overlaying the second image, according to the gaze point position information, onto the position in the reconstructed high-resolution first image corresponding to the gaze point position information, to obtain the third image;
performing smooth fusion processing on the splicing boundary of the third image.
Optionally, reconstructing the low resolution of the first image to the same high resolution as the display screen includes:
performing a color-space transform on the low-resolution first image to obtain its YCbCr image, where Y is the nonlinear luma component, Cb the blue-difference chroma component and Cr the red-difference chroma component, and reconstructing the Cb and Cr components by interpolation;
constructing a database for training, namely high-resolution image patches Xh and low-resolution image patches Xl, combined into a database X;
generating a dictionary D from the database X by sparse coding, and splitting it into a dictionary Dh for high-resolution images and a dictionary Dl for low-resolution images;
solving for the sparse coefficients using Dl and the feature image corresponding to the 2x-upsampled low-resolution first image;
solving, from the sparse coefficients and Dh, for the 3x-upsampled image of the low-resolution first image, i.e. the Y component;
combining Y, Cb and Cr into a YCbCr image, converting it into an RGB image and storing it, thereby obtaining the reconstructed high-resolution first image.
Optionally, performing smooth fusion processing on the splicing boundary of the third image includes:
smoothing the YUV color-coded data at the splicing boundary of the third image into the YUV color-coded data of the whole image using the weighted smoothing formula Y = Y1*(1-d) + Y2*d, where Y1 and Y2 are the values of the YUV color-coded data of the adjacent images at the splicing boundary, Y is the value of the overlapped YUV color-coded data at the splicing boundary, and d is the weight;
copying the YUV color-coded data away from the splicing boundary directly into the YUV color-coded data of the whole image for transition processing.
The present application also provides a graphics rendering apparatus for virtual reality, including:
a first image generating module configured to render, according to the acquired spatial position information and direction information of the head-mounted virtual reality device, a first image corresponding to the spatial position information and direction information at a first angular resolution, where the first angular resolution is smaller than the angular resolution of the display screen, the first image is an overall image corresponding to the field of view of the virtual reality device, and the angular resolution is the number of pixels per degree of field angle within the field of view;
a second image generating module configured to render, according to the acquired position information of the human eye gaze point on the display screen, a second image corresponding to the gaze point position at a second angular resolution, where the second angular resolution is equal to the angular resolution of the display screen and the second image is a partial image around the gaze point position;
a third image generating module configured to combine the first image and the second image into a third image.
Optionally, the first angular resolution is obtained by multiplying the angular resolution of the display screen by a preset percentage, and the preset percentage is determined according to the balance point between the required rendering quality and the rendering workload.
Optionally, the second angular resolution is the display-screen angular resolution of the virtual reality device; the resolution of the partial image is obtained by multiplying the preset horizontal and vertical field angles of the partial image by the second angular resolution, and the preset horizontal and vertical field angles of the partial image are determined according to the image range over which the gaze-point rendering effect is required.
The present application further provides a graphics rendering apparatus for virtual reality, including a graphics processor and a memory, where the memory is configured to store a program supporting the above graphics rendering apparatus in executing the above graphics rendering method, and the graphics processor is configured to execute the program stored in the memory.
The program includes one or more computer instructions to be invoked and executed by the graphics processor.
The present application further provides a computer storage medium for storing the computer software instructions used by the above graphics rendering apparatus for virtual reality, the computer software instructions including the program involved in executing the above graphics rendering method by the graphics rendering apparatus.
According to the embodiments of the present application, a first image corresponding to the acquired spatial position information and direction information of the head-mounted virtual reality device is rendered at a first angular resolution, where the first angular resolution is smaller than the angular resolution of the display screen and the first image is an overall image corresponding to the FOV of the virtual reality device; according to the acquired position information of the human eye gaze point on the display screen, a second image corresponding to the gaze point position is rendered at a second angular resolution equal to the angular resolution of the display screen, the second image being a partial image around the gaze point position; and the first image and the second image are combined into a third image. When the virtual reality picture is rendered by the GPU in this application, the overall image is generated at a lower angular resolution, the partial image around the gaze point is rendered at the same angular resolution as the display screen to generate a locally sharp image, and the overall image and the locally sharp image are then fused into one final image that is sent to the virtual reality display for display, which effectively reduces the computational load of the GPU and improves image rendering efficiency.
Brief Description of the Drawings
The drawings described here are provided for further understanding of the present application and constitute a part of it; the illustrative embodiments of the present application and their description are used to explain the present application and do not constitute an undue limitation of it. In the drawings:
FIG. 1 is a schematic flowchart of a graphics rendering method for virtual reality according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the field angles used in an embodiment of the present application;
FIG. 3 is a schematic diagram of the image synthesis used in an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a graphics rendering apparatus for virtual reality according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, the technical solutions of the present application will be described clearly and completely below with reference to specific embodiments and the corresponding drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Virtual reality refers to a way of observing the world in which virtualization technology is added to the user's senses: the real world is simulated by science and technology and the simulation is superimposed onto it to be perceived by the user, achieving a sensory experience beyond reality.
An example application scenario of the present application is as follows:
The user moves the head wearing a virtual reality (VR) headset; sensors, for example an IMU nine-axis sensor and a spatial position sensor, generate the spatial position and direction information of the head and pass the head position and direction information to the VR game engine; the VR game engine updates the game state and, according to the game state and the latest head position and direction information, submits graphics rendering instructions to the graphics processing unit (GPU) through a graphics application programming interface (API); the GPU completes rendering, outputs two 2D images corresponding to the left and right eyes, and sends them to the VR display for display. Each frame of monocular rendering outputs one 2D image whose resolution and field of view (FOV) are the same as the resolution and FOV of the VR device's display screen; because the resolution is high and VR also demands high rendering precision, the rendering workload is very large.
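For orientation, the conventional loop described above can be condensed into a short sketch. This is a minimal illustration only; the sensor, engine, gpu_render and display objects are hypothetical placeholders, not any actual engine API:

    # Minimal sketch of the conventional (non-foveated) VR render loop
    # described above. All interfaces are hypothetical placeholders.
    def vr_render_loop(sensor, engine, gpu_render, display):
        while display.is_running():
            pose = sensor.read()     # head position (x, y, z) + orientation (pitch, yaw, roll)
            engine.update(pose)      # game-state update with the latest head pose
            # One 2D image per eye, at the full display resolution and FOV.
            left = gpu_render(engine.scene(), eye="left", pose=pose)
            right = gpu_render(engine.scene(), eye="right", pose=pose)
            display.present(left, right)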
The technical problem solved by the present application is how to reduce the graphics rendering workload of the GPU of a VR device, where the VR device includes a head-mounted VR device.
FIG. 1 is a schematic flowchart of a graphics rendering method for virtual reality according to an embodiment of the present application. As shown in FIG. 1:
101. Acquire the spatial position information and direction information of the VR device, and acquire the position information of the human eye gaze point on the display screen.
In the embodiments of the present application, the spatial position information and direction information of the head-mounted VR device can be obtained mainly through spatial positioning technology. Current VR devices use two main spatial positioning modes: basic positioning supporting only 3 degrees of freedom (3DOF), and positioning supporting 6DOF. 3DOF positioning outputs only the direction information of the headset (pitch, yaw, roll); in that case a fixed position is used as the headset's spatial position information during rendering. 6DOF positioning outputs spatial position information (x, y, z) in addition to direction information. The spatial positioning technology may be any prior-art technology capable of obtaining the spatial position and direction information of a head-mounted VR device, which is not limited in this application. Typically the 3DOF information is obtained from a 9-axis sensor, while the spatial position information in 6DOF can be obtained from a laser/infrared-camera-based outside-in scheme or from a computer-vision-based inside-out scheme (SLAM). For example, a camera is mounted on the user's head-mounted VR device so that the images captured by the camera move as the head moves, and the objects, feature points or object edges in the images move accordingly; by analyzing the images of the surrounding objects continuously collected by the camera and extracting the continuously changing position information in them, the spatial position and direction information of the head-mounted VR device can be obtained.
Since the field of view of the human eye is limited, generally only the central portion of the image projected on the retina can be resolved clearly; this portion is usually called the resolving field of view and spans a field angle of about 8 to 15 degrees. The portion between field angles of 15 and 30 degrees is called the effective field of view: there the user can perceive the presence and motion of objects and see them without turning the head, but resolving power is already reduced. The peripheral portion beyond a field angle of 30 degrees is called the induced field of view: the user can only sense the presence of an object without seeing clearly what it is.
In the embodiments of the present application, based on the above characteristics of the human eye, eye tracking technology is used to obtain the position information of the human eye gaze point on the display screen. For example, the user's current "gaze direction" can be acquired with various electronic/optical detection means: certain eye structures and features whose relative positions remain unchanged while the eyeball rotates are used as references, gaze-change parameters are extracted between the position-varying features and these invariant features, and the gaze direction is then obtained through a geometric model or mapping model.
The features extracted from the human eye generally fall into three classes: 1) the vector between the pupil center and the corneal reflection; 2) the corneal reflection matrix; 3) the elliptical boundary of the iris. Current video-oculography (VOG) systems based on eye-video analysis commonly use the pupil-corneal reflection method, obtaining the gaze direction by tracking the relative positions of the pupil center and the corneal reflection. The corneal reflection is a virtual image formed by a light source (usually a near-infrared source) reflecting off the corneal surface, while the pupil in the eye image captured by the camera is a virtual image formed after refraction through the cornea. For example, the captured eye image is processed to extract the pupil center and the spot position information and to derive the planar gaze-direction parameters, and an established mathematical model then converts this planar information into human eye spatial gaze data.
In the embodiments of the present application, a gaze mapping relationship may be preset in the head-mounted VR device. The gaze mapping relationship is the mapping between the human eye spatial gaze data and the coordinates of the left and right pixel pairs of the image display source on the head-mounted VR device (also called the gaze-to-screen coordinate mapping).
In this embodiment, obtaining the position information of the human eye gaze point on the display screen is specifically as follows:
The gaze tracking system records the human eye spatial gaze data when the user fixates on a target object. Specifically, while the user views the external environment through the head-mounted VR device, the gaze tracking system tracks the changes of the user's gaze in real time; when the user fixates on a target, the system computes the user's spatial gaze data at that moment and, from the transmitted spatial gaze data and the gaze mapping relationship, obtains the coordinate position data of the corresponding image-display-source pixel pair, which is the gaze point position information.
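The concrete form of the gaze mapping relationship is left open above. Purely as an assumed illustration, a common choice in eye tracking practice is a low-order polynomial fitted by least squares from calibration pairs of (gaze data, screen coordinate); all function names below are hypothetical:

    import numpy as np

    def _basis(gaze_xy):
        # 2nd-order polynomial basis of the planar gaze parameters.
        gx, gy = gaze_xy[:, 0], gaze_xy[:, 1]
        return np.stack([np.ones_like(gx), gx, gy, gx * gy, gx**2, gy**2], axis=1)

    def fit_gaze_mapping(gaze_xy, screen_xy):
        # Solve basis(gaze) @ A ~= screen for A (6x2) by least squares.
        A, *_ = np.linalg.lstsq(_basis(gaze_xy), screen_xy, rcond=None)
        return A

    def gaze_to_screen(A, gaze_xy):
        # Map gaze data to display-source pixel coordinates (the gaze point).
        return _basis(gaze_xy) @ A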
102. Render, according to the spatial position information and direction information of the VR device, a first image corresponding to the spatial position information and direction information at the first angular resolution.
The first angular resolution used in the embodiments of the present application is smaller than the angular resolution of the display screen; the first image is an overall image corresponding to the field of view of the virtual reality device; and the angular resolution is the number of pixels per degree of field angle within the field of view. Assuming the display resolution of the VR device is 1200*1080 per eye with a horizontal FOV of 100 degrees and a vertical FOV of 90 degrees, the horizontal angular resolution is 1200/100 = 12 pixels/degree and the vertical angular resolution is 1080/90 = 12 pixels/degree. The first angular resolution in the embodiments of the present invention is therefore obtained by multiplying the angular resolution of the display screen by a preset percentage, and the preset percentage is determined according to the balance point between the required rendering quality and the rendering workload.
It should be noted that demanding high rendering quality naturally increases the rendering workload, which lowers the rendering output frame rate, while raising the output frame rate necessarily means reducing the rendering workload and hence the rendering quality; rendering quality and rendering workload are thus inversely related, and in practice a compromise must be made between them, either lowering the quality requirements or reducing the workload. In the course of implementing the present invention it was found, from a large amount of user VR-experience feedback, that users become dizzy only when the output frame rate of the VR device falls below a certain output-frame-rate threshold. In the embodiments of the present invention, therefore, a corresponding rendering-workload threshold can be determined from the output-frame-rate threshold, and a corresponding rendering-quality threshold from the rendering-workload threshold; these two thresholds constitute the balance point between rendering quality and rendering workload, and the preset percentage for the first angular resolution is computed from this balance point.
FIG. 2 is a schematic diagram of the field angles used in an embodiment of the present application. As shown in FIG. 2, assume the display resolution of the VR device is 1200*1080 per eye, the horizontal FOV is 100 degrees and the vertical FOV is 90 degrees, so the horizontal angular resolution is 1200/100 = 12 pixels/degree. Assume the first angular resolution is 1/3 of the original, i.e. the resolution is 400x360. The spatial position and direction information of the VR device, the 100-degree horizontal FOV, the 90-degree vertical FOV and the 400x360 resolution are input to the GPU rendering program, and the GPU generates the first image. The first image is thus an overall low-resolution image, rendered at normal rendering precision but at a lower resolution, which greatly reduces the rendering workload.
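The sizing arithmetic of this step can be checked in a few lines; this is only a sketch of the numbers used above, with scale playing the role of the preset percentage:

    # Angular resolution = pixels per degree of field angle.
    def angular_resolution(pixels, fov_deg):
        return pixels / fov_deg

    h_px, v_px = 1200, 1080        # per-eye display resolution
    h_fov, v_fov = 100.0, 90.0     # display FOV in degrees
    assert angular_resolution(h_px, h_fov) == 12.0   # 1200/100
    assert angular_resolution(v_px, v_fov) == 12.0   # 1080/90

    scale = 1.0 / 3.0              # preset percentage for the first angular resolution
    first_image_size = (round(h_px * scale), round(v_px * scale))
    print(first_image_size)        # (400, 360)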
103. Render, according to the gaze point position information, a second image corresponding to the gaze point position information at the second angular resolution.
The basic movements of the human eye gaze point are fixation and saccade; a fixation generally has to last more than 100 ms for the object to be seen clearly. In an eye tracking system the camera frame rate is usually above 60 Hz, and capturing each frame and running the algorithm usually takes less than 30 ms. Therefore, when determining the FOV of the partial image, usually only the visual interference of the splicing seam within the effective and induced fields of view during fixation is considered, and the rotation speed of the human eye is disregarded.
In the embodiments of the present application, the region of the second image is first determined according to the gaze point position information, the second image being a partial image around the gaze point position; the corresponding second image is then rendered at the second angular resolution according to that region.
The second angular resolution is the display-screen angular resolution of the virtual reality device, and the resolution of the partial image is obtained by multiplying the preset horizontal and vertical field angles of the partial image by the second angular resolution. The smaller the partial image, the worse the gaze-point rendering effect, because the display sharpness of the region close to the gaze point affects what the viewer perceives; the larger the partial image, the farther away the low-definition periphery lies and the smaller its influence on the viewing experience. The preset horizontal and vertical field angles of the partial image are therefore determined according to the image range over which the gaze-point rendering effect is required.
It should be noted that without gaze point rendering the whole picture is quite sharp, but the computation and power consumption of the GPU are considerable. With gaze point rendering, only the region the eyeball fixates is sharp while the rest is relatively blurred, so the GPU workload drops substantially. A sharp picture where the eyeball fixates means that the eye tracking system supplies the gaze point: the rendering quality of the partial image around the gaze point is higher and decreases gradually in a circle around the gaze point elsewhere, so the horizontal and vertical field angles of the partial image can be determined from the image range over which the gaze-point rendering effect is required.
For example, since the second image is a local part of the overall picture, the rendering FOV used is smaller than the original overall FOV, e.g. 40 degrees horizontally and 36 degrees vertically. Since the second image is the locally sharp picture around the gaze point, the original sharpness, i.e. the angular resolution (e.g. 12 pixels/degree), must be kept, giving a corresponding resolution of 480x432 (12x40 = 480, 12x36 = 432). New direction information is computed from the gaze point position information and the head direction information.
It should be noted that when rendering the corresponding second image at the second angular resolution, the gaze direction information has to be determined from the gaze point position information combined with the direction information of the head-mounted virtual reality device; a second image corresponding to the gaze point position information and gaze direction information is then rendered at the second angular resolution from the head position information and the gaze direction information. For example, in typical VR graphics rendering the head direction information submitted to the GPU is the direction corresponding to the gaze point when the eye looks straight ahead; in that case the gaze point lies at the intersection O of the optical axis of the VR lens and the display plane, and the position of O on the display plane is known information. Let the vector from the eye to O be V1 (i.e. the head direction information), with V1 perpendicular to the display plane; let the gaze point be F (known) and the vector from O to F in the display plane be V2 (V2 = F - O); then the vector from the eye to the gaze point F is V3 = V1 + V2, and V3 is the sought gaze direction information.
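The vector computation at the end of this step is easy to reproduce. The sketch below assumes an eye-centered coordinate frame with the display plane at unit distance; the example values are hypothetical:

    import numpy as np

    def gaze_direction(V1, O, F):
        # V1: head direction, eye -> intersection O of optical axis and display plane.
        # F: gaze point on the display plane. V2 = F - O, V3 = V1 + V2.
        V2 = F - O
        V3 = V1 + V2
        return V3 / np.linalg.norm(V3)   # normalized direction for rendering the second image

    V1 = np.array([0.0, 0.0, 1.0])   # perpendicular to the display plane
    O = np.array([0.0, 0.0, 1.0])    # known intersection point on the plane
    F = np.array([0.2, -0.1, 1.0])   # gaze point reported by the eye tracker
    print(gaze_direction(V1, O, F))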
104. Combine the first image and the second image into a third image.
The specific implementation includes:
reconstructing the resolution of the first image to the same resolution as the display screen by interpolation or a similar method, i.e. performing the original high-resolution reconstruction of the first image;
overlaying the second image, according to the gaze point position information, onto the position in the reconstructed high-resolution first image corresponding to the gaze point position information (the overlay may be rectangular, or the image may first be processed into a circle and then overlaid), to obtain the third image. FIG. 3 is a schematic diagram of the image synthesis used in an embodiment of the present application; as shown in FIG. 3, the locally sharp image is the second image and the overall low-definition image is the first image;
performing smooth fusion processing on the splicing boundary of the third image, for example smoothing the boundary region of the third image by low-pass filtering or a similar method.
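As a rough sketch of the first two operations of this synthesis, with nearest-neighbour upscaling standing in for the high-resolution reconstruction detailed below and a rectangular overlay; the boundary smoothing is treated separately afterwards:

    import numpy as np

    def composite(first_lowres, second, gaze_xy, display_hw):
        # Upscale the overall low-resolution image to display resolution
        # (placeholder for the reconstruction described below), then overlay
        # the sharp partial image as a rectangle centred on the gaze point.
        H, W = display_hw
        h, w = first_lowres.shape[:2]
        ys = np.arange(H) * h // H
        xs = np.arange(W) * w // W
        third = first_lowres[ys][:, xs].copy()
        ph, pw = second.shape[:2]
        gx, gy = gaze_xy
        y0 = min(max(gy - ph // 2, 0), H - ph)   # clip the rectangle to the display
        x0 = min(max(gx - pw // 2, 0), W - pw)
        third[y0:y0 + ph, x0:x0 + pw] = second
        return third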
In an optional implementation, the specific process of performing the original high-resolution reconstruction of the first image by interpolation or a similar method includes:
performing a color-space transform on the low-resolution first image to obtain its YCbCr image, where Y is the nonlinear luma component, Cb the blue-difference chroma component and Cr the red-difference chroma component, and reconstructing the Cb and Cr components by interpolation;
constructing a database for training, namely high-resolution image patches Xh and low-resolution image patches Xl, combined into a database X;
generating a dictionary D from the database X by sparse coding, and splitting it into a dictionary Dh for high-resolution images and a dictionary Dl for low-resolution images;
solving for the sparse coefficients using Dl and the feature image corresponding to the 2x-upsampled low-resolution first image;
solving, from the sparse coefficients and Dh, for the 3x-upsampled image of the low-resolution first image, i.e. the Y component;
combining Y, Cb and Cr into a YCbCr image, converting it into an RGB image and storing it, thereby obtaining the reconstructed high-resolution first image.
The interpolation method may be bicubic interpolation.
The Y component, denoted lIm, is upsampled by 2x to obtain an approximate 2x high-resolution image fIm, whose sampled data serve as the source for Dh; lfIm, the result of downsampling fIm by 1/3, is then upsampled by 2x to give l2bfIm, the data source for Dl. lfIm is sampled point by point into 3x3 image patches with an overlapping portion overlap = 2; fIm is sampled into 9x9 patches with an overlapping portion of 3*overlap; and l2bfIm is sampled into 6x6 patches with an overlapping portion of 2*overlap. Xh is the result of subtracting the mean from the current patch, while Xl is the feature map computed from l2bfIm.
Xh and Xl are each normalized, spliced into one data set X and normalized again; training is then performed with the Sparse Coding method, and the resulting D is finally split back into Dh and Dl.
The corresponding feature images can be solved with filters, and 6x6 sampling yields four groups of sampled data; during sampling, the overlapping portion for lIm corresponding to the given low-resolution color image is overlap, while the overlapping portion of the patches of fIm's feature image is 2*overlap; these data are then used to obtain the sparse coefficients α of the corresponding image patches.
After Xh is solved from α and Dh, the mean of fIm is added back to obtain the final high-resolution image patches; for the boundary, the given low-resolution color image is 3x-upsampled by interpolation to obtain the final boundary image patches; the final high-resolution patches and boundary patches are combined into the Y component, taking the average over overlapping patches.
The high-resolution reconstruction method of this embodiment can therefore construct an approximate high-resolution image directly from its own low-resolution image, without a high-resolution image library, then build sampling patches and obtain the corresponding training dictionaries. With the dictionary Dl of the low-resolution image trained in this way, the corresponding sparse coefficients are found through sparse representation theory, and the coefficients are finally reused with the dictionary Dh of the high-resolution image to reconstruct the high-resolution image.
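The following heavily simplified sketch keeps the outer flow of this reconstruction visible (YCbCr split, per-channel upscaling, recombination to RGB) while substituting plain bicubic interpolation for the sparse-dictionary reconstruction of the Y component; the dictionary training and sparse-coefficient steps described above are deliberately not implemented here:

    from PIL import Image

    def reconstruct_first_image(low_rgb, scale=3):
        # Color-space transform to YCbCr.
        y, cb, cr = low_rgb.convert("YCbCr").split()
        target = (low_rgb.width * scale, low_rgb.height * scale)
        # Cb/Cr are reconstructed by interpolation, as in the method above.
        cb_up = cb.resize(target, Image.BICUBIC)
        cr_up = cr.resize(target, Image.BICUBIC)
        # Placeholder: bicubic instead of the Dh/Dl sparse-coding reconstruction.
        y_up = y.resize(target, Image.BICUBIC)
        # Recombine and convert back to RGB.
        return Image.merge("YCbCr", (y_up, cb_up, cr_up)).convert("RGB")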
In an optional implementation, performing smooth fusion processing on the splicing boundary of the third image specifically includes:
smoothing the YUV color-coded data at the splicing boundary of the third image into the YUV color-coded data of the whole image using the weighted smoothing formula Y = Y1*(1-d) + Y2*d, where Y1 and Y2 are the values of the YUV color-coded data of the adjacent images at the splicing boundary, Y is the value of the overlapped YUV color-coded data at the splicing boundary, and d is the weight;
copying the YUV color-coded data away from the splicing boundary directly into the YUV color-coded data of the whole image for transition processing.
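A minimal sketch of the weighted smoothing formula applied over a narrow transition band at the left edge of the inset; the other three edges would be handled symmetrically, and the band width is an assumed parameter:

    import numpy as np

    def blend_left_seam(outer, inner, x0, y0, band=8):
        # Paste `inner` into `outer` at (x0, y0), then feather its left edge
        # with Y = Y1*(1-d) + Y2*d; pixels away from the seam are copied as-is.
        h, w = inner.shape[:2]
        out = outer.copy()
        out[y0:y0 + h, x0:x0 + w] = inner
        for i in range(band):
            d = (i + 1) / (band + 1)          # weight grows toward the inset interior
            Y1 = outer[y0:y0 + h, x0 + i]     # surrounding (reconstructed) image value
            Y2 = inner[:, i]                  # inset (sharp partial) image value
            out[y0:y0 + h, x0 + i] = Y1 * (1 - d) + Y2 * d
        return out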
It should be noted that the technical solution of the present application can implement gaze point rendering outside the GPU program; that is, even when the GPU program does not support gaze point rendering, the output image (the third image) required for the gaze point rendering described in the embodiments of the present application can be produced in a plug-in of the 3D engine.
According to the embodiments of the present application, a first image corresponding to the acquired spatial position information and direction information of the head-mounted virtual reality device is rendered at the first angular resolution, where the first angular resolution is smaller than the angular resolution of the display screen and the first image is an overall image corresponding to the FOV of the virtual reality device; according to the acquired position information of the human eye gaze point on the display screen, a second image corresponding to the gaze point position is rendered at the second angular resolution, which is equal to the angular resolution of the display screen, the second image being a partial image around the gaze point position; and the first image and the second image are combined into a third image. When the virtual reality picture is rendered by the GPU in this application, the overall image is generated at a lower angular resolution, the partial image around the gaze point is rendered at the same angular resolution as the display screen to generate a locally sharp image, and the overall image and the locally sharp image are then fused into one final image that is sent to the virtual reality display for display, which effectively reduces the computational load of the GPU and improves image rendering efficiency.
FIG. 4 is a schematic structural diagram of a graphics rendering apparatus for virtual reality according to an embodiment of the present application. As shown in FIG. 4, the apparatus includes:
a first image generating module configured to render, according to the acquired spatial position information and direction information of the head-mounted virtual reality device, a first image corresponding to the spatial position information and direction information at a first angular resolution, where the first angular resolution is smaller than the angular resolution of the display screen, the first image is an overall image corresponding to the field of view of the virtual reality device, and the angular resolution is the number of pixels per degree of field angle within the field of view;
a second image generating module configured to render, according to the acquired position information of the human eye gaze point on the display screen, a second image corresponding to the gaze point position at a second angular resolution, where the second angular resolution is equal to the angular resolution of the display screen and the second image is a partial image around the gaze point position;
a third image generating module configured to combine the first image and the second image into a third image.
The first angular resolution is obtained by multiplying the angular resolution of the display screen by a preset percentage, and the preset percentage is determined according to the balance point between the required rendering quality and the rendering workload.
The second angular resolution is the display-screen angular resolution of the virtual reality device; the resolution of the partial image is obtained by multiplying the preset horizontal and vertical field angles of the partial image by the second angular resolution, and the preset horizontal and vertical field angles of the partial image are determined according to the image range over which the gaze-point rendering effect is required.
The second image generating module is specifically configured to:
determine the gaze direction information according to the gaze point position information combined with the direction information of the head-mounted virtual reality device;
generate, at the second angular resolution, a second image corresponding to the gaze point position information and the gaze direction information.
The third image generating module is specifically configured to:
reconstruct the resolution of the first image to the same resolution as the display screen (for the specific implementation, refer to the related description in the embodiment of FIG. 1 above);
overlay the second image, according to the gaze point position information, onto the position in the reconstructed high-resolution first image corresponding to the gaze point position information, to obtain the third image;
perform smooth fusion processing on the splicing boundary of the third image (for the specific implementation, refer to the related description in the embodiment of FIG. 1 above).
The apparatus described in the embodiments of the present application may render, according to the acquired spatial position information and direction information of the head-mounted virtual reality device, a first image corresponding to the spatial position information and direction information at the first angular resolution, where the first angular resolution is smaller than the angular resolution of the display screen and the first image is an overall image corresponding to the FOV of the virtual reality device; render, according to the acquired position information of the human eye gaze point on the display screen, a second image corresponding to the gaze point position at the second angular resolution, which is equal to the angular resolution of the display screen, the second image being a partial image around the gaze point position; and combine the first image and the second image into a third image. When the virtual reality picture is rendered by the GPU in this application, the overall image is generated at a lower angular resolution, the partial image around the gaze point is rendered at the same angular resolution as the display screen to generate a locally sharp image, and the overall image and the locally sharp image are then fused into one final image that is sent to the virtual reality display for display, which effectively reduces the computational load of the GPU and improves image rendering efficiency.
In the embodiments of the present invention, the structure of the above graphics rendering apparatus for virtual reality includes a graphics processor and a memory, where the memory is configured to store a program supporting the apparatus in executing the graphics rendering method for virtual reality in the embodiment shown in FIG. 1, and the graphics processor is configured to execute the program stored in the memory.
The program includes one or more computer instructions to be invoked and executed by the graphics processor.
An embodiment of the present invention further provides a computer storage medium for storing the computer software instructions used by the graphics rendering apparatus for virtual reality, the computer software instructions including the program involved in executing the above graphics rendering method by the graphics rendering apparatus.
Those skilled in the art will appreciate that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations thereof, may be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in that memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces and memory.
The memory may include non-persistent memory, random access memory (RAM) and/or non-volatile memory in computer readable media, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer readable medium.
Computer readable media include permanent and non-permanent, removable and non-removable media, and information storage can be implemented by any method or technology. The information may be computer readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise", "include" and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or device that includes it.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
What has been described above is only embodiments of the present application and is not intended to limit the present application. Various modifications and variations of the present application will occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.

Claims (10)

  1. A graphics rendering method for virtual reality, comprising:
    rendering, according to acquired spatial position information and direction information of a head-mounted virtual reality device, a first image corresponding to the spatial position information and direction information at a first angular resolution, wherein the first angular resolution is smaller than an angular resolution of a display screen, the first image is an overall image corresponding to a field of view of the virtual reality device, and the angular resolution is the number of pixels per degree of field angle within the field of view;
    rendering, according to acquired position information of a human eye gaze point on the display screen, a second image corresponding to the gaze point position information at a second angular resolution, wherein the second angular resolution is equal to the angular resolution of the display screen and the second image is a partial image around the gaze point position;
    combining the first image and the second image into a third image.
  2. The method according to claim 1, wherein the first angular resolution is obtained by multiplying the angular resolution of the display screen by a preset percentage, and the preset percentage is determined according to a balance point between the required rendering quality and the rendering workload.
  3. The method according to claim 1, wherein the second angular resolution is the display-screen angular resolution of the virtual reality device, the resolution of the partial image is obtained by multiplying preset horizontal and vertical field angles of the partial image by the second angular resolution, and the preset horizontal and vertical field angles of the partial image are determined according to the image range over which the gaze-point rendering effect is required.
  4. The method according to claim 1, wherein generating, according to the acquired gaze point position information on the display screen, the second image corresponding to the gaze point position at the second angular resolution comprises:
    determining gaze direction information according to the gaze point position information combined with the direction information of the head-mounted virtual reality device;
    generating, at the second angular resolution, a second image corresponding to the gaze point position information and the gaze direction information.
  5. The method according to claim 1, wherein combining the first image and the second image into a third image comprises:
    reconstructing the low resolution of the first image to the same high resolution as the display screen;
    overlaying the second image, according to the gaze point position information, onto the position in the reconstructed high-resolution first image corresponding to the gaze point position information, to obtain the third image;
    performing smooth fusion processing on the splicing boundary of the third image.
  6. The method according to claim 5, wherein reconstructing the low resolution of the first image to the same high resolution as the display screen comprises:
    performing a color-space transform on the low-resolution first image to obtain its YCbCr image, wherein Y is the nonlinear luma component, Cb the blue-difference chroma component and Cr the red-difference chroma component, and reconstructing the Cb and Cr components by interpolation;
    constructing a database for training, namely high-resolution image patches Xh and low-resolution image patches Xl, combined into a database X;
    generating a dictionary D from the database X by sparse coding, and splitting it into a dictionary Dh for high-resolution images and a dictionary Dl for low-resolution images;
    solving for the sparse coefficients using Dl and the feature image corresponding to the 2x-upsampled low-resolution first image;
    solving, from the sparse coefficients and Dh, for the 3x-upsampled image of the low-resolution first image, i.e. the Y component;
    combining Y, Cb and Cr into a YCbCr image, converting it into an RGB image and storing it, thereby obtaining the reconstructed high-resolution first image.
  7. The method according to claim 5, wherein performing smooth fusion processing on the splicing boundary of the third image comprises:
    smoothing the YUV color-coded data at the splicing boundary of the third image into the YUV color-coded data of the whole image using the weighted smoothing formula Y = Y1*(1-d) + Y2*d, wherein Y1 and Y2 are the values of the YUV color-coded data of the adjacent images at the splicing boundary, Y is the value of the overlapped YUV color-coded data at the splicing boundary, and d is the weight;
    copying the YUV color-coded data away from the splicing boundary directly into the YUV color-coded data of the whole image for transition processing.
  8. A graphics rendering apparatus for virtual reality, comprising:
    a first image generating module configured to render, according to acquired spatial position information and direction information of a head-mounted virtual reality device, a first image corresponding to the spatial position information and direction information at a first angular resolution, wherein the first angular resolution is smaller than the angular resolution of the display screen, the first image is an overall image corresponding to the field of view of the virtual reality device, and the angular resolution is the number of pixels per degree of field angle within the field of view;
    a second image generating module configured to render, according to acquired position information of a human eye gaze point on the display screen, a second image corresponding to the gaze point position at a second angular resolution, wherein the second angular resolution is equal to the angular resolution of the display screen and the second image is a partial image around the gaze point position;
    a third image generating module configured to combine the first image and the second image into a third image.
  9. The apparatus according to claim 8, wherein the first angular resolution is obtained by multiplying the angular resolution of the display screen by a preset percentage, and the preset percentage is determined according to a balance point between the required rendering quality and the rendering workload.
  10. The apparatus according to claim 8, wherein the second angular resolution is the display-screen angular resolution of the virtual reality device, the resolution of the partial image is obtained by multiplying preset horizontal and vertical field angles of the partial image by the second angular resolution, and the preset horizontal and vertical field angles of the partial image are determined according to the image range over which the gaze-point rendering effect is required.
PCT/CN2018/096858 2017-08-14 2018-07-24 Graphics rendering method and apparatus for virtual reality WO2019033903A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/638,734 US10859840B2 (en) 2017-08-14 2018-07-24 Graphics rendering method and apparatus of virtual reality

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710691256.5 2017-08-14
CN201710691256.5A CN107516335A (zh) 2017-08-14 Graphics rendering method and apparatus for virtual reality

Publications (1)

Publication Number Publication Date
WO2019033903A1 true WO2019033903A1 (zh) 2019-02-21

Family

ID=60723302

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/096858 WO2019033903A1 (zh) Graphics rendering method and apparatus for virtual reality

Country Status (3)

Country Link
US (1) US10859840B2 (zh)
CN (1) CN107516335A (zh)
WO (1) WO2019033903A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111190486A * 2019-12-27 2020-05-22 季华实验室 Zoned display method and apparatus based on eye control
CN112017267A * 2019-05-28 2020-12-01 上海科技大学 Machine-learning-based fast fluid synthesis method, apparatus, system and medium
CN113242384A * 2021-05-08 2021-08-10 聚好看科技股份有限公司 Panoramic video display method and display device
GB2595872A (en) * 2020-06-09 2021-12-15 Sony Interactive Entertainment Inc Gaze tracking apparatus and systems
CN114531904A * 2020-09-09 2022-05-24 京东方科技集团股份有限公司 AR/VR image display method, AR/VR image display device and computer program product
WO2024093835A1 * 2022-11-01 2024-05-10 华为技术有限公司 Image data processing method and related device

Families Citing this family (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107516335A (zh) * 2017-08-14 2017-12-26 歌尔股份有限公司 虚拟现实的图形渲染方法和装置
CN108287678B (zh) * 2018-03-06 2020-12-29 京东方科技集团股份有限公司 一种基于虚拟现实的图像处理方法、装置、设备和介质
US10768879B2 (en) 2018-03-06 2020-09-08 Beijing Boe Optoelectronics Technology Co., Ltd. Image processing method and apparatus, virtual reality apparatus, and computer-program product
CN115842907A (zh) 2018-03-27 2023-03-24 京东方科技集团股份有限公司 Rendering method, computer product and display apparatus
CN108635851B (zh) * 2018-05-16 2021-07-27 网易(杭州)网络有限公司 Game picture processing method and apparatus
CN108665521B (zh) * 2018-05-16 2020-06-02 京东方科技集团股份有限公司 Image rendering method, apparatus, system, computer-readable storage medium and device
WO2020141344A2 (en) * 2018-07-20 2020-07-09 Tobii Ab Distributed foveated rendering based on user gaze
CN109242943B (zh) 2018-08-21 2023-03-21 腾讯科技(深圳)有限公司 Image rendering method and apparatus, image processing device, and storage medium
CN108828779B (zh) * 2018-08-28 2020-01-21 北京七鑫易维信息技术有限公司 Head-mounted display device
CN110913199B (zh) * 2018-09-14 2021-06-11 东方梦幻虚拟现实科技有限公司 VR image transmission method
CN110163943A (zh) * 2018-11-21 2019-08-23 深圳市腾讯信息技术有限公司 Image rendering method and apparatus, storage medium, and electronic apparatus
US10885882B2 (en) * 2018-12-06 2021-01-05 Tobii Ab Reducing aliasing artifacts in foveated rendering using cross-resolution modulation
CN109712224B (zh) * 2018-12-29 2023-05-16 海信视像科技股份有限公司 Virtual scene rendering method and apparatus, and smart device
CN109727305B (zh) * 2019-01-02 2024-01-12 京东方科技集团股份有限公司 Virtual reality system picture processing method, apparatus and storage medium
CN109741463B (zh) * 2019-01-02 2022-07-19 京东方科技集团股份有限公司 Virtual reality scene rendering method, apparatus and device
CN109727316B (zh) * 2019-01-04 2024-02-02 京东方科技集团股份有限公司 Virtual reality image processing method and system
CN109766011A (zh) * 2019-01-16 2019-05-17 北京七鑫易维信息技术有限公司 Image rendering method and apparatus
CN109741289B (zh) * 2019-01-25 2021-12-21 京东方科技集团股份有限公司 Image fusion method and VR device
CN109886876A (zh) * 2019-02-25 2019-06-14 昀光微电子(上海)有限公司 Near-eye display method based on visual characteristics of the human eye
CN110267025B (zh) * 2019-07-03 2021-04-13 京东方科技集团股份有限公司 Rendering method and apparatus for virtual 3D display, and display method and system thereof
CN110378914A (zh) * 2019-07-22 2019-10-25 北京七鑫易维信息技术有限公司 Rendering method, apparatus and system based on gaze point information, and display device
CN110827380B (zh) * 2019-09-19 2023-10-17 北京铂石空间科技有限公司 Image rendering method and apparatus, electronic device and computer-readable medium
JP7274392B2 (ja) * 2019-09-30 2023-05-16 京セラ株式会社 Camera, head-up display system, and movable body
WO2021102857A1 (zh) * 2019-11-28 2021-06-03 深圳市大疆创新科技有限公司 Image processing method, apparatus and device, and storage medium
CN111290581B (zh) * 2020-02-21 2024-04-16 京东方科技集团股份有限公司 Virtual reality display method, display apparatus and computer-readable medium
CN111338591B (zh) * 2020-02-25 2022-04-12 京东方科技集团股份有限公司 Virtual reality display device and display method
CN111476104B (zh) * 2020-03-17 2022-07-01 重庆邮电大学 AR-HUD image distortion correction method, apparatus and system under dynamic eye positions
CN111314686B (zh) * 2020-03-20 2021-06-25 深圳市博盛医疗科技有限公司 Method, system and medium for automatically optimizing 3D stereoscopic perception
CN111556305B (zh) 2020-05-20 2022-04-15 京东方科技集团股份有限公司 Image processing method, VR device, terminal, display system and computer-readable storage medium
CN111754614B (zh) * 2020-06-30 2024-07-02 平安国际智慧城市科技股份有限公司 VR-based video rendering method, apparatus, electronic device and storage medium
CN111768352B (zh) * 2020-06-30 2024-05-07 Oppo广东移动通信有限公司 Image processing method and apparatus
CN111785229B (zh) * 2020-07-16 2022-04-15 京东方科技集团股份有限公司 Display method, apparatus and system
CN114071150B (zh) * 2020-07-31 2023-06-16 京东方科技集团股份有限公司 Image compression method and apparatus, image display method and apparatus, and medium
CN111930233B (zh) * 2020-08-05 2023-07-21 聚好看科技股份有限公司 Panoramic video image display method and display device
CN112465939B (zh) * 2020-11-25 2023-01-24 上海哔哩哔哩科技有限公司 Panoramic video rendering method and system
CN114578940A (zh) * 2020-11-30 2022-06-03 华为技术有限公司 Control method, apparatus and electronic device
CN112672131B (zh) * 2020-12-07 2024-02-06 聚好看科技股份有限公司 Panoramic video image display method and display device
CN112578564B (zh) * 2020-12-15 2023-04-11 京东方科技集团股份有限公司 Virtual reality display device and display method
CN114721144A (zh) * 2021-01-04 2022-07-08 宏碁股份有限公司 Autostereoscopic display and control method thereof
CN112887646B (zh) * 2021-01-22 2023-05-26 京东方科技集团股份有限公司 Image processing method and apparatus, extended reality system, computer device and medium
CN113209604A (zh) * 2021-04-28 2021-08-06 杭州小派智能科技有限公司 Method and system for wide-field-of-view VR rendering
CN113223183B (zh) * 2021-04-30 2023-03-10 杭州小派智能科技有限公司 Rendering method and system based on existing VR content
CN113362449B (zh) * 2021-06-01 2023-01-17 聚好看科技股份有限公司 Three-dimensional reconstruction method, apparatus and system
CN113885822A (zh) * 2021-10-15 2022-01-04 Oppo广东移动通信有限公司 Image data processing method and apparatus, electronic device and storage medium
CN114554173B (zh) * 2021-11-17 2024-01-30 北京博良胜合科技有限公司 Cloud XR-based method and apparatus for simplified foveated rendering in the cloud
CN116958386A (zh) * 2022-04-12 2023-10-27 华为云计算技术有限公司 Data processing method, system and device
CN114581583A (zh) * 2022-04-19 2022-06-03 京东方科技集团股份有限公司 Image processing method, apparatus and storage medium
CN116012474B (zh) * 2022-12-13 2024-01-30 昆易电子科技(上海)有限公司 Simulation test image generation and re-injection method and system, industrial control computer, and apparatus
CN117095149B (zh) * 2023-10-18 2024-02-02 广东图盛超高清创新中心有限公司 Real-time image processing method for ultra-high-definition VR live production
CN117499614B (zh) * 2023-11-21 2024-04-26 北京视睿讯科技有限公司 3D display method, apparatus, device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105979224A (zh) * 2016-06-23 2016-09-28 青岛歌尔声学科技有限公司 Head-mounted display, video output device, and video processing method and system
CN106412563A (zh) * 2016-09-30 2017-02-15 珠海市魅族科技有限公司 Image display method and device
CN106652004A (zh) * 2015-10-30 2017-05-10 北京锤子数码科技有限公司 Method and device for rendering virtual reality based on a head-mounted visual device
US20170186231A1 (en) * 2015-12-28 2017-06-29 Oculus Vr, Llc Increasing field of view of head-mounted display using a mirror
CN107516335A (zh) * 2017-08-14 2017-12-26 歌尔股份有限公司 Graphics rendering method and apparatus of virtual reality

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3478606B2 (ja) * 1994-10-12 2003-12-15 キヤノン株式会社 Stereoscopic image display method and apparatus
US20110279453A1 (en) * 2010-05-16 2011-11-17 Nokia Corporation Method and apparatus for rendering a location-based user interface
CN102722865B (zh) * 2012-05-22 2015-05-13 北京工业大学 Super-resolution sparse reconstruction method
US9058053B2 (en) * 2012-10-26 2015-06-16 The Boeing Company Virtual reality display system
CN104618648B (zh) * 2015-01-29 2018-11-09 桂林长海发展有限责任公司 Panoramic video stitching system and stitching method
US11010956B2 (en) * 2015-12-09 2021-05-18 Imagination Technologies Limited Foveated rendering
CN105425399B (zh) * 2016-01-15 2017-11-28 中意工业设计(湖南)有限责任公司 Method for presenting a user interface of a head-mounted device according to human visual characteristics
US10157448B2 (en) * 2016-02-12 2018-12-18 Qualcomm Incorporated Foveated video rendering
CN111710050A (zh) * 2016-08-24 2020-09-25 赵成智 Image processing method and apparatus for a virtual reality device
CN106648049B (zh) * 2016-09-19 2019-12-10 上海青研科技有限公司 Stereoscopic rendering method based on eye tracking and gaze-point prediction
CN106485790A (zh) * 2016-09-30 2017-03-08 珠海市魅族科技有限公司 Picture display method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106652004A (zh) * 2015-10-30 2017-05-10 北京锤子数码科技有限公司 Method and device for rendering virtual reality based on a head-mounted visual device
US20170186231A1 (en) * 2015-12-28 2017-06-29 Oculus Vr, Llc Increasing field of view of head-mounted display using a mirror
CN105979224A (zh) * 2016-06-23 2016-09-28 青岛歌尔声学科技有限公司 Head-mounted display, video output device, and video processing method and system
CN106412563A (zh) * 2016-09-30 2017-02-15 珠海市魅族科技有限公司 Image display method and device
CN107516335A (zh) * 2017-08-14 2017-12-26 歌尔股份有限公司 Graphics rendering method and apparatus of virtual reality

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017267A (zh) * 2019-05-28 2020-12-01 上海科技大学 Machine learning-based rapid fluid synthesis method, apparatus, system and medium
CN112017267B (zh) * 2019-05-28 2023-07-25 上海科技大学 Machine learning-based rapid fluid synthesis method, apparatus, system and medium
CN111190486A (zh) * 2019-12-27 2020-05-22 季华实验室 Eye-control-based partitioned display method and apparatus
CN111190486B (zh) * 2019-12-27 2023-07-25 季华实验室 Eye-control-based partitioned display method and apparatus
GB2595872A (en) * 2020-06-09 2021-12-15 Sony Interactive Entertainment Inc Gaze tracking apparatus and systems
GB2595872B (en) * 2020-06-09 2023-09-20 Sony Interactive Entertainment Inc Gaze tracking apparatus and systems
CN114531904A (zh) * 2020-09-09 2022-05-24 京东方科技集团股份有限公司 AR/VR image display method, AR/VR image display device and computer program product
CN113242384A (zh) * 2021-05-08 2021-08-10 聚好看科技股份有限公司 Panoramic video display method and display device
WO2024093835A1 (zh) * 2022-11-01 2024-05-10 华为技术有限公司 Image data processing method and related device

Also Published As

Publication number Publication date
US10859840B2 (en) 2020-12-08
US20200183166A1 (en) 2020-06-11
CN107516335A (zh) 2017-12-26

Similar Documents

Publication Publication Date Title
WO2019033903A1 (zh) Graphics rendering method and apparatus of virtual reality
CN107317987B (zh) Display data compression method, device and system for virtual reality
CN110199267B (zh) No-miss cache structure for real-time image conversion using data compression
US11704768B2 (en) Temporal supersampling for foveated rendering systems
JP6961018B2 (ja) Foveal adaptation of temporal anti-aliasing
JP7422785B2 (ja) Method and apparatus for corner detection using neural networks and corner detectors
EP3574408B1 (en) No miss cache structure for real-time image transformations
JP6353214B2 (ja) Image generation device and image generation method
US10672368B2 (en) No miss cache structure for real-time image transformations with multiple LSR processing engines
US10553016B2 (en) Phase aligned foveated rendering
CN114026603B (zh) Rendering computer-generated reality text
US20230274455A1 (en) Systems and methods for low compute high-resolution depth map generation using low-resolution cameras
JP2016105279A (ja) Device and method for processing visual data, and related computer program product
JP5632245B2 (ja) Field-of-view image display device for eyeglasses
CN110214300B (zh) Phase-aligned foveated rendering
CN107065164B (zh) Image display method and apparatus
EP4150563A1 (en) Upsampling low temporal resolution depth maps
Papadopoulos et al. Acuity-driven gigapixel visualization
JP2008257431A (ja) Image display device and image display method
EP1330785A2 (en) Dynamic depth-of-field emulation based on eye-tracking
CN111275612A (zh) VR-technology-based K-line (candlestick chart) display and interaction method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18845517

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18845517

Country of ref document: EP

Kind code of ref document: A1