CN112785530A - Image rendering method, device and equipment for virtual reality and VR equipment - Google Patents

Image rendering method, device and equipment for virtual reality and VR equipment

Info

Publication number
CN112785530A
CN112785530A (application CN202110163013.0A)
Authority
CN
China
Prior art keywords
image
rendering
transformation matrix
attitude
thread
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110163013.0A
Other languages
Chinese (zh)
Other versions
CN112785530B (en)
Inventor
李朝庭
林榕
郑广平
吴开钢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Unionman Technology Co Ltd
Original Assignee
Guangdong Unionman Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Unionman Technology Co Ltd
Priority to CN202110163013.0A
Publication of CN112785530A
Application granted
Publication of CN112785530B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/80 - Geometric correction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G06T2207/10012 - Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

Embodiments of the invention relate to the technical field of image processing, and in particular to an image rendering method, device and equipment for virtual reality, and a VR device. The image rendering method for virtual reality is based on an open graphics library and comprises the following steps: in response to a vertical synchronization signal, an anti-distortion thread acquires an image rendered by a 3D rendering thread and a first pose transformation matrix of the 3D rendering thread; the first pose transformation matrix is modified according to acquired pose parameters of the virtual reality device to obtain a third pose transformation matrix; and the rendered image is anti-distortion processed using the third pose transformation matrix, and the processed image is output and displayed. Embodiments of the invention reduce picture rendering time and increase the display frame rate.

Description

Image rendering method, device and equipment for virtual reality and VR equipment
Technical Field
The present invention relates to the field of image processing technology, and in particular to an image rendering method for virtual reality, an image rendering device for virtual reality, an image rendering apparatus for virtual reality, and a VR device.
Background
Virtual reality (VR) technology uses a computer to simulate a virtual environment. VR glasses are a head-mounted virtual reality device; their operation requires a high-definition display, a high frame rate, and an extremely low motion display delay (the delay between a change in the user's motion pose and the corresponding change in the displayed picture). Rendering techniques that address these requirements are therefore at the core of VR technology. At present, the rendering capability of mainstream mobile GPUs struggles to meet these requirements; techniques such as asynchronous warping are available on the market to mitigate the problem, but in most cases the rendering frame rate remains difficult to control.
OpenGL (Open Graphics Library) is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics.
Disclosure of Invention
In view of this, the present invention is directed to an image rendering method, an image rendering device, an image rendering apparatus and a VR device for virtual reality, which can stably increase the rendered-image frame rate of the VR device and effectively reduce motion display delay without reducing definition.
To achieve the above object, a first aspect of the present invention provides an image rendering method for virtual reality based on an open graphics library, the image rendering method comprising: in response to a vertical synchronization signal, an anti-distortion thread acquires an image rendered by a 3D rendering thread and a first pose transformation matrix of the 3D rendering thread; the first pose transformation matrix is modified according to acquired pose parameters of the virtual reality device to obtain a third pose transformation matrix; and the rendered image is anti-distortion processed using the third pose transformation matrix, and the processed image is output and displayed.
Preferably, the anti-distortion thread acquires the rendered image through a texture ID of the rendered image shared by the 3D rendering thread.
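The following is a minimal illustrative sketch (not part of the original disclosure) of how the two OpenGL ES contexts can be placed in one share group so that a texture ID created by the 3D rendering thread can be sampled by the anti-distortion thread; the helper name and the OpenGL ES version are assumptions.

```cpp
#include <EGL/egl.h>

// Illustrative helper (assumed, not from the patent): create the anti-distortion
// thread's context so that it shares objects, including texture IDs, with the
// 3D rendering thread's context.
EGLContext createAntiDistortionContext(EGLDisplay display,
                                       EGLConfig config,
                                       EGLContext renderThreadContext) {
    const EGLint contextAttribs[] = {
        EGL_CONTEXT_CLIENT_VERSION, 3,  // assuming OpenGL ES 3.x
        EGL_NONE
    };
    // Passing the 3D rendering thread's context as share_context puts both
    // contexts in the same share group, so a texture rendered by the 3D thread
    // can be bound and sampled by the anti-distortion thread via its texture ID.
    return eglCreateContext(display, config, renderThreadContext, contextAttribs);
}
```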
Preferably, modifying the first pose transformation matrix according to the acquired pose parameters of the virtual reality device to obtain a third pose transformation matrix comprises: acquiring the pose parameters of the virtual reality device; setting a second pose transformation matrix based on the pose parameters; and calculating the third pose transformation matrix from the first pose transformation matrix and the second pose transformation matrix.
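A minimal sketch of this composition follows (an illustration, not the patent's code), assuming both pose transformation matrices are pure rotations built from unit quaternions, so the inverse of the first matrix equals its transpose; the Mat4 type and function names are assumptions.

```cpp
#include <array>

// Illustrative 4x4 matrix type (column-major, as OpenGL conventionally stores matrices).
struct Mat4 { std::array<float, 16> m{}; };

// r = a * b, with element (row, col) stored at m[col * 4 + row].
Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r;
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += a.m[k * 4 + row] * b.m[col * 4 + k];
            r.m[col * 4 + row] = sum;
        }
    return r;
}

Mat4 transpose(const Mat4& a) {
    Mat4 r;
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            r.m[col * 4 + row] = a.m[row * 4 + col];
    return r;
}

// Third pose matrix = (first pose)^-1 * (second pose).  For a pure rotation matrix
// built from a unit quaternion, the inverse equals the transpose, so the transpose
// is used here as a cheap inverse.
Mat4 computeThirdPoseMatrix(const Mat4& firstPose, const Mat4& secondPose) {
    return multiply(transpose(firstPose), secondPose);
}
```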
Preferably, the image rendering method further comprises: setting a rendering buffer, where the rendering buffer is used to cache the image rendered by the 3D rendering thread.
Preferably, setting the rendering buffer comprises setting the following parameters of the rendering buffer: the number of color buffer bits, the number of depth buffer bits, the number of stencil buffer bits, the number of multisampling buffers, and the number of samples per pixel.
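A minimal sketch of requesting such a buffer configuration through EGL (an illustration, not the patent's code); the attribute values mirror the embodiment described later in the description, and the helper name is an assumption.

```cpp
#include <EGL/egl.h>

// Illustrative helper (assumed name): request an EGL framebuffer configuration with the
// buffer parameters described in this embodiment.
EGLConfig chooseRenderBufferConfig(EGLDisplay display) {
    const EGLint attribs[] = {
        EGL_RED_SIZE,       8,   // 8 bits each for red, green and blue -> 24-bit colour
        EGL_GREEN_SIZE,     8,
        EGL_BLUE_SIZE,      8,
        EGL_DEPTH_SIZE,     8,   // depth buffer bits, as described in the embodiment
        EGL_STENCIL_SIZE,   8,   // stencil ("template") buffer bits
        EGL_SAMPLE_BUFFERS, 1,   // enable multisampling
        EGL_SAMPLES,        4,   // 4 samples per pixel -> 4x MSAA
        EGL_NONE
    };
    EGLConfig config = nullptr;
    EGLint numConfigs = 0;
    eglChooseConfig(display, attribs, &config, 1, &numConfigs);
    return config;
}
```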
Preferably, performing anti-distortion processing on the rendered image using the third pose transformation matrix and outputting and displaying the processed image comprises: decomposing the third pose transformation matrix into a plurality of sub-transformation matrices, the sub-transformation matrices being respectively used to perform corresponding transformation operations on the rendered image; and, after the rendered image has been anti-distortion processed by the sub-transformation matrices, outputting the processing result to a screen for display.
Preferably, the plurality of sub-transformation matrices comprise: a scaling transformation matrix, a translation transformation matrix, a rotation transformation matrix, and a pose matrix.
In a second aspect of the present invention, there is also provided an image rendering apparatus for virtual reality, the image rendering apparatus comprising: an acquisition module, configured to cause the anti-distortion thread, in response to a vertical synchronization signal, to acquire an image rendered by the 3D rendering thread and a first pose transformation matrix of the 3D rendering thread; a matrix pose determination module, configured to modify the first pose transformation matrix according to acquired pose parameters of the virtual reality device to obtain a third pose transformation matrix; and a transformation rendering module, configured to perform anti-distortion processing on the rendered image using the third pose transformation matrix and to output and display the processed image.
In a third aspect of the present invention, there is also provided an image rendering apparatus for virtual reality, the image rendering apparatus including: at least one processor; a memory coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the at least one processor implements the aforementioned image rendering method for virtual reality by executing the instructions stored by the memory.
In a fourth aspect of the present invention, there is also provided a VR device including a display device, and the aforementioned image rendering device.
The technical solution provided by the embodiments of the invention has the following beneficial effects: the distortion transformation matrix is applied in the anti-distortion thread in real time, which significantly reduces the time spent rendering each picture, increases the display frame rate, and ensures picture smoothness.
Additional features and advantages of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
Fig. 1 is a schematic step diagram of an image rendering method for virtual reality according to an embodiment of the present invention;
Fig. 2 is a diagram illustrating steps of an image rendering method for virtual reality according to an embodiment of the present invention;
Fig. 3 is a block diagram of an image rendering apparatus for virtual reality according to an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration and explanation only, not limitation.
Fig. 1 is a schematic step diagram of an image rendering method for virtual reality according to an embodiment of the present invention, and as shown in fig. 1, the image rendering method for virtual reality is based on an open graphics library, and includes:
s01, responding to a vertical synchronization signal, and acquiring an image rendered by a 3D rendering thread and a first posture transformation matrix of the 3D rendering thread by an anti-distortion thread;
the 3D rendering thread is a 3D main rendering thread of OpenGL, and the 3D rendering process is started when the App triggers the picture refreshing. The 3D rendering process comprises the functions of obtaining posture parameters related to quaternion Q0, rendering binocular textures to a rendering buffer area, setting time posture Q0 (a first posture transformation matrix), switching an anti-distortion thread texture ID, switching a rendering texture ID of a next frame per se and the like.
The anti-distortion thread is an OpenGL 2D rendering thread; it performs the anti-distortion processing that a picture must undergo before VR glasses can display it normally. This thread can be configured to be started by the 3D main rendering thread, and its rendering process includes waiting for the vertical synchronization signal, calculating the distortion transformation matrix, performing anti-distortion and time warping, and submitting the picture to the screen.
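Correspondingly, a minimal sketch of the anti-distortion thread's loop under the same assumptions (all helper functions here are placeholders, not an API defined by the patent):

```cpp
#include <GLES3/gl3.h>

// Placeholder helpers assumed to exist elsewhere; they are not an API defined by the patent.
void   waitForVsync();                                   // block until the next vertical sync signal
void   readPoseQuaternion(float q[4]);                   // current head pose quaternion (Q1)
void   latestPublishedPose(float q0[4]);                 // Q0 published by the 3D rendering thread
GLuint latestPublishedTextureId();                       // texture ID published by the 3D rendering thread
void   computeWarpMatrix(const float q0[4], const float q1[4], float tw[16]);
void   renderAntiDistortion(GLuint sourceTexture, const float tw[16]);
void   submitToScreen();                                 // eglSwapBuffers or equivalent

// Skeleton of the anti-distortion thread's per-frame work.
void antiDistortionLoop(const bool& running) {
    while (running) {
        waitForVsync();                                  // rendering is driven directly by vsync

        float q0[4], q1[4], tw[16];
        latestPublishedPose(q0);                         // pose at 3D render time
        readPoseQuaternion(q1);                          // pose now
        computeWarpMatrix(q0, q1, tw);                   // distortion (time-warp) matrix

        // Both eyes share the same distortion matrix during anti-distortion rendering.
        renderAntiDistortion(latestPublishedTextureId(), tw);
        submitToScreen();                                // submit the processed picture to the screen
    }
}
```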
S02: the first pose transformation matrix is modified according to the acquired pose parameters of the virtual reality device to obtain a third pose transformation matrix;
the anti-distortion thread is created by the application and specifies the rendering buffer type, and the context and texture ID of the shared 3D rendering thread are set during initialization. And (3) creating an off-screen rendering buffer zone by the self, triggering a rendering action by the vsync from the display screen, and acquiring a current attitude quaternion Q1 when rendering is started. The antialiasing thread computes, via Q0 and Q1, a third pose transformation matrix (i.e., a warp transformation matrix) that is used to achieve the effect of real-time warping by performing antialiasing.
S03: the rendered image is anti-distortion processed using the third pose transformation matrix, and the processed image is output and displayed. The anti-distortion thread converts the quaternions Q0 and Q1 into two 4x4 transformation matrices, calculates the third pose transformation matrix from these two matrices, performs anti-distortion rendering with the left and right eyes (display screens) sharing the same third pose transformation matrix, and finally submits the rendered picture to the screen for display.
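A minimal sketch of the quaternion-to-matrix step (illustrative only), assuming unit quaternions in (x, y, z, w) order and the column-major layout used in the earlier sketch:

```cpp
#include <array>

// Column-major 4x4 matrix, as in the earlier sketch.
struct Mat4 { std::array<float, 16> m{}; };

// Convert a unit quaternion (x, y, z, w) into a 4x4 rotation matrix.
// Layout assumption: element (row, col) is stored at m[col * 4 + row].
Mat4 quaternionToMatrix(float x, float y, float z, float w) {
    Mat4 r;
    r.m[0]  = 1 - 2 * (y * y + z * z);
    r.m[1]  = 2 * (x * y + z * w);
    r.m[2]  = 2 * (x * z - y * w);
    r.m[4]  = 2 * (x * y - z * w);
    r.m[5]  = 1 - 2 * (x * x + z * z);
    r.m[6]  = 2 * (y * z + x * w);
    r.m[8]  = 2 * (x * z + y * w);
    r.m[9]  = 2 * (y * z - x * w);
    r.m[10] = 1 - 2 * (x * x + y * y);
    r.m[15] = 1;
    return r;
}
```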
With this embodiment, the 2D rendering performed by the anti-distortion thread involves only about one tenth of the computation of 3D rendering, so applying the distortion transformation matrix in the anti-distortion thread in real time reduces the 40-50 milliseconds originally needed for 3D rendering to the 5-6 milliseconds needed for 2D rendering. Because the anti-distortion thread's rendering signal comes directly from vsync (the vertical synchronization signal) and the 2D rendering time is far shorter than the screen refresh interval (16.7 milliseconds at 60 Hz), the display output frequency can always be kept consistent with the hardware refresh rate of the screen, thereby ensuring image fluency.
In one embodiment of the invention, the anti-distortion thread acquires the rendered image through the texture ID of the rendered image shared by the 3D rendering thread. After the 3D rendering thread is initialized, it checks the running state of the anti-distortion thread instance: if an instance is already running, no new one is created; otherwise the anti-distortion thread is started and texture sharing is set up through the texture ID. After being started, the anti-distortion thread performs its own initialization, which includes setting up the anti-distortion thread's running environment, detecting and obtaining the OpenGL rendering context of the 3D rendering thread, and obtaining the texture ID shared by the 3D rendering thread.
In one embodiment of the invention, the device and sensor parameter states of the virtual reality device are obtained; a second pose transformation matrix is set based on the sensor parameters; and the third pose transformation matrix is calculated from the first pose transformation matrix and the second pose transformation matrix. The anti-distortion thread detects the device and sensor parameter states, binds the rendering buffer, and attaches the texture image with multisampled rendering enabled to a framebuffer object; the anti-distortion thread acquires and sets the 4x4 pose transformation matrix Q1 for the current moment; it then converts the quaternions Q0 and Q1 into two 4x4 transformation matrices, from which the real-time distortion transformation matrix is calculated.
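A minimal sketch of attaching a multisampled texture to a framebuffer object (illustrative only, not the patent's code), assuming OpenGL ES 3.1; the 4-sample count follows the buffer settings described below.

```cpp
#include <GLES3/gl31.h>

// Illustrative helper (assumed name): create a framebuffer object with a 4-sample
// multisampled colour texture attached, so the anti-distortion pass renders with MSAA.
GLuint createMultisampleFbo(GLsizei width, GLsizei height, GLuint* outTexture) {
    GLuint texture = 0;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, texture);
    glTexStorage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8,
                              width, height, GL_TRUE);

    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D_MULTISAMPLE, texture, 0);

    // The attachment must be framebuffer-complete before rendering into it.
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // handle the error in real code
    }
    *outTexture = texture;
    return fbo;
}
```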
The image rendering method further includes setting a rendering buffer, where the rendering buffer is used to cache the image rendered by the 3D rendering thread. The specific settings are as follows: the red, green and blue bits of the OpenGL color buffer are set to 8, the depth buffer to 8 bits, the stencil buffer to 8 bits, the number of multisampling buffers to 1, and the number of samples per pixel to 4. The advantages of this configuration are: 8 bits each for red, green and blue give a 24-bit display depth; an 8-bit depth buffer ensures that 3D depth-of-field information is not lost; an 8-bit stencil buffer ensures that stencil information is not lost; setting the number of multisampling buffers to 1 enables multisampling; and 4 samples per pixel yields a 4x multisample anti-aliasing effect.
Performing anti-distortion processing on the rendered image using the third pose transformation matrix and outputting and displaying it includes the following steps: decomposing the third pose transformation matrix into a plurality of sub-transformation matrices, the sub-transformation matrices being respectively used to perform corresponding transformation operations on the rendered image; and, after the rendered image has been anti-distortion processed by the sub-transformation matrices, outputting the processing result to a screen for display. Specifically: the anti-distortion thread converts the quaternions Q0 and Q1 into two 4x4 transformation matrices, calculates the real-time distortion transformation matrix from these two matrices, performs anti-distortion rendering with the left and right eyes (display screens) sharing the same distortion transformation matrix, and finally submits the rendered picture to the screen for display. The principle of the distortion transformation matrix calculation is: if Q0 is transformed to Q1 by the matrix tw, then Q0 * tw = Q1, and therefore tw = Q0^(-1) * Q1.
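The same relation restated in equation form (with Q0 and Q1 denoting the 4x4 matrices obtained from the two quaternions; the transpose shortcut holds because a matrix built from a unit quaternion is a pure rotation):

```latex
Q_0 \, t_w = Q_1
\;\Longrightarrow\;
t_w = Q_0^{-1} Q_1 = Q_0^{\mathsf{T}} Q_1 .
```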
The calculation combines the sub-transformation matrices as tw = tw1 * tw2 + tw3 * tw4 + tw5, where tw1 to tw5 are the sub-transformation matrices. Specifically, tw1 is a z-axis reverse depth transformation matrix; tw2 is an XY-axis scaling transformation matrix, with a scale factor of 0.5 for each eye; tw3 is an XYZ translation transformation matrix; tw4 is a composite rotation-and-translation transformation matrix; and tw5 is the original 2D pose parameter transformation matrix. The distortion transformation of the rendered image is realized through these sub-transformation matrices. The matrix structures of tw1 to tw5 are given as a figure in the original filing.
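As an illustration of two of these sub-transformation matrices (the per-eye XY scaling tw2 and the XYZ translation tw3), the following is a minimal sketch; the column-major layout and the concrete values are assumptions consistent with the description, not the patent's exact matrices, which are only given as a figure.

```cpp
#include <array>

// Column-major 4x4 matrix, as in the earlier sketches.
struct Mat4 { std::array<float, 16> m{}; };

Mat4 identity() {
    Mat4 r;
    r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0f;
    return r;
}

// tw2-style matrix: scale X and Y, e.g. by 0.5 so that each eye occupies half the screen.
Mat4 xyScale(float sx, float sy) {
    Mat4 r = identity();
    r.m[0] = sx;   // scale along X
    r.m[5] = sy;   // scale along Y
    return r;
}

// tw3-style matrix: translate along X, Y and Z (translation lives in column 3 when column-major).
Mat4 xyzTranslate(float tx, float ty, float tz) {
    Mat4 r = identity();
    r.m[12] = tx;
    r.m[13] = ty;
    r.m[14] = tz;
    return r;
}
```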
Fig. 2 is a diagram illustrating the implementation steps of an image rendering method for virtual reality according to an embodiment of the present invention. To help those skilled in the art understand and implement the method, the implementation steps of this embodiment are described as follows:
(1) The 3D rendering thread performs initialization. It checks the running state of the anti-distortion thread instance: if an instance is already running, no new one is created; otherwise the anti-distortion thread is started and texture sharing is set up through the texture ID.
(2) The anti-distortion thread performs initialization: it sets up the anti-distortion thread's running environment, detects and acquires the OpenGL rendering context of the 3D rendering thread, and acquires the texture ID shared by the 3D rendering thread.
(3) The anti-distortion thread sets its rendering priority to high so that the GPU responds to this thread first. The red, green and blue bits of the OpenGL color buffer are configured as 8, the depth buffer as 8 bits, the stencil buffer as 8 bits, the number of multisampling buffers as 1, and the number of samples per pixel as 4. As explained above, this gives a 24-bit display depth, preserves depth-of-field and stencil information, enables multisampling, and yields a 4x multisample anti-aliasing effect.
(4) The 3D rendering thread acquires and sets the 4x4 pose transformation matrix Q0 for the current moment, and then renders the binocular 3D scene according to the current pose. It then switches the rendering target texture ID for the next frame: the texture just rendered is now used by the anti-distortion thread, and in the next frame the 3D rendering thread renders into the next texture of the texture queue.
(5) The anti-distortion thread waits for the vsync signal and acquires the currently rendered picture through the texture ID.
(6) The anti-distortion thread detects the device and sensor parameter states, binds the rendering buffer, and attaches the texture image with multisampled rendering enabled to the framebuffer object.
(7) The anti-distortion thread acquires and sets the 4x4 pose transformation matrix Q1 for the current moment.
(8) The anti-distortion thread converts the quaternions Q0 and Q1 into two 4x4 transformation matrices, calculates the real-time distortion transformation matrix from these two matrices, performs anti-distortion rendering with the left and right eyes (display screens) sharing the same distortion transformation matrix, and finally submits the rendered picture to the screen for display. The principle of the distortion transformation matrix calculation is as described above and is not repeated here.
Fig. 3 is a block diagram of an image rendering apparatus for virtual reality according to an embodiment of the present invention. In this embodiment, there is also provided an image rendering apparatus for virtual reality, the image rendering apparatus comprising: an acquisition module, configured to cause the anti-distortion thread, in response to a vertical synchronization signal, to acquire an image rendered by the 3D rendering thread and a first pose transformation matrix of the 3D rendering thread; a matrix pose determination module, configured to modify the first pose transformation matrix according to acquired pose parameters of the virtual reality device to obtain a third pose transformation matrix; and a transformation rendering module, configured to perform anti-distortion processing on the rendered image using the third pose transformation matrix and to output and display the processed image.
For the specific definition of each functional module in the image rendering apparatus for virtual reality, reference may be made to the definition of the image rendering method for virtual reality above, and details are not repeated here. The modules in the above apparatus may be implemented wholly or partly in software, in hardware, or in combinations thereof. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke them and execute the operations corresponding to each module.
In an embodiment of the invention, there is also provided an image rendering apparatus for virtual reality, the image rendering apparatus comprising: at least one processor; and a memory coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the at least one processor implements the aforementioned image rendering method for virtual reality by executing the instructions stored in the memory. The control module or processor here performs numerical calculation and logical operations, and has at least a central processing unit (CPU) with data processing capability, a random access memory (RAM), a read-only memory (ROM), and various I/O ports and interrupt systems. The processor and memory in this embodiment may also be those of an existing virtual reality device, in which case the image rendering function for virtual reality implemented by the processor and memory is one of the functions the virtual reality device can provide. The image rendering apparatus may be embodied as a piece of software code running in a hardware execution environment that relies on a controller of an existing virtual reality device. The control module or control device may be, for example, a single-chip microcomputer, a PLC, or common hardware such as a processor.
In an embodiment of the invention, a VR device is also provided; the VR device includes a display device and the aforementioned image rendering device. Because the VR device uses this image rendering device to execute the image rendering method for virtual reality, rendering time is significantly reduced, the display output frequency can always be kept consistent with the hardware refresh rate of the screen, and image fluency is ensured.
With the image rendering method, image rendering apparatus, image rendering device and VR device for virtual reality described above, the work of the open graphics library is divided between the 3D rendering thread and the anti-distortion thread, which reduces rendering time and achieves the technical effect of increasing the display frame rate.
Although the embodiments of the present invention have been described in detail with reference to the accompanying drawings, the embodiments of the present invention are not limited to the details of the above embodiments, and various simple modifications can be made to the technical solutions of the embodiments of the present invention within the technical idea of the embodiments of the present invention, and the simple modifications all belong to the protection scope of the embodiments of the present invention.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, the embodiments of the present invention do not describe every possible combination.
Those skilled in the art will understand that all or part of the steps of the methods of the above embodiments may be implemented by a program, which is stored in a storage medium and includes several instructions to enable a single-chip microcomputer, a chip, or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing program code.
In addition, any combination of different implementation manners of the embodiments of the present invention can be performed, and the embodiments of the present invention should be considered as disclosed in the embodiments of the present invention as long as the combination does not depart from the idea of the embodiments of the present invention.

Claims (10)

1. An image rendering method for virtual reality, based on an open graphics library, the image rendering method comprising:
in response to a vertical synchronization signal, acquiring, by an anti-distortion thread, an image rendered by a 3D rendering thread and a first pose transformation matrix of the 3D rendering thread;
modifying the first pose transformation matrix according to acquired pose parameters of the virtual reality device to obtain a third pose transformation matrix; and
performing anti-distortion processing on the rendered image using the third pose transformation matrix, and outputting and displaying the processed image.
2. The image rendering method according to claim 1, wherein the anti-distortion thread acquires the image rendered by the 3D rendering thread by using a texture ID of the rendered image shared by the 3D rendering thread.
3. The image rendering method according to claim 1, wherein modifying the first pose transformation matrix according to the acquired pose parameters of the virtual reality device to obtain the third pose transformation matrix comprises:
acquiring the pose parameters of the virtual reality device;
setting a second pose transformation matrix based on the pose parameters; and
calculating the third pose transformation matrix from the first pose transformation matrix and the second pose transformation matrix.
4. The image rendering method of claim 1, further comprising: setting a rendering buffer, wherein the rendering buffer is used for caching the image rendered by the 3D rendering thread.
5. The image rendering method according to claim 4, wherein setting the rendering buffer comprises setting the following parameters of the rendering buffer: the number of color buffer bits, the number of depth buffer bits, the number of stencil buffer bits, the number of multisampling buffers, and the number of samples per pixel.
6. The image rendering method of claim 1, wherein performing anti-distortion processing on the rendered image using the third pose transformation matrix and outputting and displaying the processed image comprises:
decomposing the third pose transformation matrix into a plurality of sub-transformation matrices, the sub-transformation matrices being respectively used to perform corresponding transformation operations on the rendered image; and
after the rendered image has been anti-distortion processed by the sub-transformation matrices, outputting the processing result to a screen for display.
7. The image rendering method of claim 5, wherein the plurality of sub-transformation matrices comprise: a scaling transformation matrix, a translation transformation matrix, a rotation transformation matrix, and a pose matrix.
8. An image rendering apparatus for virtual reality, characterized in that the image rendering apparatus comprises:
an acquisition module, configured to cause the anti-distortion thread, in response to a vertical synchronization signal, to acquire an image rendered by the 3D rendering thread and a first pose transformation matrix of the 3D rendering thread;
a matrix pose determination module, configured to modify the first pose transformation matrix according to acquired pose parameters of the virtual reality device to obtain a third pose transformation matrix; and
a transformation rendering module, configured to perform anti-distortion processing on the rendered image using the third pose transformation matrix and to output and display the processed image.
9. An image rendering apparatus for virtual reality, characterized by comprising:
at least one processor;
a memory coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, and the at least one processor implements the image rendering method for virtual reality of any one of claims 1 to 7 by executing the instructions stored by the memory.
10. A VR device, characterized in that the VR device comprises a display device and the image rendering apparatus of claim 9.
CN202110163013.0A 2021-02-05 2021-02-05 Image rendering method, device and equipment for virtual reality and VR equipment Active CN112785530B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110163013.0A CN112785530B (en) 2021-02-05 2021-02-05 Image rendering method, device and equipment for virtual reality and VR equipment

Publications (2)

Publication Number Publication Date
CN112785530A true CN112785530A (en) 2021-05-11
CN112785530B CN112785530B (en) 2024-05-24

Family

ID=75761058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110163013.0A Active CN112785530B (en) 2021-02-05 2021-02-05 Image rendering method, device and equipment for virtual reality and VR equipment

Country Status (1)

Country Link
CN (1) CN112785530B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100110069A1 (en) * 2008-10-31 2010-05-06 Sharp Laboratories Of America, Inc. System for rendering virtual see-through scenes
US20170154460A1 (en) * 2015-11-26 2017-06-01 Le Holdings (Beijing) Co., Ltd. Viewing frustum culling method and device based on virtual reality equipment
US20170192236A1 (en) * 2015-12-31 2017-07-06 Beijing Pico Technology Co., Ltd. Method of Adapting a Virtual Reality Helmet
CN107680047A (en) * 2017-09-05 2018-02-09 北京小鸟看看科技有限公司 A kind of virtual reality scenario rendering intent, image processor and wear display device
US20180081429A1 (en) * 2016-09-16 2018-03-22 Tomas G. Akenine-Moller Virtual reality/augmented reality apparatus and method
CN108876725A (en) * 2017-05-12 2018-11-23 深圳市魔眼科技有限公司 A kind of virtual image distortion correction method and system
CN108921050A (en) * 2018-06-14 2018-11-30 华中科技大学 A kind of virtual reality image processing system based on mobile terminal
CN109308742A (en) * 2018-08-09 2019-02-05 重庆爱奇艺智能科技有限公司 A kind of method and apparatus running 2D application in the 3D scene of virtual reality
CN109510975A (en) * 2019-01-21 2019-03-22 恒信东方文化股份有限公司 A kind of extracting method of video image, equipment and system
CN109739356A (en) * 2018-12-29 2019-05-10 歌尔股份有限公司 Control method, device and the VR helmet that image is shown in VR system
CN110335200A (en) * 2018-03-29 2019-10-15 腾讯科技(深圳)有限公司 A kind of anti-method, apparatus and the relevant device of distorting of virtual reality
CN111595342A (en) * 2020-04-02 2020-08-28 清华大学 Indoor positioning method and system capable of being deployed in large scale
CN111965781A (en) * 2020-08-28 2020-11-20 广东九联科技股份有限公司 VR lens barrel focal length electric adjustment control method, system and device
CN112114664A (en) * 2020-08-21 2020-12-22 青岛小鸟看看科技有限公司 Safety reminding method and device based on virtual reality and head-mounted all-in-one machine

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
TAKAAKI KUDO et al.: "Three-dimensional interactive processor which processes 64 directional images", Proceedings of SPIE, vol. 5599, 25 October 2004 (2004-10-25), pages 1-8 *
LIN Huili: "Application status and prospects of virtual reality technology", China New Telecommunications, vol. 19, no. 16, 31 December 2017 (2017-12-31), page 94 *
XUE Han et al.: "Research and development of a quaternion-based virtual display system on the Linux platform", Proceedings of the 2007 Defense Science and Technology Industry High-level Forum on Virtual Manufacturing Technology, 17 April 2009 (2009-04-17), pages 1-7 *
QIU Zhenqing: "Application and development of image rendering technology in virtual reality helmets", Digital Technology & Application, no. 3, 25 March 2019 (2019-03-25), page 86 *
HUANG Ran: "Implementation of an intelligent video surveillance system based on virtual reality technology", China Master's Theses Full-text Database, Information Science and Technology, no. 5, 15 May 2019 (2019-05-15), pages 136-599 *

Also Published As

Publication number Publication date
CN112785530B (en) 2024-05-24

Similar Documents

Publication Publication Date Title
US10120187B2 (en) Sub-frame scanout for latency reduction in virtual reality applications
US6411294B1 (en) Image display apparatus and image display method
US6816161B2 (en) Vertex assembly buffer and primitive launch buffer
US11270492B2 (en) Graphics processing systems
JP6726946B2 (en) Rendering method, rendering device, and electronic device
US9767595B2 (en) Graphics processing systems
US7068275B2 (en) Methods and apparatus for rendering an image with depth-of-field display
US9182938B2 (en) Method for controlling multiple displays and system thereof
US8395619B1 (en) System and method for transferring pre-computed Z-values between GPUs
JPH05282458A (en) Plural extensible image buffers for graphics system
US10217259B2 (en) Method of and apparatus for graphics processing
US10803547B2 (en) Graphics processing systems using a subset of pipeline stages
CN105844581B (en) A kind of image drawing method, device and equipment
CN106570927A (en) Method of realizing virtual reality based on Android system, terminal and system
US9412194B2 (en) Method for sub-pixel texture mapping and filtering
EP4283466A1 (en) A-buffer dynamic allocation
TWI566205B (en) Method for approximating motion blur in rendered frame from within graphic driver
CN112785530A (en) Image rendering method, device and equipment for virtual reality and VR equipment
US20030160794A1 (en) Arbitration scheme for efficient parallel processing
JP3052839B2 (en) Image processing apparatus and processing method thereof
US6900803B2 (en) Method for rasterizing graphics for optimal tiling performance
KR20120138185A (en) Graphic image processing apparatus and method for realtime transforming low resolution image into high resolution image
JP4754385B2 (en) Program, information recording medium, and image generation system
US20240242403A1 (en) System and Method for Creating a Design Tool Using a Clockwise Fill Rule
KR980010875A (en) 3D Rendering Method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant