CN108241211B - Head-mounted display device and image rendering method - Google Patents

Head-mounted display device and image rendering method

Info

Publication number
CN108241211B
Authority
CN
China
Prior art keywords
frame image
head
texture data
mounted display
eye texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611218181.0A
Other languages
Chinese (zh)
Other versions
CN108241211A (en)
Inventor
张毅
刘扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Idealsee Technology Co Ltd
Original Assignee
Chengdu Idealsee Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Idealsee Technology Co Ltd filed Critical Chengdu Idealsee Technology Co Ltd
Priority to CN201611218181.0A priority Critical patent/CN108241211B/en
Publication of CN108241211A publication Critical patent/CN108241211A/en
Application granted granted Critical
Publication of CN108241211B publication Critical patent/CN108241211B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a head-mounted display device and an image rendering method. The device comprises: a graphics processing unit GPU, used for drawing a scene to be displayed and generating left eye texture data and right eye texture data; and a hardware synthesis module, used for carrying out texture synthesis according to the left eye texture data and the right eye texture data to generate a current frame image. In this scheme, scene drawing and texture synthesis are distributed to different hardware modules, and a dedicated hardware synthesis module executes the texture synthesis task, which solves the technical problem of screen tearing caused by untimely GPU texture synthesis in the prior art and ensures the real-time performance of the texture synthesis task.

Description

Head-mounted display device and image rendering method
Technical Field
The invention relates to the field of wearable head-mounted display devices, and in particular to a head-mounted display device and an image rendering method.
Background
VR (Virtual Reality) display and AR (Augmented Reality) display are mainly divided into two steps: first, offline scene rendering is performed by a GPU (Graphics Processing Unit), that is, the scene to be displayed is drawn into left eye and right eye textures by the GPU; second, the rendered textures are submitted to the GPU for online texture synthesis, during which the textures also undergo anti-distortion processing and asynchronous time warping (Timewarp), and the synthesized result is finally displayed on the screen.
Generally, both steps are implemented by multiplexing the same GPU. Since scene rendering is a relatively heavy task while texture synthesis is a real-time task, the two tasks are distinguished by priority: the scene rendering task is given a normal priority and the texture synthesis task a higher priority. When the left eye and right eye texture data need to be composited to the screen, the GPU can thus switch from the scene rendering task to the texture synthesis task in time and composite the left eye and right eye texture data into the screen buffer for display.
However, multiplexing the GPU usually leads to an excessive GPU load. In some scenes with high real-time requirements, the GPU and the display controller must read and write the screen buffer at the same time; if the GPU's texture synthesis is not timely, a timing error may occur between the GPU writing data into the screen buffer and the display controller reading data from it, causing screen tearing (i.e., parts of two or more frames appear in the same displayed frame). A new rendering approach is therefore needed to guarantee the real-time performance of the texture synthesis task.
Disclosure of Invention
The invention aims to provide a head-mounted display device and an image rendering method, which are used for solving the technical problem of screen tearing caused by untimely GPU texture synthesis in the prior art.
In order to achieve the above object, a first aspect of embodiments of the present invention provides a head-mounted display device, including:
a graphics processing unit GPU, wherein the GPU is used for drawing a scene to be displayed and generating left eye texture data and right eye texture data;
and a hardware synthesis module, used for carrying out texture synthesis according to the left eye texture data and the right eye texture data to generate a current frame image.
Optionally, the head-mounted display device further comprises a display controller;
the display controller is used for generating a vertical synchronization VSYNC signal;
the hardware synthesis module is used for reading the VSYNC signal and carrying out texture synthesis according to the VSYNC signal.
Optionally, the head-mounted display device further includes a central processing unit CPU;
the CPU is used for sending the left eye texture number and the right eye texture number to the hardware synthesis module;
the hardware synthesis module is used for obtaining cache addresses of the left eye texture data and the right eye texture data according to the left eye texture number and the right eye texture number, and reading the left eye texture data and the right eye texture data from a texture cache of the head-mounted display device based on the cache addresses.
Optionally, the head-mounted display device further comprises a sensor;
the sensor is used for detecting direction information used for representing the head rotation of the user when the head-mounted display equipment is worn by the user;
and the hardware synthesis module is used for carrying out direction correction on the left eye texture data and the right eye texture data according to the direction information and synthesizing an intermediate frame image according to the corrected left eye texture data and right eye texture data.
Optionally, the head-mounted display device further comprises a sensor;
the sensor is used for detecting user position information used for representing the position of a user in the space when the head-mounted display equipment is worn by the user;
and the hardware synthesis module is used for estimating the position of the object in the next frame image according to the user position information and the image position information of the object in the current frame image to generate an intermediate frame image.
Optionally, the hardware synthesis module is configured to determine a moving object in the current frame image and a motion trajectory of the moving object in N adjacent frame images including the current frame image, and estimate a position of the moving object in a next frame image according to the motion trajectory to generate an intermediate frame image, where N is a positive integer greater than or equal to 2.
Optionally, the hardware synthesis module is further configured to perform distortion correction on the left-eye texture data and the right-eye texture data according to the lens information of the head-mounted display device, and synthesize the current frame image according to the left-eye texture data and the right-eye texture data after the distortion correction.
Optionally, the hardware synthesis module is a field programmable gate array FPGA, an application specific integrated circuit ASIC, or another GPU.
A second aspect of the embodiments of the present invention provides an image rendering method, including:
a GPU of the head-mounted display device performs scene drawing on a scene to be displayed to generate left eye texture data and right eye texture data;
and the hardware synthesis module of the head-mounted display equipment performs texture synthesis according to the left eye texture data and the right eye texture data to generate a current frame image.
Optionally, the method further includes:
the hardware synthesis module reads a vertical synchronization VSYNC signal of the head-mounted display equipment and carries out texture synthesis according to the VSYNC signal.
One or more technical solutions in the embodiments of the present invention have at least the following technical effects or advantages:
in the scheme of the embodiment of the invention, the head-mounted display device comprises a GPU and a hardware synthesis module. The GPU performs scene drawing on a scene to be displayed to generate left eye texture data and right eye texture data, and the hardware synthesis module performs texture synthesis on the left eye texture data and the right eye texture data to generate a current frame image. Scene drawing and texture synthesis are thus distributed to different hardware modules, with a dedicated hardware synthesis module executing the texture synthesis task. This avoids multiplexing the GPU for both the scene drawing task and the texture synthesis task, reduces the load on the GPU, solves the technical problem of screen tearing caused by untimely GPU texture synthesis in the prior art, and ensures the real-time performance of the texture synthesis task.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be derived from them by those skilled in the art without inventive effort:
fig. 1 is a schematic functional block diagram of a head-mounted display device according to an embodiment of the present invention;
fig. 2 is a schematic diagram of another functional module of a head-mounted display device according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating an image rendering method according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
In the embodiments of the present invention, a head-mounted display device (Head Mounted Display, hereinafter abbreviated as HMD) transmits optical signals to the eyes so as to achieve different effects such as VR, AR (Augmented Reality) and MR (Mixed Reality). For example, the head-mounted display device may be an all-in-one VR headset, a transmissive HMD, or the like.
Referring to fig. 1, fig. 1 is a schematic functional block diagram of a head-mounted display device according to an embodiment of the present invention. The head-mounted display device includes: a graphics processing unit GPU10, configured to perform scene drawing on a scene to be displayed and generate left eye texture data and right eye texture data; and a hardware synthesis module 11, configured to perform texture synthesis according to the left eye texture data and the right eye texture data and generate a current frame image.
Specifically, the GPU10 draws the scene to be displayed according to the latest direction information detected by a sensor of the head-mounted display device, and generates left eye texture data and right eye texture data. The sensor may be one or more of a gyroscope, an accelerometer and a magnetometer; the head-mounted display device fuses the data of these sensors to obtain the final direction information, which characterizes the rotation of the user's head. The GPU10 then outputs the rendering result, i.e., the left eye texture data and the right eye texture data, to a texture buffer of the head-mounted display device. When texture synthesis is required, the hardware synthesis module 11 reads the left eye and right eye texture data from the texture buffer, performs texture synthesis to generate the current frame image, and writes the current frame image into a frame buffer memory of the head-mounted display device; the display controller of the head-mounted display device then fetches the data from the frame buffer memory and displays it on the screen.
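For illustration, the data flow described above can be sketched in a few lines of Python (a minimal simulation only; all class and method names below are hypothetical and do not appear in the patent):

from dataclasses import dataclass

@dataclass
class TextureBuffer:
    # Holds the GPU's most recent left/right eye rendering results.
    left_eye: bytes = b""
    right_eye: bytes = b""

@dataclass
class FrameBuffer:
    # Direct image of the on-screen picture; one cell per pixel.
    pixels: bytes = b""

class GPU:
    def draw_scene(self, scene, direction, tex):
        # Scene drawing: produce left/right eye texture data from the latest
        # fused sensor direction information.
        tex.left_eye = f"L({scene},{direction})".encode()
        tex.right_eye = f"R({scene},{direction})".encode()

class HardwareCompositor:
    def compose(self, tex, fb):
        # Texture synthesis: combine the left and right eye textures into the
        # current frame image and write it into the frame buffer.
        fb.pixels = tex.left_eye + tex.right_eye

class DisplayController:
    def scan_out(self, fb):
        # The display controller fetches the frame buffer and drives the screen.
        return fb.pixels

tex, fb = TextureBuffer(), FrameBuffer()
GPU().draw_scene("scene0", direction=(0.0, 0.0, 0.0), tex=tex)
HardwareCompositor().compose(tex, fb)
print(DisplayController().scan_out(fb))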
The hardware synthesis module 11 may be an FPGA (Field-Programmable Gate Array), an ASIC (Application Specific Integrated Circuit), or another GPU. The frame buffer memory, also called the frame buffer, video memory or screen buffer, is a direct image of the picture displayed on the screen: each memory cell of the frame buffer corresponds to one pixel on the screen, and the whole frame buffer corresponds to one frame image. As the hardware synthesis module 11 continuously writes data into the frame buffer, the display controller automatically fetches the data from it and displays it on the screen.
In the embodiment of the invention, the GPU10 is used for scene drawing and the hardware synthesis module 11 is used for texture synthesis, so scene drawing and texture synthesis are distributed to different hardware modules and the dedicated hardware synthesis module 11 executes the texture synthesis task. This avoids multiplexing the GPU10 for both the scene drawing task and the texture synthesis task, reduces the load on the GPU10, solves the technical problem of screen tearing caused by untimely texture synthesis of the GPU10 in the prior art, and ensures the real-time performance of the texture synthesis task.
Further, in the prior art, while the GPU is drawing a frame of the scene it is also needed for the texture synthesis task. Because texture synthesis has the higher priority, the GPU's current drawing task is interrupted from time to time, so the GPU frequently switches tasks, its cache utilization drops, and its rendering efficiency is reduced.
In the embodiment of the present invention, as shown in fig. 2, the head-mounted display device further includes a display controller 12 configured to generate a VSYNC (Vertical Synchronization) signal. After a refresh of the whole screen is completed, the display controller 12 generates the VSYNC signal; the VSYNC signal keeps the operating rate of the GPU10 consistent with the screen refresh frequency so as to output a stable picture. Specifically, after the VSYNC signal is generated, the CPU controls the GPU10 to render the next frame of image. If the resources of the CPU or the GPU10 are occupied, their scheduling may be untimely, so that a timing error occurs between the GPU10 writing data into the frame buffer and the display controller 12 reading data from it, causing the screen-tearing problem.
In the embodiment of the present invention, the hardware synthesis module 11 directly reads the VSYNC signal and performs texture synthesis in step with it, so that the frame generation rate of the hardware synthesis module 11 stays consistent with the screen refresh frequency. A GPU rendering pipeline generally includes an application processing stage, a geometry processing stage and a rasterization processing stage, whereas the rendering pipeline of the hardware synthesis module 11 includes only a pixel processing stage. The hardware synthesis module 11 therefore uses a much shorter pipeline, and compared with the time the GPU10 needs to render a frame of image, the time the hardware synthesis module 11 needs for texture synthesis is very short. This avoids the screen-tearing problem caused by untimely scheduling of the CPU or the GPU10 and ensures the real-time performance of texture synthesis.
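The VSYNC-driven timing can be sketched as follows (an illustrative Python sketch assuming a simple event loop; the 60 Hz figure and all names are assumptions, not taken from the patent):

import threading, time

vsync = threading.Event()
REFRESH_HZ = 60.0

def display_controller():
    # Emit a VSYNC pulse after every full screen refresh.
    while True:
        time.sleep(1.0 / REFRESH_HZ)
        vsync.set()

def hardware_compositor(compose_frame):
    # Block until VSYNC, then run one (short, pixel-stage-only) synthesis pass,
    # so the compositor's frame rate tracks the screen refresh frequency.
    while True:
        vsync.wait()
        vsync.clear()
        compose_frame()

threading.Thread(target=display_controller, daemon=True).start()
threading.Thread(target=hardware_compositor,
                 args=(lambda: print("texture synthesis for this refresh"),),
                 daemon=True).start()
time.sleep(0.1)  # let a few refresh cycles run in this toy example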
In this embodiment of the present invention, the head-mounted display device further includes a central processing unit CPU13. After controlling the GPU10 to draw the scene, the CPU13 may send the latest left eye texture number (i.e., texture ID) and right eye texture number to the hardware synthesis module 11. When it needs to read texture data, the hardware synthesis module 11 obtains the cache addresses of the left eye texture data and the right eye texture data according to the left eye texture number and the right eye texture number, reads the left eye and right eye texture data from the texture cache of the head-mounted display device based on those cache addresses, synthesizes them into the current frame image, and writes the current frame image into the frame buffer. After acquiring the texture data of the next frame image, the hardware synthesis module 11 releases the texture data of the previous frame and starts a new texture synthesis.
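A minimal sketch of this texture-ID handoff (illustrative Python only; the lookup scheme and all names are assumptions rather than the patent's implementation):

class TextureCache:
    def __init__(self):
        self._buffers = {}          # texture number -> texture data
    def store(self, tex_id, data):
        self._buffers[tex_id] = data
    def address_of(self, tex_id):
        return tex_id               # stand-in for a real cache-address lookup
    def read(self, addr):
        return self._buffers[addr]
    def release(self, tex_id):
        self._buffers.pop(tex_id, None)

class Compositor:
    def __init__(self, cache):
        self.cache = cache
        self.prev_ids = None
    def on_new_texture_ids(self, left_id, right_id):
        # Resolve the numbers sent by the CPU to cache addresses, read the data,
        # synthesize the frame, then release the previous frame's textures.
        left = self.cache.read(self.cache.address_of(left_id))
        right = self.cache.read(self.cache.address_of(right_id))
        frame = left + right
        if self.prev_ids:
            for tid in self.prev_ids:
                self.cache.release(tid)
        self.prev_ids = (left_id, right_id)
        return frame

cache = TextureCache()
cache.store(1, b"L0"); cache.store(2, b"R0")
print(Compositor(cache).on_new_texture_ids(1, 2))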
In this embodiment of the present invention, the hardware synthesis module 11 may further generate an intermediate frame image from the left eye and right eye texture data of the previous frame image (i.e., the current frame image), so that the head-mounted display device can operate at a lower frame rate: by inserting intermediate frame images, the rendering quality of the head-mounted display device is not significantly reduced even when the rendering frame rate of the GPU10, or the frame rate of the VR content, is low.
In one possible implementation, the head-mounted display device further includes a sensor 14, which may be one or more of a gyroscope, an accelerometer and a magnetometer. The sensor 14 detects direction information characterizing the rotation of the user's head while the head-mounted display device is worn; the hardware synthesis module 11 then performs direction correction on the left eye texture data and the right eye texture data according to this direction information, and synthesizes an intermediate frame image from the corrected left eye and right eye texture data.
Specifically, when a head-mounted display device is in use, if the user's head rotates quickly and the GPU10 takes too long to render a frame of image, scene rendering lags behind: the user's head has already turned, but the corresponding image has not yet been rendered or the previous frame's image is shown, so the picture appears to jitter. To solve this problem, in the embodiment of the present invention the hardware synthesis module 11 performs direction correction on the left eye and right eye texture data of the previous frame image (that is, the current frame image) according to the direction information, synthesizes an intermediate frame image from the corrected left eye and right eye texture data, and displays the intermediate frame image, effectively reducing the jitter of the displayed picture. The direction information here is the head-rotation direction information detected by the sensor 14.
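One simple way to realize such a direction correction is to re-project the previous frame by the difference between the head direction at render time and the latest sensed direction. The Python sketch below uses a small-angle pixel-shift approximation with numpy (the field of view, the shift model and np.roll's wrap-around are assumptions for illustration, not the patent's method):

import numpy as np

def correct_direction(texture, yaw_render, pitch_render, yaw_now, pitch_now,
                      fov_deg=90.0):
    # texture: H x W image array; angles in radians.
    w = texture.shape[1]                         # image width in pixels
    px_per_rad = w / np.deg2rad(fov_deg)         # rough pixels-per-radian scale
    dx = int(round((yaw_now - yaw_render) * px_per_rad))
    dy = int(round((pitch_now - pitch_render) * px_per_rad))
    # Shift the image opposite to the head motion so the scene appears stable;
    # np.roll wraps at the border, which a real implementation would handle.
    return np.roll(np.roll(texture, -dx, axis=1), dy, axis=0)

left_eye = np.arange(16, dtype=np.float32).reshape(4, 4)
corrected = correct_direction(left_eye, 0.0, 0.0, np.deg2rad(45), 0.0)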
Compared with the time the GPU10 needs to render one frame of image, the time the hardware synthesis module 11 needs to generate the intermediate frame image is very short, and because the intermediate frame image is generated by the hardware synthesis module 11, no GPU10 resources are occupied and the scene drawing task of the GPU10 is not affected. Moreover, since the hardware synthesis module 11 can directly read the VSYNC signal of the display controller 12 and perform texture synthesis in step with it, the hardware synthesis module 11 can keep its frame rate consistent with the screen refresh frequency even when the rendering frame rate of the GPU10, or the frame rate of the VR content, is low, thereby reducing display jitter.
In another possible embodiment, the sensor 14 is also used to detect user position information characterizing the position of the user in space while the head-mounted display device is worn; the hardware synthesis module 11 then estimates the position of an object in the next frame image according to the user position information and the image position information of the object in the current frame image, so as to generate an intermediate frame image.
Specifically, when a user wearing the head-mounted display device moves through space, the device determines the user's position in space and then renders the corresponding scene according to that position. If the rendering speed of the head-mounted display device cannot keep up with the user's movement, the displayed picture is not smooth. To solve this problem, in the embodiment of the present invention the hardware synthesis module 11 estimates the position of the object in the next frame image according to the user position information detected by the sensor 14 and the image position information of the object in the current frame image, generates an intermediate frame image, and then displays it, thereby improving the fluency of the displayed picture.
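As an illustration of this positional estimate, the sketch below shifts an object's image position by the parallax implied by the user's translation, assuming a pinhole-camera model, a known object depth and a focal length in pixels (all of these are assumptions made for the example, not parameters given in the patent):

import numpy as np

def estimate_next_image_position(obj_xy_px, depth_m, user_pos_render,
                                 user_pos_now, focal_px=800.0):
    # obj_xy_px: (x, y) pixel position of the object in the current frame image.
    delta = np.asarray(user_pos_now, float) - np.asarray(user_pos_render, float)
    # Lateral/vertical user motion moves a static object the opposite way on
    # screen, inversely proportional to its depth (simple parallax).
    dx_px = -focal_px * delta[0] / depth_m
    dy_px = -focal_px * delta[1] / depth_m
    return (obj_xy_px[0] + dx_px, obj_xy_px[1] + dy_px)

print(estimate_next_image_position((320, 240), depth_m=2.0,
                                   user_pos_render=(0.0, 0.0, 0.0),
                                   user_pos_now=(0.05, 0.0, 0.0)))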
Compared with the time the GPU10 needs to render one frame of image, the time the hardware synthesis module 11 needs to generate the intermediate frame image is very short. By having the hardware synthesis module 11 estimate the position of the object in the next frame image from the user position information, generate the intermediate frame image and display it, the rendering speed of the head-mounted display device can be matched with the moving speed of the user, thereby improving the fluency of the displayed picture.
In a third possible implementation, for an object moving in the displayed scene, the hardware synthesis module 11 is further configured to determine a moving object in the current frame image and the motion trajectory of the moving object in N adjacent frame images including the current frame image, and to estimate the position of the moving object in the next frame image according to the motion trajectory so as to generate an intermediate frame image, where N is a positive integer greater than or equal to 2.
The value of N can be set according to the computing capacity of the head-mounted display device: the larger N is, the greater the amount of computation and the more accurate the position estimate. For example, N may be 2. Suppose the moving object in the current frame image is a basketball whose motion trajectory moves from bottom to top across the current frame image and the frame before it; the hardware synthesis module 11 can then predict the position of the basketball in the next frame image according to this motion trajectory and generate an intermediate frame image in which, compared with the current frame image, the basketball has moved further from bottom to top. Displaying the intermediate frame image makes the movement of the basketball in the scene smoother.
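The simplest form of this trajectory extrapolation (the N = 2, constant-velocity case; larger N could fit a higher-order trajectory) can be sketched as follows. The code is a hedged illustration, not the patent's estimator:

import numpy as np

def predict_next_position(track_xy):
    # track_xy: (x, y) image positions of the moving object over the last
    # N >= 2 frames, oldest first.
    pts = np.asarray(track_xy, dtype=float)
    velocity = pts[-1] - pts[-2]          # per-frame displacement
    nxt = pts[-1] + velocity              # constant-velocity prediction
    return (float(nxt[0]), float(nxt[1]))

# Basketball example: moving bottom-to-top (image y decreases upward here).
print(predict_next_position([(100, 400), (100, 350)]))   # -> (100.0, 300.0)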
It can be seen that in the above embodiments, by inserting intermediate frame images, the rendering quality of the head-mounted display device is not significantly reduced even when the rendering frame rate of the GPU10, or the frame rate of the VR content, is low, so the head-mounted display device can operate at a lower frame rate.
In this embodiment of the present invention, the hardware synthesis module 11 is further configured to perform distortion correction on the left eye texture data and the right eye texture data according to the lens information of the head-mounted display device, and to synthesize the current frame image according to the distortion-corrected left eye and right eye texture data. The lens information includes the magnification, distortion parameters and the like of the lens group; the hardware synthesis module 11 can correct the left eye and right eye texture data according to the magnification and distortion parameters before synthesizing them into the frame buffer, thereby reducing image distortion.
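One common way to express such a correction is a radial polynomial applied to normalized texture coordinates, as in the sketch below; the polynomial form and the coefficient values are assumptions for illustration and would in practice come from the lens information:

import numpy as np

def radial_correct(uv, k1=0.22, k2=0.24):
    # uv: (..., 2) normalized coordinates centered on the lens axis; the
    # coefficients k1, k2 (assumed values) characterize the lens distortion.
    uv = np.asarray(uv, dtype=float)
    r2 = np.sum(uv * uv, axis=-1, keepdims=True)
    scale = 1.0 + k1 * r2 + k2 * r2 * r2    # radial distortion polynomial
    return uv * scale                        # rescale each sample radially

print(radial_correct([[0.0, 0.0], [0.5, 0.5]]))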
In another embodiment, before generating the intermediate frame image, the hardware synthesis module 11 may also perform distortion correction on the texture data and then synthesize the intermediate frame image according to the distortion-corrected texture data; the intermediate frame image may be generated according to any one of the three implementations above, which is not limited here.
Based on the same inventive concept, an embodiment of the present invention further provides an image rendering method, as shown in fig. 3, including:
step 30, a GPU of the head-mounted display device performs scene drawing on a scene to be displayed to generate left eye texture data and right eye texture data;
and step 31, performing texture synthesis by a hardware synthesis module of the head-mounted display device according to the left eye texture data and the right eye texture data to generate a current frame image.
Optionally, the method further includes:
the hardware synthesis module reads a vertical synchronization VSYNC signal of the head-mounted display equipment and carries out texture synthesis according to the VSYNC signal.
Optionally, the method further includes:
a Central Processing Unit (CPU) of the head-mounted display device sends a left eye texture number and a right eye texture number to the hardware synthesis module;
the hardware synthesis module obtains cache addresses of the left eye texture data and the right eye texture data according to the left eye texture number and the right eye texture number, and reads the left eye texture data and the right eye texture data from a texture cache of the head-mounted display device based on the cache addresses.
Optionally, the method further includes:
when a user wears the head-mounted display device, a sensor of the head-mounted display device detects direction information representing head rotation of the user;
and the hardware synthesis module performs direction correction on the left eye texture data and the right eye texture data according to the direction information, and synthesizes an intermediate frame image according to the corrected left eye texture data and right eye texture data.
Optionally, the method further includes:
while a user is wearing the head-mounted display device, a sensor of the head-mounted display device detects user location information that characterizes a location of the user in space;
and the hardware synthesis module estimates the position of the object in the next frame image according to the user position information and the image position information of the object in the current frame image to generate an intermediate frame image.
Optionally, the method further includes:
and the hardware synthesis module determines a moving object in the current frame image and a motion track of the moving object in N adjacent frame images including the current frame image, estimates the position of the moving object in the next frame image according to the motion track, and generates an intermediate frame image, wherein N is a positive integer greater than or equal to 2.
Optionally, the method further includes:
and the hardware synthesis module performs distortion correction on the left eye texture data and the right eye texture data according to the lens information of the head-mounted display device, and synthesizes the current frame image according to the distortion-corrected left eye texture data and right eye texture data.
The variations and specific examples of the head-mounted display device in the foregoing embodiment of fig. 1 also apply to the image rendering method of this embodiment. Through the foregoing detailed description of the head-mounted display device, those skilled in the art can clearly understand how the image rendering method of this embodiment is implemented, so it is not described again here for brevity.
Based on the same inventive concept, the embodiment of the invention further provides a split-type device, which comprises a host and a head-mounted display device. The host comprises a graphics processing unit GPU for drawing a scene to be displayed and generating left eye texture data and right eye texture data; the head-mounted display device comprises a hardware synthesis module for carrying out texture synthesis according to the left eye texture data and the right eye texture data to generate a current frame image. The host and the head-mounted display device may be connected by wire or wirelessly, for example through an HDMI (High Definition Multimedia Interface), an MHL (Mobile High-Definition Link) or a USB 3.0 (Universal Serial Bus) interface.
One or more technical solutions in the embodiments of the present invention have at least the following technical effects or advantages:
in the scheme of the embodiment of the invention, the head-mounted display device comprises a GPU and a hardware synthesis module. The GPU performs scene drawing on a scene to be displayed to generate left eye texture data and right eye texture data, and the hardware synthesis module performs texture synthesis on the left eye texture data and the right eye texture data to generate a current frame image. Scene drawing and texture synthesis are thus distributed to different hardware modules, with a dedicated hardware synthesis module executing the texture synthesis task. This avoids multiplexing the GPU for both the scene drawing task and the texture synthesis task, reduces the load on the GPU, solves the technical problem of screen tearing caused by untimely GPU texture synthesis in the prior art, and ensures the real-time performance of the texture synthesis task.
All of the features disclosed in this specification, or all of the steps in any method or process so disclosed, may be combined in any combination, except combinations of features and/or steps that are mutually exclusive.
Any feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving equivalent or similar purposes, unless expressly stated otherwise. That is, unless expressly stated otherwise, each feature is only an example of a generic series of equivalent or similar features.
The invention is not limited to the foregoing embodiments. The invention extends to any novel feature or any novel combination of features disclosed in this specification and any novel method or process steps or any novel combination of features disclosed.

Claims (7)

1. A head-mounted display device, comprising:
a graphics processing unit GPU, wherein the GPU is used for drawing a scene to be displayed and generating left eye texture data and right eye texture data;
the hardware synthesis module is used for carrying out texture synthesis according to the left eye texture data and the right eye texture data to generate a current frame image;
the hardware synthesis module is used for carrying out direction correction on the left eye texture data and the right eye texture data according to direction information, and synthesizing an intermediate frame image according to the corrected left eye texture data and right eye texture data, wherein the direction information is the direction information which is detected by a sensor of the head-mounted display equipment and used for representing the rotation of the head of a user when the user wears the head-mounted display equipment; or
The hardware synthesis module is used for estimating the position of the object in the next frame image according to user position information and image position information of the object in the current frame image to generate an intermediate frame image, wherein the user position information is user position information which is detected by a sensor of the head-mounted display device and used for representing the position of the user in the space when the user wears the head-mounted display device; or
The hardware synthesis module is used for determining a moving object in the current frame image and a motion track of the moving object in an adjacent N frame image including the current frame image, estimating the position of the moving object in the next frame image according to the motion track, and generating an intermediate frame image, wherein N is a positive integer greater than or equal to 2;
wherein the intermediate frame image is for display after the current frame image.
2. The head mounted display device of claim 1, wherein the head mounted display device further comprises a display controller;
the display controller is used for generating a vertical synchronization VSYNC signal;
the hardware synthesis module is used for reading the VSYNC signal and carrying out texture synthesis according to the VSYNC signal.
3. The head mounted display device of claim 2, wherein the head mounted display device further comprises a Central Processing Unit (CPU);
the CPU is used for sending the left eye texture number and the right eye texture number to the hardware synthesis module;
the hardware synthesis module is used for obtaining cache addresses of the left eye texture data and the right eye texture data according to the left eye texture number and the right eye texture number, and reading the left eye texture data and the right eye texture data from a texture cache of the head-mounted display device based on the cache addresses.
4. The head-mounted display device of any one of claims 1-3, wherein the hardware synthesis module is further configured to perform distortion correction on the left eye texture data and the right eye texture data according to lens information of the head-mounted display device, and synthesize the current frame image according to the distortion-corrected left eye texture data and right eye texture data.
5. The head-mounted display device of claim 1, wherein the hardware synthesis module is a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), or another GPU.
6. An image rendering method, comprising:
a GPU of the head-mounted display device performs scene drawing on a scene to be displayed to generate left eye texture data and right eye texture data;
a hardware synthesis module of the head-mounted display device performs texture synthesis according to the left eye texture data and the right eye texture data to generate a current frame image;
the hardware synthesis module of the head-mounted display device generates an intermediate frame image from the left eye texture data and the right eye texture data according to a manner of generating the intermediate frame image, wherein the intermediate frame image is displayed after the current frame image;
the manner of generating the intermediate frame image comprises:
the hardware synthesis module performs direction correction on the left eye texture data and the right eye texture data according to direction information, and synthesizes an intermediate frame image according to the corrected left eye texture data and right eye texture data, wherein the direction information is the direction information which is detected by a sensor of the head-mounted display device and used for representing the rotation of the head of a user when the user wears the head-mounted display device; or
The hardware synthesis module estimates the position of the object in the next frame image according to user position information and image position information of the object in the current frame image to generate an intermediate frame image, wherein the user position information is user position information which is detected by a sensor of the head-mounted display device and used for representing the position of the user in the space when the user wears the head-mounted display device; or
The hardware synthesis module determines a moving object in the current frame image and a motion track of the moving object in an adjacent N frame image including the current frame image, estimates the position of the moving object in the next frame image according to the motion track, and generates an intermediate frame image, wherein N is a positive integer greater than or equal to 2.
7. The method of claim 6, wherein the method further comprises:
the hardware synthesis module reads a vertical synchronization VSYNC signal of the head-mounted display equipment and carries out texture synthesis according to the VSYNC signal.
CN201611218181.0A 2016-12-26 2016-12-26 Head-mounted display device and image rendering method Active CN108241211B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611218181.0A CN108241211B (en) 2016-12-26 2016-12-26 Head-mounted display device and image rendering method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611218181.0A CN108241211B (en) 2016-12-26 2016-12-26 Head-mounted display device and image rendering method

Publications (2)

Publication Number Publication Date
CN108241211A (en) 2018-07-03
CN108241211B (en) 2020-09-15

Family

ID=62701378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611218181.0A Active CN108241211B (en) 2016-12-26 2016-12-26 Head-mounted display device and image rendering method

Country Status (1)

Country Link
CN (1) CN108241211B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI862293B (en) * 2023-01-18 2024-11-11 宏達國際電子股份有限公司 Image quality adjusting method and host

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945138B (en) * 2017-12-08 2020-04-03 京东方科技集团股份有限公司 Picture processing method and device
CN109754380B (en) * 2019-01-02 2021-02-02 京东方科技集团股份有限公司 Image processing method, image processing device and display device
CN111190560B (en) * 2019-12-24 2022-09-06 青岛小鸟看看科技有限公司 Method, device, equipment and storage medium for acquiring hardware vertical synchronization signal
WO2022170621A1 (en) * 2021-02-12 2022-08-18 Qualcomm Incorporated Composition strategy searching based on dynamic priority and runtime statistics
CN114095655B (en) * 2021-11-17 2024-08-13 海信视像科技股份有限公司 Method and device for displaying streaming data

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049926A (en) * 2012-12-24 2013-04-17 广东威创视讯科技股份有限公司 Distributed three-dimensional rendering system
CN103439793A (en) * 2013-07-18 2013-12-11 成都理想境界科技有限公司 Hmd
WO2016073557A1 (en) * 2014-11-04 2016-05-12 The University Of North Carolina At Chapel Hill Minimal-latency tracking and display for matching real and virtual worlds
WO2016118306A1 (en) * 2015-01-20 2016-07-28 Microsoft Technology Licensing, Llc Wearable display with bonded graphite heatpipe
CN106154553A (en) * 2016-08-01 2016-11-23 全球能源互联网研究院 A kind of electric inspection process intelligent helmet Binocular displays system and its implementation

Also Published As

Publication number Publication date
CN108241211A (en) 2018-07-03

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant