WO2023005042A1 - Image rendering method, apparatus, device, and computer-readable storage medium - Google Patents

Image rendering method, apparatus, device, and computer-readable storage medium

Info

Publication number
WO2023005042A1
WO2023005042A1 (application PCT/CN2021/128030, CN2021128030W)
Authority
WO
WIPO (PCT)
Prior art keywords
rendering
viewpoint
rendered
data
synchronization signal
Prior art date
Application number
PCT/CN2021/128030
Other languages
English (en)
French (fr)
Inventor
邱绪东
Original Assignee
歌尔股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 歌尔股份有限公司
Priority to US18/007,202 (published as US20240265485A1)
Publication of WO2023005042A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/20 Processor architectures; processor configuration, e.g. pipelining
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44004 Processing of video elementary streams involving video buffer management, e.g. video decoder buffer or video display buffer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 Monomedia components thereof
    • H04N 21/816 Monomedia components thereof involving special video data, e.g. 3D video

Definitions

  • The present application relates to the technical field of image processing, and in particular to an image rendering method, apparatus, device, and computer-readable storage medium.
  • AR/VR uses image rendering technology to refresh rendered virtual images to a display device, and the user experiences the virtual/augmented display effect through a head-mounted display device. Because the rendering process takes time, there is a time delay between reality and perception. This delay must be kept within a certain range, otherwise the user will experience discomfort such as dizziness. To relieve this discomfort, ATW (Asynchronous Time Warp) technology emerged.
  • Current smart glasses assume a screen refresh period T and render the left eye during the first T/2. If left-eye rendering finishes before time T/2, the GPU waits until T/2 to render the right eye. Left-eye ATW is started at the fixed time T/2 and right-eye ATW at time T; if rendering is not finished when ATW starts, the previous frame is used instead. Under this rendering mechanism, the rendering process may contain idle waiting periods and useless rendering, wasting GPU processing resources while keeping the image rendering usage rate low.
  • The main purpose of the present application is to provide an image rendering method, apparatus, device, and computer-readable storage medium, aiming to solve the technical problem that the existing rendering mechanism wastes GPU processing resources while achieving a low image rendering usage rate.
  • To this end, the present application provides an image rendering method, the method comprising the following steps:
  • after receiving a synchronization signal sent according to a preset time period T1, acquiring the current first data to be rendered of a first viewpoint and starting rendering;
  • if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received, stopping rendering the first data to be rendered, and acquiring the current second data to be rendered of a second viewpoint to start rendering, where 0 < T2 < T1;
  • if the first data to be rendered finishes rendering before time T2 after the synchronization signal is received, acquiring the second data to be rendered and starting rendering after the rendering of the first data to be rendered is completed;
  • at time T2 after the synchronization signal is received, asynchronously time-warping the most recently rendered frame image of the first viewpoint and storing it in a display buffer, and at the moment the synchronization signal is received, asynchronously time-warping the most recently rendered frame image of the second viewpoint and storing it in the display buffer.
  • Optionally, the step of acquiring the current first data to be rendered of the first viewpoint and starting rendering includes: detecting, when the synchronization signal is received, whether the second viewpoint has third data to be rendered that is still being rendered; if so, stopping rendering the third data to be rendered and acquiring the current first data to be rendered of the first viewpoint to start rendering; if not, acquiring the first data to be rendered and starting rendering.
  • Optionally, the method further includes: at the moment the synchronization signal is received, acquiring the first-viewpoint frame image of the first viewpoint currently cached in the display buffer; and using the first-viewpoint frame image to refresh the frame image of the first viewpoint currently displayed on the display device.
  • Optionally, the method further includes: at time T2 after the synchronization signal is received, acquiring the second-viewpoint frame image of the second viewpoint currently cached in the display buffer; and using the second-viewpoint frame image to refresh the frame image of the second viewpoint currently displayed on the display device.
  • Optionally, the method further includes: if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received, incrementing the frame-drop count of the first viewpoint once; and when the frame-drop count of the first viewpoint reaches a preset count, increasing T2, where the increased T2 is smaller than T1.
  • Optionally, the method further includes: if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received, incrementing the frame-drop count of the first viewpoint once; if the second viewpoint has data to be rendered that is still being rendered at the moment the synchronization signal is received, incrementing the frame-drop count of the second viewpoint once; when the ratio of the first viewpoint's frame-drop count to the second viewpoint's is greater than a preset ratio, increasing T2, where the increased T2 is smaller than T1; and when the ratio of the second viewpoint's frame-drop count to the first viewpoint's is greater than the preset ratio, decreasing T2, where the decreased T2 is greater than 0.
  • Optionally, the method further includes: when the synchronization signal is received for the first time, setting one of the first viewpoint and the second viewpoint as the left-eye viewpoint and the other as the right-eye viewpoint; and from the first reception of the synchronization signal, swapping the left/right-eye assignments of the first viewpoint and the second viewpoint every preset time period T3.
  • The present application also provides an image rendering apparatus, the apparatus comprising:
  • a first rendering module, configured to acquire the current first data to be rendered of the first viewpoint and start rendering after receiving the synchronization signal sent according to the preset time period T1;
  • a second rendering module, configured to stop rendering the first data to be rendered if it has not finished rendering at time T2 after the synchronization signal is received, and to acquire the current second data to be rendered of the second viewpoint to start rendering, where 0 < T2 < T1;
  • the second rendering module is further configured to acquire the second data to be rendered and start rendering once the first data to be rendered has finished, if the first data to be rendered finishes rendering before time T2 after the synchronization signal is received;
  • a cache module, configured to asynchronously time-warp the most recently rendered frame image of the first viewpoint at time T2 after the synchronization signal is received and store it in the display buffer, and to asynchronously time-warp the most recently rendered frame image of the second viewpoint at the moment the synchronization signal is received and store it in the display buffer.
  • The present application also provides an image rendering device, which includes: a memory, a processor, and an image rendering program stored in the memory and executable on the processor; when the image rendering program is executed by the processor, the steps of the image rendering method described above are implemented.
  • The present application also proposes a computer-readable storage medium on which an image rendering program is stored; when the image rendering program is executed by a processor, the steps of the image rendering method described above are implemented.
  • In the present application, after the synchronization signal is received, the current first data to be rendered of the first viewpoint is acquired and rendering starts; if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received, its rendering is stopped and the current second data to be rendered of the second viewpoint is acquired for rendering; if the first data to be rendered finishes before time T2, the second data to be rendered is acquired and rendered as soon as the first finishes; at time T2 after the synchronization signal is received, the most recently rendered frame image of the first viewpoint is asynchronously time-warped and stored in the display buffer, and at the moment the synchronization signal is received, the most recently rendered frame image of the second viewpoint is asynchronously time-warped and stored in the display buffer.
  • By stopping the rendering of the first data to be rendered when it is still unfinished at time T2, the GPU is prevented from continuing useless work on it, so GPU resources are not wasted; when the first data to be rendered finishes before time T2, the GPU can immediately acquire the second viewpoint's current second data to be rendered, which avoids wasting GPU resources in a waiting state. Starting the rendering of the second data to be rendered earlier also raises its completion rate, further reducing the chance of useless GPU work and improving the image rendering usage rate.
  • FIG. 1 is a schematic structural diagram of the hardware operating environment involved in the solutions of the embodiments of the present application;
  • FIG. 2 is a schematic flowchart of the first embodiment of the image rendering method of the present application;
  • FIG. 3 is a schematic diagram of a rendering flow involved in an embodiment of the image rendering method of the present application;
  • FIG. 4 is a schematic diagram of the functional modules of a preferred embodiment of the image rendering apparatus of the present application.
  • FIG. 1 is a schematic diagram of a device structure of a hardware operating environment involved in the solution of the embodiment of the present application.
  • It should be noted that the image rendering device in the embodiments of the present application may be a device such as a smart phone, a personal computer, or a server, which is not specifically limited here.
  • The image rendering device may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002.
  • The communication bus 1002 is used to realize connection and communication among these components.
  • The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
  • The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
  • The memory 1005 may be a high-speed RAM, or a stable non-volatile memory such as a disk memory.
  • Optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
  • Those skilled in the art can understand that the device structure shown in FIG. 1 does not constitute a limitation on the image rendering device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
  • As a computer storage medium, the memory 1005 may include an operating system, a network communication module, a user interface module, and an image rendering program.
  • The operating system is a program that manages and controls the hardware and software resources of the device, and supports the running of the image rendering program and other software or programs.
  • In the device shown in FIG. 1, the user interface 1003 is mainly used for data communication with the client;
  • the network interface 1004 is mainly used for establishing a communication connection with the server;
  • and the processor 1001 can be used to call the image rendering program stored in the memory 1005 and perform the following operations:
  • after receiving a synchronization signal sent according to a preset time period T1, acquiring the current first data to be rendered of a first viewpoint and starting rendering;
  • if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received, stopping rendering the first data to be rendered, and acquiring the current second data to be rendered of a second viewpoint to start rendering, where 0 < T2 < T1;
  • if the first data to be rendered finishes rendering before time T2 after the synchronization signal is received, acquiring the second data to be rendered and starting rendering after the rendering of the first data to be rendered is completed;
  • at time T2 after the synchronization signal is received, asynchronously time-warping the most recently rendered frame image of the first viewpoint and storing it in a display buffer, and at the moment the synchronization signal is received, asynchronously time-warping the most recently rendered frame image of the second viewpoint and storing it in the display buffer.
  • Further, the step of acquiring the current first data to be rendered of the first viewpoint and starting rendering includes: detecting, when the synchronization signal is received, whether the second viewpoint has third data to be rendered that is still being rendered; if so, stopping rendering the third data to be rendered and acquiring the current first data to be rendered of the first viewpoint to start rendering; if not, acquiring the first data to be rendered and starting rendering.
  • Further, the processor 1001 can also be used to call the image rendering program stored in the memory 1005 and perform the following operations: at the moment the synchronization signal is received, acquiring the first-viewpoint frame image of the first viewpoint currently cached in the display buffer; and using the first-viewpoint frame image to refresh the frame image of the first viewpoint currently displayed on the display device.
  • Further, the processor 1001 can also be used to call the image rendering program stored in the memory 1005 and perform the following operations: at time T2 after the synchronization signal is received, acquiring the second-viewpoint frame image of the second viewpoint currently cached in the display buffer; and using the second-viewpoint frame image to refresh the frame image of the second viewpoint currently displayed on the display device.
  • Further, the processor 1001 can also be used to call the image rendering program stored in the memory 1005 and perform the following operations: if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received, incrementing the frame-drop count of the first viewpoint once; and when the frame-drop count of the first viewpoint reaches a preset count, increasing T2, where the increased T2 is smaller than T1.
  • Further, the processor 1001 can also be used to call the image rendering program stored in the memory 1005 and perform the following operations: if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received, incrementing the frame-drop count of the first viewpoint once; if the second viewpoint has data to be rendered that is still being rendered at the moment the synchronization signal is received, incrementing the frame-drop count of the second viewpoint once; when the ratio of the first viewpoint's frame-drop count to the second viewpoint's is greater than a preset ratio, increasing T2, where the increased T2 is smaller than T1; and when the ratio of the second viewpoint's frame-drop count to the first viewpoint's is greater than the preset ratio, decreasing T2, where the decreased T2 is greater than 0.
  • Further, the processor 1001 can also be used to call the image rendering program stored in the memory 1005 and perform the following operations: when the synchronization signal is received for the first time, setting one of the first viewpoint and the second viewpoint as the left-eye viewpoint and the other as the right-eye viewpoint; and from the first reception of the synchronization signal, swapping the left/right-eye assignments of the first viewpoint and the second viewpoint every preset time period T3.
  • Based on the above structure, various embodiments of the image rendering method are proposed. Referring to FIG. 2, FIG. 2 is a schematic flowchart of the first embodiment of the image rendering method of the present application.
  • The embodiments of the present application provide embodiments of the image rendering method. It should be noted that although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from the one here.
  • The image rendering method of the present application is applied to a device or processor capable of image rendering, for example a graphics processor (GPU). The following embodiments are described using a GPU as an example.
  • In this embodiment, the image rendering method includes:
  • Step S10: after receiving the synchronization signal sent according to the preset time period T1, acquire the current first data to be rendered of the first viewpoint and start rendering;
  • The synchronization signal is a signal for controlling the refresh frequency of the display device.
  • In this embodiment, the synchronization signal may be a vertical synchronization signal (vsync), or another signal capable of controlling the refresh frequency of the display device.
  • Generally, the graphics card's DAC (digital-to-analog converter) generates a vsync each time it finishes scanning a frame, marking the end of one frame and the beginning of a new frame.
  • In this embodiment, the time period at which the synchronization signal is sent is called T1; that is, a synchronization signal is sent every T1.
  • Each time the GPU receives a synchronization signal, it executes the same rendering flow; only the data processed each time changes. The following is a specific description using one synchronization signal as an example.
  • When the GPU receives a synchronization signal, it acquires the current data to be rendered of the first viewpoint (hereinafter called the first data to be rendered, for distinction) and starts rendering the first data to be rendered.
  • In VR (Virtual Reality)/AR (Augmented Reality) scenes, a 3D visual effect is formed by rendering different images for the left and right eyes. In this embodiment, the first viewpoint may be one of the left eye and the right eye: if the first viewpoint is the left eye, the second viewpoint is the right eye, and if the first viewpoint is the right eye, the second viewpoint is the left eye.
  • The data to be rendered is data that needs to be rendered by the GPU, such as vertex data and texture data. It may come from the CPU or from the output of the previous step in the GPU's data-processing flow; the source of the data to be rendered is not limited here.
  • Rendering the data to be rendered may include rendering operations such as vertex shading and texture filling; for details, refer to existing GPU image rendering principles, which are not elaborated here.
  • The data to be rendered of a given viewpoint is updated over time, and the update period may or may not be synchronized with the sending period of the synchronization signal; this embodiment places no limit on this. When the GPU needs to acquire the first viewpoint's data to be rendered after receiving the synchronization signal, it acquires the data to be rendered of the first viewpoint at the current moment for rendering.
  • It should be noted that in VR/AR scenes, rendering must be combined with the user's head motion posture: when rendering the first data to be rendered of the first viewpoint, the GPU may also acquire the posture data of the current moment and render the data to be rendered based on that posture data.
  • It should be noted that when the GPU receives the synchronization signal, it may immediately acquire the current first data to be rendered of the first viewpoint and start rendering, or it may start rendering after a certain delay; this embodiment places no limit on this. For example, if the GPU still has unfinished rendering work when it receives the synchronization signal, in one implementation the GPU may complete the current rendering work before acquiring the first viewpoint's current first data to be rendered and starting rendering; in another implementation, the GPU may stop the work being rendered and immediately acquire the current first data to be rendered of the first viewpoint and start rendering.
  • Step S20: if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received, stop rendering the first data to be rendered, and acquire the current second data to be rendered of the second viewpoint to start rendering, where 0 < T2 < T1;
  • After receiving the synchronization signal, the GPU may detect whether the first data to be rendered finishes rendering before time T2 after the synchronization signal is received. If the first data to be rendered is still unfinished at time T2, the GPU can stop rendering it, acquire the current data to be rendered of the second viewpoint (hereinafter called the second data to be rendered, for distinction), and start rendering the second data to be rendered.
  • T2 may be a duration set in advance as needed; it is greater than 0 and smaller than T1. Specifically, it can be set according to the average rendering durations of the data to be rendered of the first and second viewpoints; for example, if the average rendering durations of the two viewpoints' data are similar, T2 can be set to half of T1.
  • If the first data to be rendered is still unfinished at time T2 after the synchronization signal is received, its rendering has taken a long time. At time T2 the GPU must asynchronously time-warp the most recently rendered frame image of the first viewpoint; since the first data to be rendered is unfinished at that point, the frame image the GPU warps is the one rendered from the first viewpoint's previous data to be rendered. Continuing to render the first data to be rendered would therefore only make the GPU do useless work. For this reason, in this embodiment, when the first data to be rendered is still unfinished at time T2, its rendering is stopped, preventing the GPU from doing useless rendering work.
  • Specifically, a timer may be set in the GPU that starts timing when the synchronization signal is received, and it is detected whether the rendering of the first data to be rendered is completed before the timer reaches T2.
  • Step S30: if the first data to be rendered finishes rendering before time T2 after the synchronization signal is received, acquire the second data to be rendered and start rendering after the rendering of the first data to be rendered is completed;
  • If the first data to be rendered finishes before time T2 after the GPU receives the synchronization signal, the GPU may acquire the second data to be rendered and start rendering after the first data to be rendered has finished.
  • When the first data to be rendered finishes before time T2, its rendering took a comparatively short time. If the GPU waited until time T2 to render the second viewpoint's second data to be rendered, its rendering process would sit in a paused waiting state from the completion of the first data to be rendered until time T2, clearly wasting the GPU's computing resources. Moreover, the second data to be rendered may itself take a long time, so it might not finish before the next synchronization signal arrives, which would turn the GPU's rendering of the second data to be rendered into useless work as well.
  • For this reason, in this embodiment, when the first data to be rendered finishes before time T2, the GPU can immediately acquire the second viewpoint's second data to be rendered at the current moment and render it. This avoids wasting GPU resources in a waiting state, and because the rendering of the second data to be rendered starts earlier, its completion rate also improves, further reducing the possibility of useless GPU work; that is, the image rendering usage rate increases.
  • The image rendering usage rate is the ratio of the number of times the frame images rendered by the GPU are actually used to the total number of GPU renderings (including completed and uncompleted ones).
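  • To make the timing concrete, the following is a minimal C++ sketch of the scheduling in steps S10 to S30. It assumes a cooperative renderer that can be polled step by step and cancelled; RenderJob, renderStep, and cancel are illustrative stand-ins, not a real GPU API.

```cpp
#include <chrono>
#include <functional>

using Clock = std::chrono::steady_clock;
using Ms = std::chrono::milliseconds;

// Hypothetical cooperative render job: renderStep() advances the work a
// little and returns true once the whole frame is finished.
struct RenderJob {
    std::function<bool()> renderStep;
    std::function<void()> cancel; // abandon the partially rendered frame
};

// Called once per synchronization signal (period T1), with 0 < T2 < T1.
void onSyncSignal(RenderJob first, RenderJob second, Ms T2, Ms T1) {
    const auto sync = Clock::now();

    // Step S10/S20: render the first viewpoint, but only until sync + T2.
    bool firstDone = false;
    while (Clock::now() < sync + T2) {
        if (first.renderStep()) { firstDone = true; break; }
    }
    if (!firstDone) first.cancel(); // missed T2: stop, avoid useless work

    // Step S20/S30: the second viewpoint starts as soon as the first one
    // finishes (possibly well before T2) or is cancelled at T2, never later.
    while (Clock::now() < sync + T1) {
        if (second.renderStep()) return;
    }
    // Still unfinished at the next sync signal: step S101 cancels it there,
    // and the ATW falls back to the previously rendered frame.
}
```

  • Note that in this sketch the second render never waits for T2 when the first one finishes early; that removed idle gap is exactly the waste the embodiment targets.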
  • Step S40: at time T2 after the synchronization signal is received, asynchronously time-warp the most recently rendered frame image of the first viewpoint and store it in the display buffer, and at the moment the synchronization signal is received, asynchronously time-warp the most recently rendered frame image of the second viewpoint and store it in the display buffer.
  • At time T2 after receiving the synchronization signal, the GPU asynchronously time-warps the most recently rendered frame image of the first viewpoint and stores the warped frame image in the display buffer; at the moment it receives the synchronization signal, the GPU asynchronously time-warps the most recently rendered frame image of the second viewpoint and stores the warped frame image in the display buffer.
  • Asynchronous time warping means that the GPU time-warps frame images on another thread (hereinafter called the ATW thread). The ATW thread and the rendering process that renders the data to be rendered can run in parallel; for example, at time T2, while the rendering thread renders the second data to be rendered, the ATW thread starts time-warping the frame image, and the two can execute in parallel.
  • The display buffer is a buffer for storing frame images that the display can show; that is, the frame images displayed by the display are fetched from the display buffer. The locations in the display buffer for the frame images corresponding to the first and second viewpoints can be set in advance: for example, the location used to store the first viewpoint's frame image is called the first-viewpoint buffer, and the location used to store the second viewpoint's frame image is called the second-viewpoint buffer.
  • It should be noted that after the GPU finishes rendering a piece of data to be rendered, it obtains one frame of image (hereinafter called a frame image). As the data to be rendered is updated over time, the GPU keeps rendering new frame images; when the GPU needs a frame image for asynchronous time warping, it acquires the most recently rendered frame image. The GPU may store frame images in a specific storage area, such as a buffer dedicated to rendered frame images (for example, a texture buffer), and the ATW thread obtains the most recently rendered frame image from that storage area.
  • It can be understood that if the first data to be rendered finishes before time T2 after the synchronization signal is received, then when asynchronous time warping is performed at time T2, the first viewpoint's most recently rendered frame image is the one rendered from that first data to be rendered; if the first data to be rendered is still unfinished at time T2, the frame image warped at time T2 is the first viewpoint's previously rendered frame image. Likewise, if the second data to be rendered finishes before the next synchronization signal is received, then when asynchronous time warping is performed at the moment the next synchronization signal arrives, the second viewpoint's most recently rendered frame image is the one rendered from that second data to be rendered; if the second data to be rendered is still unfinished at that moment, the frame image warped then is the second viewpoint's previously rendered frame image.
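  • The hand-off between the render thread and the ATW thread can be pictured with a per-viewpoint "latest frame" slot. The following sketch assumes one such slot per viewpoint as a stand-in for the texture buffer, with a trivial placeholder for the warp itself, so only the buffering logic is concrete; none of the names are a real graphics API.

```cpp
#include <mutex>

struct Frame { /* one rendered eye image */ };

// Stand-in for the texture buffer: the render thread publishes each finished
// frame; the ATW thread always reads the most recently published one, so an
// unfinished render simply leaves the previous frame in place.
class LatestFrameSlot {
    Frame latest_;
    std::mutex m_;
public:
    void publish(const Frame& f) { std::lock_guard<std::mutex> l(m_); latest_ = f; }
    Frame snapshot() { std::lock_guard<std::mutex> l(m_); return latest_; }
};

Frame asyncTimeWarp(const Frame& f) { return f; } // placeholder for real ATW

struct DisplayBuffer { Frame firstViewpoint, secondViewpoint; };

// Run on the ATW thread: at the sync signal it warps the second viewpoint's
// latest frame, and at T2 after the sync signal the first viewpoint's.
void atwAtSyncSignal(LatestFrameSlot& second, DisplayBuffer& db) {
    db.secondViewpoint = asyncTimeWarp(second.snapshot());
}
void atwAtT2(LatestFrameSlot& first, DisplayBuffer& db) {
    db.firstViewpoint = asyncTimeWarp(first.snapshot());
}
```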
  • In this embodiment, after the synchronization signal is received, the current first data to be rendered of the first viewpoint is acquired and rendering starts; if the first data to be rendered has not finished at time T2 after the synchronization signal is received, its rendering is stopped and the current second data to be rendered of the second viewpoint is acquired for rendering; if the first data to be rendered finishes before time T2, the second data to be rendered is acquired and rendered as soon as the first finishes; at time T2 after the synchronization signal is received, the most recently rendered frame image of the first viewpoint is asynchronously time-warped and stored in the display buffer, and at the moment the synchronization signal is received, the most recently rendered frame image of the second viewpoint is asynchronously time-warped and stored in the display buffer.
  • By stopping the rendering of the first data to be rendered when it is still unfinished at time T2, the GPU is prevented from continuing useless work on it, so GPU resources are not wasted; when the first data to be rendered finishes before time T2, the GPU can immediately acquire the second viewpoint's current second data to be rendered, avoiding GPU resource waste from a waiting state, and starting the rendering of the second data to be rendered earlier also improves its completion rate, further reducing the possibility of useless GPU work and improving the image rendering usage rate.
  • Further, in an implementation, step S10 includes:
  • Step S101: when the synchronization signal sent according to the preset time period T1 is received, detect whether the second viewpoint has third data to be rendered that is still being rendered;
  • When the GPU receives the synchronization signal, it may detect whether the second viewpoint has data to be rendered that is still being rendered (hereinafter called the third data to be rendered, for distinction). It can be understood that the third data to be rendered is equivalent to the second data to be rendered of the previous synchronization-signal phase; that is, this is equivalent to the GPU detecting whether the second data to be rendered finished rendering before the next synchronization signal was received.
  • Step S102: if so, stop rendering the third data to be rendered, and acquire the current first data to be rendered of the first viewpoint to start rendering;
  • Step S103: if not, acquire the first data to be rendered and start rendering.
  • If there is no third data to be rendered still being rendered, the GPU may directly acquire the first data to be rendered and start rendering.
  • If there is, the GPU may stop rendering the third data to be rendered and acquire the current first data to be rendered of the first viewpoint to start rendering.
  • If the third data to be rendered is still unfinished when the synchronization signal is received, its rendering has taken a long time. At the moment the synchronization signal is received, the GPU must asynchronously time-warp the most recently rendered frame image of the second viewpoint; since the third data to be rendered is unfinished at that point, the frame image the GPU warps is the one rendered from the second viewpoint's previous data to be rendered. Continuing to render the third data to be rendered would therefore only make the GPU do useless work, and it would also occupy the rendering time of the first data to be rendered, possibly preventing the first data to be rendered from finishing by time T2 and thus turning the GPU's rendering of the first data to be rendered into useless work as well.
  • For this reason, in this embodiment, when the third data to be rendered is still unfinished at the moment the synchronization signal is received, its rendering is stopped. This prevents the GPU from doing useless rendering work and prevents the rendering of the third data to be rendered from occupying the rendering time of the first data to be rendered, improving the completion rate of the first data to be rendered, further reducing the possibility of useless GPU work, and thus increasing the image rendering usage rate.
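  • A sketch of the decision at the moment the synchronization signal arrives (steps S101 to S103), under the same hypothetical cooperative-renderer assumption as the earlier sketch:

```cpp
// Hypothetical handles; cancelCurrent() drops a partially rendered frame.
struct Gpu {
    bool secondViewpointRenderInFlight = false; // "third data to be rendered"
    void cancelCurrent() { secondViewpointRenderInFlight = false; }
    void startFirstViewpointRender() { /* acquire current first data, render */ }
};

void onSyncSignalArrival(Gpu& gpu) {
    if (gpu.secondViewpointRenderInFlight) {
        // Step S102: the ATW for the second viewpoint has just consumed the
        // previous frame anyway, and letting this render continue would eat
        // into the first viewpoint's budget before T2, so abandon it.
        gpu.cancelCurrent();
    }
    gpu.startFirstViewpointRender(); // steps S102/S103
}
```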
  • Further, based on the first embodiment above, a second embodiment of the image rendering method of the present application is proposed. In this embodiment, the method further includes:
  • Step S50: at the moment the synchronization signal is received, acquire the first-viewpoint frame image of the first viewpoint currently cached in the display buffer;
  • At the moment the GPU receives the synchronization signal, the GPU acquires the frame image of the first viewpoint currently cached in the display buffer (hereinafter called the first-viewpoint frame image, for distinction). That is, the GPU asynchronously time-warps the most recently rendered frame image of the first viewpoint at time T2 after receiving a synchronization signal and stores the warped frame image in the display buffer, and that frame image is acquired as the first-viewpoint frame image the next time the GPU receives a synchronization signal.
  • Step S60: use the first-viewpoint frame image to refresh the frame image of the first viewpoint currently displayed on the display device.
  • After acquiring the first-viewpoint frame image, the GPU uses the first-viewpoint frame image to refresh the frame image of the first viewpoint currently displayed on the display device.
  • Specifically, the GPU may send the first-viewpoint frame image to the display device, and the display device uses the first-viewpoint frame image to refresh the frame image of the first viewpoint currently being displayed.
  • In one implementation, the GPU may send the first-viewpoint frame image based on the MIPI (Mobile Industry Processor Interface) protocol.
  • Further, in an implementation, the method also includes:
  • Step S70: at time T2 after the synchronization signal is received, acquire the second-viewpoint frame image of the second viewpoint currently cached in the display buffer;
  • At time T2 after the GPU receives the synchronization signal, the GPU acquires the frame image of the second viewpoint currently cached in the display buffer (hereinafter called the second-viewpoint frame image, for distinction). That is, the GPU asynchronously time-warps the most recently rendered frame image of the second viewpoint at the moment it receives the synchronization signal and stores the warped frame image in the display buffer, and that frame image is acquired as the second-viewpoint frame image at time T2 after that synchronization signal.
  • Step S80: use the second-viewpoint frame image to refresh the frame image of the second viewpoint currently displayed on the display device.
  • After acquiring the second-viewpoint frame image, the GPU uses the second-viewpoint frame image to refresh the frame image of the second viewpoint currently displayed on the display device.
  • Specifically, the GPU may send the second-viewpoint frame image to the display device, and the display device uses the second-viewpoint frame image to refresh the frame image of the second viewpoint currently being displayed.
  • In one implementation, the GPU may send the second-viewpoint frame image based on the MIPI protocol.
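  • Putting steps S50 to S80 together, the refresh side of one period looks like the following sketch; sendOverMipi is a hypothetical stand-in for the actual MIPI transfer, not a real driver call.

```cpp
struct Frame { /* one displayable eye image */ };
struct DisplayBuffer { Frame firstViewpoint, secondViewpoint; };

void sendOverMipi(const Frame&) { /* hand the image to the panel */ }

// At the sync signal: scan out the first viewpoint's cached image, which was
// warped at T2 of the previous period (steps S50-S60).
void refreshAtSyncSignal(const DisplayBuffer& db) {
    sendOverMipi(db.firstViewpoint);
}

// At T2 after the sync signal: scan out the second viewpoint's cached image,
// which was warped at this period's sync signal (steps S70-S80).
void refreshAtT2(const DisplayBuffer& db) {
    sendOverMipi(db.secondViewpoint);
}
```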
  • As shown in FIG. 3, the processing flow of the rendering thread and the ATW thread over two periods T1 is illustrated.
  • Further, based on the first embodiment above, a third embodiment of the image rendering method of the present application is proposed. In this embodiment, the method further includes:
  • Step A10: if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received, increment the frame-drop count of the first viewpoint once;
  • In this embodiment, the GPU may change the size of T2 to adjust, within one period T1, the durations available for rendering the data to be rendered of the first and second viewpoints.
  • Specifically, the GPU may count the frame drops of the first viewpoint, and may initialize the frame-drop count of the first viewpoint to 0 when the rendering process starts. If the first data to be rendered is still unfinished at time T2 after a synchronization signal is received, the frame-drop count of the first viewpoint is incremented once, that is, 1 is added to the existing count.
  • For example: if the first data to be rendered relative to the first synchronization signal has finished rendering at time T2 after that signal, the first viewpoint's frame-drop count remains 0; if the first data to be rendered relative to the second synchronization signal is unfinished at time T2 after that signal, the count is incremented once and becomes 1; if the first data to be rendered relative to the third synchronization signal has finished, the count remains 1; if the first data to be rendered relative to the fourth synchronization signal is unfinished, the first viewpoint's count is incremented again, and so on.
  • Step A20: when the frame-drop count of the first viewpoint is detected to reach a preset count, increase T2, where the increased T2 is smaller than T1.
  • When the GPU detects that the frame-drop count of the first viewpoint has reached the preset count, it may increase T2, with the increased T2 still smaller than T1.
  • The preset count may be set in advance as needed and is not limited here. For example, if the preset count is 10, the GPU increases T2 when the first viewpoint's frame-drop count equals 10.
  • T2 can be increased by adding a preset value to it: for example, with an original value of 10 and a preset value of 2, the increased T2 is 12. Alternatively, T2 can be set directly to a value larger than the original one, for example setting T2 from 10 directly to 12.
  • After increasing T2, the GPU may reset the first viewpoint's frame-drop count and start counting again, increasing T2 further when the count reaches the preset count again; for example, T2 is increased to 12 the first time the preset count is reached and to 14 the second time.
  • An upper limit can be set to ensure that the increased T2 remains smaller than T1.
  • In this embodiment, when frame drops of the first viewpoint are detected, T2 is increased to extend the duration available for rendering the first viewpoint's data to be rendered, so that the rendering success rate of the first viewpoint's data to be rendered improves, thereby increasing the image rendering usage rate of the first viewpoint.
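  • A sketch of the counting and adjustment in steps A10 and A20; the threshold of 10 and the step of 2 mirror the examples in the text, while the cap of 1 ms below T1 is an added assumption to keep T2 < T1.

```cpp
#include <algorithm>
#include <chrono>

using Ms = std::chrono::milliseconds;

struct T2Tuner {
    int firstViewpointDrops = 0; // initialized to 0 when rendering starts
    int presetCount = 10;        // example "preset count" from the text
    Ms step{2};                  // example increment from the text

    // Step A10: called whenever the first data to be rendered misses time T2.
    // Step A20: once the preset count is reached, enlarge T2 and count again.
    void onFirstViewpointFrameDrop(Ms& T2, Ms T1) {
        if (++firstViewpointDrops >= presetCount) {
            T2 = std::min(T2 + step, T1 - Ms{1}); // keep the increased T2 < T1
            firstViewpointDrops = 0;              // restart the count
        }
    }
};
```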
  • Further, based on the first embodiment above, a fourth embodiment of the image rendering method of the present application is proposed. In this embodiment, the method further includes:
  • Step A30: if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received, increment the frame-drop count of the first viewpoint once;
  • Step A40: if the second viewpoint has data to be rendered that is still being rendered at the moment the synchronization signal is received, increment the frame-drop count of the second viewpoint once;
  • In this embodiment, the GPU may change the size of T2 to adjust, within one period T1, the durations available for rendering the data to be rendered of the first and second viewpoints.
  • Specifically, the GPU may count the frame drops of the first viewpoint and the second viewpoint separately, and may initialize both frame-drop counts to 0 when the rendering process starts.
  • For the way the frame drops of the first and second viewpoints are counted, refer to the way the first viewpoint's frame drops are counted in step A10 of the third embodiment above, which is not repeated here.
  • Step A50: when the ratio of the first viewpoint's frame-drop count to the second viewpoint's frame-drop count is detected to be greater than a preset ratio, increase T2, where the increased T2 is smaller than T1;
  • When the GPU detects that the ratio of the first viewpoint's frame-drop count to the second viewpoint's frame-drop count is greater than the preset ratio, it may increase T2, with the increased T2 still smaller than T1.
  • The ratio of the first viewpoint's frame-drop count to the second viewpoint's frame-drop count is the result of dividing the first viewpoint's count by the second viewpoint's count.
  • The preset ratio can be set in advance as needed and is not limited here; for example, it may be set to 0.5.
  • For the manner of increasing T2, refer to the manner of increasing T2 in step A20 of the third embodiment above, which is not repeated here.
  • Step A60: when the ratio of the second viewpoint's frame-drop count to the first viewpoint's frame-drop count is detected to be greater than the preset ratio, decrease T2, where the decreased T2 is greater than 0.
  • When the GPU detects that the ratio of the second viewpoint's frame-drop count to the first viewpoint's frame-drop count is greater than the preset ratio, it may decrease T2, but the decreased T2 must remain greater than 0.
  • The ratio of the second viewpoint's frame-drop count to the first viewpoint's frame-drop count is the result of dividing the second viewpoint's count by the first viewpoint's count.
  • T2 can be decreased by subtracting a preset value from it: for example, with an original value of 10 and a preset value of 2, the decreased T2 is 8. Alternatively, T2 can be set directly to a value smaller than the original one, for example setting T2 from 10 directly to 8. An upper limit can be set to ensure that the increased T2 is smaller than T1, and a lower limit can be set to ensure that the decreased T2 is greater than 0.
  • In this embodiment, when the first viewpoint drops frames much more often than the second viewpoint, T2 is increased to extend the duration available for rendering the first viewpoint's data to be rendered; when the second viewpoint drops frames much more often than the first viewpoint, T2 is decreased to extend the duration available for rendering the second viewpoint's data to be rendered. This brings the rendering success rates of the first and second viewpoints closer together, thereby balancing the image rendering usage rates of the two viewpoints.
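  • Steps A30 to A60 compare the two drop counts instead of using a fixed threshold. A sketch follows, with the 0.5 preset ratio and the step of 2 taken from the text's examples; the division-by-zero guards and the bounds of 1 ms are added assumptions.

```cpp
#include <algorithm>
#include <chrono>

using Ms = std::chrono::milliseconds;

void rebalanceT2(int firstDrops, int secondDrops, Ms& T2, Ms T1,
                 double presetRatio = 0.5, Ms step = Ms{2}) {
    // Step A50: the first viewpoint drops frames disproportionately often,
    // so give it more of the period by enlarging T2 (still below T1).
    if (secondDrops > 0 &&
        static_cast<double>(firstDrops) / secondDrops > presetRatio) {
        T2 = std::min(T2 + step, T1 - Ms{1});
    // Step A60: the opposite imbalance, so shrink T2 (still above 0).
    } else if (firstDrops > 0 &&
               static_cast<double>(secondDrops) / firstDrops > presetRatio) {
        T2 = std::max(T2 - step, Ms{1});
    }
}
```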
  • Further, based on the first embodiment above, a fifth embodiment of the image rendering method of the present application is proposed. In this embodiment, the method further includes:
  • Step A70: when the synchronization signal is received for the first time, set one of the first viewpoint and the second viewpoint as the left-eye viewpoint and the other as the right-eye viewpoint;
  • When the synchronization signal is received for the first time, the GPU may set one of the first viewpoint and the second viewpoint as the left-eye viewpoint and the other as the right-eye viewpoint: that is, the first viewpoint may be set as the left-eye viewpoint and the second viewpoint as the right-eye viewpoint, or the first viewpoint may be set as the right-eye viewpoint and the second viewpoint as the left-eye viewpoint.
  • Step A80: from the first reception of the synchronization signal, swap the left/right-eye assignments of the first viewpoint and the second viewpoint every preset time period T3.
  • From the first reception of the synchronization signal, the GPU can swap the left/right-eye assignments of the first and second viewpoints every preset time period T3. That is, if the first viewpoint is the left-eye viewpoint and the second viewpoint the right-eye viewpoint, then after the swap the first viewpoint is the right-eye viewpoint and the second viewpoint the left-eye viewpoint; if the first viewpoint is the right-eye viewpoint and the second viewpoint the left-eye viewpoint, then after the swap the first viewpoint is the left-eye viewpoint and the second viewpoint the right-eye viewpoint.
  • T3 can be set according to specific needs. Within one time period T1, the picture the user sees refreshes one eye first and then the other, and swapping the left/right-eye assignments of the first and second viewpoints amounts to changing the refresh order of the left- and right-eye pictures; T3 can therefore be set much larger than T1.
  • Because, within one period T1, the first viewpoint's data to be rendered is rendered before the second viewpoint's, the rendering durations of the two viewpoints' data may differ, while the durations available for rendering them are fixed fairly rigidly by T2. This can unbalance the rendering success rates of the two viewpoints, leaving one lower and the other higher, so that one eye's picture appears relatively stuttery compared with the other's. To avoid this problem, this embodiment periodically swaps the left/right-eye assignments of the first and second viewpoints so that the rendering success rates of the left- and right-eye viewpoints gradually balance out over time, making what the user's two eyes perceive more balanced and harmonious and improving the user experience.
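  • The periodic swap of steps A70 and A80 reduces to toggling the eye assignment on a coarse timer; a minimal sketch, assuming T3 is chosen much larger than T1 as the text suggests:

```cpp
#include <chrono>

using Clock = std::chrono::steady_clock;

struct EyeAssignment {
    bool firstViewpointIsLeftEye = true;        // set at the first sync signal
    Clock::time_point lastSwap = Clock::now();  // set at the first sync signal

    // Called once per sync signal; swaps the left/right assignment every T3.
    void maybeSwap(Clock::duration T3) {
        if (Clock::now() - lastSwap >= T3) {
            firstViewpointIsLeftEye = !firstViewpointIsLeftEye;
            lastSwap = Clock::now();
        }
    }
};
```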
  • The embodiment of the present application also proposes an image rendering apparatus.
  • The apparatus includes:
  • a first rendering module 10, configured to acquire the current first data to be rendered of the first viewpoint and start rendering after receiving the synchronization signal sent according to the preset time period T1;
  • a second rendering module 20, configured to stop rendering the first data to be rendered if it has not finished rendering at time T2 after the synchronization signal is received, and to acquire the current second data to be rendered of the second viewpoint to start rendering, where 0 < T2 < T1;
  • the second rendering module 20 is further configured to acquire the second data to be rendered and start rendering once the first data to be rendered has finished, if the first data to be rendered finishes rendering before time T2 after the synchronization signal is received;
  • a cache module 30, configured to asynchronously time-warp the most recently rendered frame image of the first viewpoint at time T2 after the synchronization signal is received and store it in the display buffer, and to asynchronously time-warp the most recently rendered frame image of the second viewpoint at the moment the synchronization signal is received and store it in the display buffer.
  • Further, the first rendering module 10 includes:
  • a detection unit, configured to detect whether the second viewpoint has third data to be rendered that is still being rendered when the synchronization signal sent according to the preset time period T1 is received;
  • a first rendering unit, configured to stop rendering the third data to be rendered if there is any, and to acquire the current first data to be rendered of the first viewpoint to start rendering;
  • a second rendering unit, configured to acquire the first data to be rendered and start rendering if there is none.
  • Further, the apparatus also includes:
  • a first acquisition module, configured to acquire the first-viewpoint frame image of the first viewpoint currently cached in the display buffer at the moment the synchronization signal is received;
  • a first refreshing module, configured to use the first-viewpoint frame image to refresh the frame image of the first viewpoint currently displayed on the display device.
  • Further, the apparatus also includes:
  • a second acquisition module, configured to acquire the second-viewpoint frame image of the second viewpoint currently cached in the display buffer at time T2 after the synchronization signal is received;
  • a second refreshing module, configured to use the second-viewpoint frame image to refresh the frame image of the second viewpoint currently displayed on the display device.
  • Further, the apparatus also includes:
  • a first counting module, configured to increment the frame-drop count of the first viewpoint once if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received;
  • a first adjustment module, configured to increase T2 when the frame-drop count of the first viewpoint is detected to reach a preset count, where the increased T2 is smaller than T1.
  • Further, the apparatus also includes:
  • a second counting module, configured to increment the frame-drop count of the first viewpoint once if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received;
  • a third counting module, configured to increment the frame-drop count of the second viewpoint once if the second viewpoint has data to be rendered that is still being rendered at the moment the synchronization signal is received;
  • a second adjustment module, configured to increase T2 when the ratio of the first viewpoint's frame-drop count to the second viewpoint's frame-drop count is detected to be greater than a preset ratio, where the increased T2 is smaller than T1;
  • a third adjustment module, configured to decrease T2 when the ratio of the second viewpoint's frame-drop count to the first viewpoint's frame-drop count is detected to be greater than the preset ratio, where the decreased T2 is greater than 0.
  • Further, the apparatus also includes:
  • a setting module, configured to set one of the first viewpoint and the second viewpoint as the left-eye viewpoint and the other as the right-eye viewpoint when the synchronization signal is received for the first time;
  • a swapping module, configured to swap the left/right-eye assignments of the first viewpoint and the second viewpoint every preset time period T3 from the first reception of the synchronization signal.
  • The expanded content of the specific implementations of the image rendering apparatus of the present application is basically the same as that of the embodiments of the image rendering method described above and is not repeated here.
  • In addition, the embodiment of the present application also proposes a computer-readable storage medium on which an image rendering program is stored; when the image rendering program is executed by a processor, the steps of the image rendering method described above are implemented.
  • The methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and contains several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to execute the methods described in the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The present application discloses an image rendering method, apparatus, device, and computer-readable storage medium. The method includes: after receiving a synchronization signal sent according to a time period T1, acquiring the current first data to be rendered of a first viewpoint and starting rendering; if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received, stopping that rendering and acquiring the current second data to be rendered of a second viewpoint to start rendering; if the rendering finishes before time T2 after the synchronization signal is received, acquiring the second data to be rendered and starting rendering; at time T2 after the synchronization signal is received, asynchronously time-warping the most recently rendered frame image of the first viewpoint and storing it in a display buffer, and at the moment the synchronization signal is received, asynchronously time-warping the most recently rendered frame image of the second viewpoint and storing it in the display buffer. The present application avoids wasting GPU resources and improves the image rendering usage rate.

Description

Image rendering method, apparatus, device, and computer-readable storage medium
This application claims priority to Chinese patent application No. 202110853251.4, filed with the Chinese Patent Office on July 27, 2021 and entitled "Image rendering method, apparatus, device, and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of image processing, and in particular to an image rendering method, apparatus, device, and computer-readable storage medium.
Background Art
AR/VR uses image rendering technology to refresh rendered virtual images to a display device, and the user experiences the virtual/augmented display effect through a head-mounted display device. Because the rendering process takes time, there is a time delay between reality and perception. This delay must be kept within a certain range, otherwise the user will experience discomfort such as dizziness. To relieve this discomfort, ATW (Asynchronous Time Warp) technology emerged.
Current smart glasses assume a screen refresh period T and render the left eye during the first T/2. If left-eye rendering finishes before time T/2, the GPU waits until T/2 to render the right eye. Left-eye ATW is started at the fixed time T/2 and right-eye ATW at time T; if rendering is not finished when ATW starts, the previous frame is used instead. Under this rendering mechanism, the rendering process may contain idle waiting periods and useless rendering, wasting GPU processing resources while keeping the image rendering usage rate low.
Summary of the Invention
The main purpose of the present application is to provide an image rendering method, apparatus, device, and computer-readable storage medium, aiming to solve the technical problem that the existing rendering mechanism wastes GPU processing resources while achieving a low image rendering usage rate.
To achieve the above purpose, the present application provides an image rendering method, the method comprising the following steps:
after receiving a synchronization signal sent according to a preset time period T1, acquiring the current first data to be rendered of a first viewpoint and starting rendering;
if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received, stopping rendering the first data to be rendered, and acquiring the current second data to be rendered of a second viewpoint to start rendering, where 0 < T2 < T1;
if the first data to be rendered finishes rendering before time T2 after the synchronization signal is received, acquiring the second data to be rendered and starting rendering after the rendering of the first data to be rendered is completed;
at time T2 after the synchronization signal is received, asynchronously time-warping the most recently rendered frame image of the first viewpoint and storing it in a display buffer, and at the moment the synchronization signal is received, asynchronously time-warping the most recently rendered frame image of the second viewpoint and storing it in the display buffer.
Optionally, the step of acquiring the current first data to be rendered of the first viewpoint and starting rendering after receiving the synchronization signal sent according to the preset time period T1 includes:
when the synchronization signal sent according to the preset time period T1 is received, detecting whether the second viewpoint has third data to be rendered that is still being rendered;
if so, stopping rendering the third data to be rendered, and acquiring the current first data to be rendered of the first viewpoint to start rendering;
if not, acquiring the first data to be rendered and starting rendering.
Optionally, the method further includes:
at the moment the synchronization signal is received, acquiring the first-viewpoint frame image of the first viewpoint currently cached in the display buffer;
using the first-viewpoint frame image to refresh the frame image of the first viewpoint currently displayed on the display device.
Optionally, the method further includes:
at time T2 after the synchronization signal is received, acquiring the second-viewpoint frame image of the second viewpoint currently cached in the display buffer;
using the second-viewpoint frame image to refresh the frame image of the second viewpoint currently displayed on the display device.
Optionally, the method further includes:
if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received, incrementing the frame-drop count of the first viewpoint once;
when the frame-drop count of the first viewpoint is detected to reach a preset count, increasing T2, where the increased T2 is smaller than T1.
Optionally, the method further includes:
if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received, incrementing the frame-drop count of the first viewpoint once;
if the second viewpoint has data to be rendered that is still being rendered at the moment the synchronization signal is received, incrementing the frame-drop count of the second viewpoint once;
when the ratio of the first viewpoint's frame-drop count to the second viewpoint's frame-drop count is detected to be greater than a preset ratio, increasing T2, where the increased T2 is smaller than T1;
when the ratio of the second viewpoint's frame-drop count to the first viewpoint's frame-drop count is detected to be greater than the preset ratio, decreasing T2, where the decreased T2 is greater than 0.
Optionally, the method further includes:
when the synchronization signal is received for the first time, setting one of the first viewpoint and the second viewpoint as the left-eye viewpoint and the other as the right-eye viewpoint;
from the first reception of the synchronization signal, swapping the left/right-eye assignments of the first viewpoint and the second viewpoint every preset time period T3.
To achieve the above purpose, the present application further provides an image rendering apparatus, the apparatus comprising:
a first rendering module, configured to acquire the current first data to be rendered of the first viewpoint and start rendering after receiving the synchronization signal sent according to the preset time period T1;
a second rendering module, configured to stop rendering the first data to be rendered if it has not finished rendering at time T2 after the synchronization signal is received, and to acquire the current second data to be rendered of the second viewpoint to start rendering, where 0 < T2 < T1;
the second rendering module is further configured to acquire the second data to be rendered and start rendering once the first data to be rendered has finished, if the first data to be rendered finishes rendering before time T2 after the synchronization signal is received;
a cache module, configured to asynchronously time-warp the most recently rendered frame image of the first viewpoint at time T2 after the synchronization signal is received and store it in the display buffer, and to asynchronously time-warp the most recently rendered frame image of the second viewpoint at the moment the synchronization signal is received and store it in the display buffer.
To achieve the above purpose, the present application further provides an image rendering device, the image rendering device comprising: a memory, a processor, and an image rendering program stored in the memory and executable on the processor, wherein the image rendering program, when executed by the processor, implements the steps of the image rendering method described above.
In addition, to achieve the above purpose, the present application further proposes a computer-readable storage medium on which an image rendering program is stored, wherein the image rendering program, when executed by a processor, implements the steps of the image rendering method described above.
In the present application, after the synchronization signal is received, the current first data to be rendered of the first viewpoint is acquired and rendering starts; if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received, its rendering is stopped and the current second data to be rendered of the second viewpoint is acquired for rendering; if the first data to be rendered finishes rendering before time T2 after the synchronization signal is received, the second data to be rendered is acquired and rendered once the first data to be rendered has finished; at time T2 after the synchronization signal is received, the most recently rendered frame image of the first viewpoint is asynchronously time-warped and stored in the display buffer, and at the moment the synchronization signal is received, the most recently rendered frame image of the second viewpoint is asynchronously time-warped and stored in the display buffer. In the present application, by stopping the rendering of the first data to be rendered when it is still unfinished at time T2, the GPU is prevented from continuing useless work on the first data to be rendered, so GPU resources are not wasted; when the first data to be rendered finishes before time T2, the GPU can immediately acquire the second viewpoint's current second data to be rendered for rendering, which avoids wasting GPU resources in a waiting state, and starting the rendering of the second data to be rendered earlier also improves its completion rate, further reducing the possibility of useless GPU work and improving the image rendering usage rate.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of the hardware operating environment involved in the solutions of the embodiments of the present application;
FIG. 2 is a schematic flowchart of the first embodiment of the image rendering method of the present application;
FIG. 3 is a schematic diagram of a rendering flow involved in an embodiment of the image rendering method of the present application;
FIG. 4 is a schematic diagram of the functional modules of a preferred embodiment of the image rendering apparatus of the present application.
The realization of the purpose, functional characteristics, and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description of the Embodiments
It should be understood that the specific embodiments described here are only intended to explain the present application and are not intended to limit it.
As shown in FIG. 1, FIG. 1 is a schematic diagram of the device structure of the hardware operating environment involved in the solutions of the embodiments of the present application.
It should be noted that the image rendering device in the embodiments of the present application may be a device such as a smart phone, a personal computer, or a server, which is not specifically limited here.
As shown in FIG. 1, the image rendering device may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to realize connection and communication among these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM, or a stable non-volatile memory such as a disk memory. Optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
Those skilled in the art can understand that the device structure shown in FIG. 1 does not constitute a limitation on the image rendering device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
As shown in FIG. 1, as a computer storage medium, the memory 1005 may include an operating system, a network communication module, a user interface module, and an image rendering program. The operating system is a program that manages and controls the hardware and software resources of the device, and supports the running of the image rendering program and other software or programs. In the device shown in FIG. 1, the user interface 1003 is mainly used for data communication with the client; the network interface 1004 is mainly used for establishing a communication connection with the server; and the processor 1001 can be used to call the image rendering program stored in the memory 1005 and perform the following operations:
after receiving a synchronization signal sent according to a preset time period T1, acquiring the current first data to be rendered of a first viewpoint and starting rendering;
if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received, stopping rendering the first data to be rendered, and acquiring the current second data to be rendered of a second viewpoint to start rendering, where 0 < T2 < T1;
if the first data to be rendered finishes rendering before time T2 after the synchronization signal is received, acquiring the second data to be rendered and starting rendering after the rendering of the first data to be rendered is completed;
at time T2 after the synchronization signal is received, asynchronously time-warping the most recently rendered frame image of the first viewpoint and storing it in a display buffer, and at the moment the synchronization signal is received, asynchronously time-warping the most recently rendered frame image of the second viewpoint and storing it in the display buffer.
Further, the step of acquiring the current first data to be rendered of the first viewpoint and starting rendering after receiving the synchronization signal sent according to the preset time period T1 includes:
when the synchronization signal sent according to the preset time period T1 is received, detecting whether the second viewpoint has third data to be rendered that is still being rendered;
if so, stopping rendering the third data to be rendered, and acquiring the current first data to be rendered of the first viewpoint to start rendering;
if not, acquiring the first data to be rendered and starting rendering.
Further, the processor 1001 can also be used to call the image rendering program stored in the memory 1005 and perform the following operations:
at the moment the synchronization signal is received, acquiring the first-viewpoint frame image of the first viewpoint currently cached in the display buffer;
using the first-viewpoint frame image to refresh the frame image of the first viewpoint currently displayed on the display device.
Further, the processor 1001 can also be used to call the image rendering program stored in the memory 1005 and perform the following operations:
at time T2 after the synchronization signal is received, acquiring the second-viewpoint frame image of the second viewpoint currently cached in the display buffer;
using the second-viewpoint frame image to refresh the frame image of the second viewpoint currently displayed on the display device.
Further, the processor 1001 can also be used to call the image rendering program stored in the memory 1005 and perform the following operations:
if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received, incrementing the frame-drop count of the first viewpoint once;
when the frame-drop count of the first viewpoint is detected to reach a preset count, increasing T2, where the increased T2 is smaller than T1.
Further, the processor 1001 can also be used to call the image rendering program stored in the memory 1005 and perform the following operations:
if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received, incrementing the frame-drop count of the first viewpoint once;
if the second viewpoint has data to be rendered that is still being rendered at the moment the synchronization signal is received, incrementing the frame-drop count of the second viewpoint once;
when the ratio of the first viewpoint's frame-drop count to the second viewpoint's frame-drop count is detected to be greater than a preset ratio, increasing T2, where the increased T2 is smaller than T1;
when the ratio of the second viewpoint's frame-drop count to the first viewpoint's frame-drop count is detected to be greater than the preset ratio, decreasing T2, where the decreased T2 is greater than 0.
Further, the processor 1001 can also be used to call the image rendering program stored in the memory 1005 and perform the following operations:
when the synchronization signal is received for the first time, setting one of the first viewpoint and the second viewpoint as the left-eye viewpoint and the other as the right-eye viewpoint;
from the first reception of the synchronization signal, swapping the left/right-eye assignments of the first viewpoint and the second viewpoint every preset time period T3.
Based on the above structure, various embodiments of the image rendering method are proposed.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of the first embodiment of the image rendering method of the present application.
The embodiments of the present application provide embodiments of the image rendering method. It should be noted that although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from the one here. The image rendering method of the present application is applied to a device or processor capable of image rendering, for example a graphics processor (GPU); the following embodiments are described using a GPU as an example. In this embodiment, the image rendering method includes:
Step S10: after receiving the synchronization signal sent according to the preset time period T1, acquire the current first data to be rendered of the first viewpoint and start rendering;
The synchronization signal is a signal that controls the refresh frequency of the display device. In this embodiment, the synchronization signal may be a vertical synchronization signal (vsync) or another signal capable of controlling the refresh frequency of the display device. Generally, the graphics card's DAC (digital-to-analog converter) generates a vsync each time it finishes scanning a frame, marking the end of one frame and the start of a new one. In this embodiment, the time period at which the synchronization signal is sent is called T1; that is, a synchronization signal is sent every T1. Each time the GPU receives a synchronization signal, it executes the same rendering flow, only the data processed each time changes; the following is a specific description using one synchronization signal as an example.
When the GPU receives a synchronization signal, it acquires the current data to be rendered of the first viewpoint (hereinafter called the first data to be rendered, for distinction) and starts rendering the first data to be rendered. In VR (Virtual Reality)/AR (Augmented Reality) scenes, a 3D visual effect is formed by rendering different images for the left and right eyes; in this embodiment, the first viewpoint may be one of the left eye and the right eye: if the first viewpoint is the left eye, the second viewpoint is the right eye, and if the first viewpoint is the right eye, the second viewpoint is the left eye. The data to be rendered is data that needs to be rendered by the GPU, such as vertex data and texture data; it may come from the CPU or from the output of the previous step in the GPU's data-processing flow, and its source is not limited here. Rendering the data to be rendered may include rendering operations such as vertex shading and texture filling; for details, refer to existing GPU image rendering principles, which are not elaborated here. The data to be rendered of a given viewpoint is updated over time, and the update period may or may not be synchronized with the sending period of the synchronization signal, which this embodiment does not limit; when the GPU needs to acquire the first viewpoint's data to be rendered after receiving the synchronization signal, it acquires the first viewpoint's data to be rendered at the current moment for rendering. It should be noted that in VR/AR scenes, rendering must be combined with the user's head motion posture: when rendering the first data to be rendered of the first viewpoint, the GPU may also acquire the posture data of the current moment and render the data to be rendered based on that posture data.
It should be noted that when the GPU receives the synchronization signal, it may immediately acquire the current first data to be rendered of the first viewpoint and start rendering, or it may start rendering after a certain delay, which this embodiment does not limit. For example, if the GPU has unfinished rendering work when it receives the synchronization signal, in one implementation the GPU may complete the current rendering work before acquiring the first viewpoint's current first data to be rendered and starting rendering; in another implementation, the GPU may stop the work being rendered and immediately acquire the first viewpoint's current first data to be rendered and start rendering.
Step S20: if the first data to be rendered has not finished rendering at time T2 after the synchronization signal is received, stop rendering the first data to be rendered, and acquire the current second data to be rendered of the second viewpoint to start rendering, where 0 < T2 < T1;
After receiving the synchronization signal, the GPU may detect whether the first data to be rendered finishes rendering before time T2 after the synchronization signal is received. If the first data to be rendered is still unfinished at time T2, the GPU may stop rendering it, acquire the current data to be rendered of the second viewpoint (hereinafter called the second data to be rendered, for distinction), and start rendering the second data to be rendered. T2 may be a duration set in advance as needed; it is greater than 0 and smaller than T1, and can be set according to the average rendering durations of the first and second viewpoints' data to be rendered. For example, if the average rendering durations of the two viewpoints' data are similar, T2 can be set to half of T1. If the first data to be rendered is still unfinished at time T2 after the synchronization signal is received, its rendering has taken a long time; at time T2 the GPU must asynchronously time-warp the most recently rendered frame image of the first viewpoint, and at this point, since the first data to be rendered is unfinished, the frame image the GPU warps is the one rendered from the first viewpoint's previous data to be rendered. Continuing to render the first data to be rendered would therefore only make the GPU do useless work. For this reason, in this embodiment, when the first data to be rendered is still unfinished at time T2, its rendering is stopped to keep the GPU from doing useless rendering work.
Specifically, a timer may be set in the GPU that starts timing when the synchronization signal is received, and it is detected whether the rendering of the first data to be rendered is completed before the timer reaches T2.
Step S30: if the first to-be-rendered data is rendered completely before the time T2 after the synchronization signal is received, acquire the second to-be-rendered data and start rendering it once the first to-be-rendered data is rendered completely.
If the first to-be-rendered data is rendered completely before time T2 after the GPU receives the synchronization signal, the GPU may acquire the second to-be-rendered data and start rendering it as soon as the first to-be-rendered data finishes. When the first to-be-rendered data finishes before T2, its rendering took a relatively short time. If the GPU waited until T2 to render the second viewpoint's second to-be-rendered data, the GPU's rendering process would sit idle from the completion of the first to-be-rendered data until T2, clearly wasting GPU computing resources. Moreover, the second to-be-rendered data may well require a long time, so it might not be finished when the next synchronization signal arrives, making the GPU's rendering of the second to-be-rendered data useless work as well. Accordingly, in this embodiment, when the first to-be-rendered data finishes before T2, the GPU may immediately acquire the second viewpoint's current second to-be-rendered data and render it, avoiding GPU resource waste from an idle waiting state; and because rendering of the second to-be-rendered data starts earlier, its rendering completion rate also improves, further reducing the likelihood of useless GPU work, i.e., improving the image rendering utilization. The image rendering utilization is the proportion of frame images rendered by the GPU that are actually used, relative to the GPU's total number of rendering attempts (both completed and uncompleted).
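The scheduling in steps S10 through S30 can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the disclosed implementation: the GPU render job is simulated with a cooperatively checked cancel flag, a 90 Hz display is assumed so that T1 is about 11.1 ms, T2 is taken as T1/2 as in the example above, and all function and variable names are hypothetical.

```python
# Minimal sketch of the per-vsync scheduling in steps S10-S30 (assumptions:
# 90 Hz display, T2 = T1/2, cooperative cancellation; all names hypothetical).
import threading
import time

T1 = 1.0 / 90.0   # synchronization-signal period (assumed 90 Hz display)
T2 = T1 / 2.0     # mid-period deadline for the first viewpoint

def render_frame(work_steps, cancel_event, done_event):
    """Stand-in for a GPU render job; polls cancel_event between steps."""
    for step_duration in work_steps:
        if cancel_event.is_set():
            return                    # S20: render abandoned at the deadline
        time.sleep(step_duration)     # simulate one slice of render work
    done_event.set()                  # frame completed; ATW may use it

def on_vsync(first_eye_steps, second_eye_steps):
    t0 = time.monotonic()
    first_done, first_cancel = threading.Event(), threading.Event()
    threading.Thread(target=render_frame,
                     args=(first_eye_steps, first_cancel, first_done)).start()
    # S30: start the second eye as soon as the first finishes;
    # S20: or at the T2 deadline, abandoning the unfinished first-eye frame.
    first_done.wait(timeout=max(0.0, t0 + T2 - time.monotonic()))
    if not first_done.is_set():
        first_cancel.set()
    threading.Thread(target=render_frame,
                     args=(second_eye_steps, threading.Event(),
                           threading.Event())).start()

# First eye would need about 10 ms, misses the ~5.6 ms deadline, and is
# abandoned; the second eye then starts rendering at T2.
on_vsync([0.002] * 5, [0.003])
```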
Step S40: at the time T2 after the synchronization signal is received, perform asynchronous time warp on the most recently rendered frame image of the first viewpoint and store the result in a display buffer; and, at the moment the synchronization signal is received, perform asynchronous time warp on the most recently rendered frame image of the second viewpoint and store the result in the display buffer.
At time T2 after receiving the synchronization signal, the GPU performs asynchronous time warp on the most recently rendered frame image of the first viewpoint and stores the warped frame image in the display buffer; at the moment it receives the synchronization signal, the GPU performs asynchronous time warp on the most recently rendered frame image of the second viewpoint and stores the warped frame image in the display buffer. Asynchronous time warp means that the GPU time-warps frame images on a separate thread (hereinafter the ATW thread), which can run in parallel with the rendering thread that renders the to-be-rendered data; for example, at time T2, while the rendering thread is rendering the second to-be-rendered data, the ATW thread begins time-warping the frame image, and the two run concurrently. The display buffer is a buffer holding frame images available for the display to show; that is, the frame images shown by the display are taken from the display buffer. The positions in the display buffer for storing the frame images of the first and second viewpoints may be set in advance; for example, the position holding the first viewpoint's frame images may be called the first-viewpoint buffer, and the position holding the second viewpoint's frame images the second-viewpoint buffer. It should be noted that, when the GPU finishes rendering a piece of to-be-rendered data, it obtains one frame of image (hereinafter a "frame image"); as the to-be-rendered data updates over time, the GPU keeps rendering new frame images, and when the GPU needs a frame image for asynchronous time warp, it takes the most recently rendered one. The GPU may store frame images in a specific memory area, for example a buffer dedicated to rendered frame images such as a texture buffer, and the ATW thread fetches the most recently rendered frame image from that area.
It can be understood that, if the first to-be-rendered data finishes rendering before time T2 after the synchronization signal is received, then when asynchronous time warp is performed at T2, the first viewpoint's most recently rendered frame image is the one rendered from that first to-be-rendered data; if the first to-be-rendered data is still unfinished at T2, the frame image warped at T2 is the first viewpoint's previously rendered frame image. Likewise, if the second to-be-rendered data finishes before the next synchronization signal arrives, then when asynchronous time warp is performed at the moment the next synchronization signal is received, the second viewpoint's most recently rendered frame image is the one rendered from that second to-be-rendered data; if the second to-be-rendered data is still unfinished at that moment, the frame image warped then is the second viewpoint's previously rendered frame image.
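The "most recently rendered frame" rule in step S40 amounts to keeping one latest-frame slot per viewpoint: the render thread commits completed frames, and the ATW thread always warps whatever was committed last. A minimal sketch follows; warp(), the pose argument, and the buffer layout are illustrative assumptions, not the disclosed structure.

```python
# Sketch of step S40's latest-frame rule (warp(), commit(), and the buffer
# layout are illustrative assumptions).
class ViewpointFrames:
    def __init__(self):
        self.latest = None          # last fully rendered frame (texture buffer)

    def commit(self, frame):
        self.latest = frame         # called by the render thread on completion

display_buffer = {"first": None, "second": None}

def warp(frame, pose):
    return ("warped", frame, pose)  # placeholder for asynchronous time warp

def atw_first_at_t2(first_frames, pose):
    # At T2 after vsync: if the current first-eye frame missed the deadline,
    # `latest` is simply the previous frame, so the warp still has input.
    display_buffer["first"] = warp(first_frames.latest, pose)

def atw_second_at_vsync(second_frames, pose):
    # At the next vsync: the same rule applied to the second eye.
    display_buffer["second"] = warp(second_frames.latest, pose)
```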
In this embodiment, after the synchronization signal is received, the current first to-be-rendered data of the first viewpoint is acquired and rendering starts; if the first to-be-rendered data is unfinished at time T2 after the signal is received, its rendering is stopped and the current second to-be-rendered data of the second viewpoint is acquired and rendered; if the first to-be-rendered data finishes before time T2, the second to-be-rendered data is acquired and rendered as soon as the first finishes; at time T2 after the signal is received, the most recently rendered frame image of the first viewpoint is asynchronously time-warped and stored in the display buffer, and at the moment the signal is received, the most recently rendered frame image of the second viewpoint is asynchronously time-warped and stored in the display buffer. By stopping the rendering of the first to-be-rendered data when it is still unfinished at time T2, this embodiment avoids the useless GPU work and wasted GPU resources of continuing a render whose result will not be used; and when the first to-be-rendered data finishes before T2, the GPU can immediately acquire and render the second viewpoint's current second to-be-rendered data, avoiding GPU resource waste from an idle waiting state. Because rendering of the second to-be-rendered data starts earlier, its rendering completion rate also improves, further reducing the likelihood of useless GPU work and improving the image rendering utilization.
Further, in an implementation, step S10 includes:
Step S101: when the synchronization signal sent at the preset time period T1 is received, detect whether the second viewpoint has third to-be-rendered data that is being rendered.
When the GPU receives the synchronization signal, it may detect whether the second viewpoint has to-be-rendered data currently being rendered (hereinafter called the third to-be-rendered data, for distinction). It can be understood that the third to-be-rendered data is equivalent to the second to-be-rendered data of the previous synchronization-signal period; that is, this is equivalent to the GPU checking whether the second to-be-rendered data finished rendering before the next synchronization signal arrived.
Step S102: if so, stop rendering the third to-be-rendered data, and acquire the current first to-be-rendered data of the first viewpoint and start rendering it.
Step S103: if not, acquire the first to-be-rendered data and start rendering it.
If there is no third to-be-rendered data being rendered, the GPU may directly acquire the first to-be-rendered data and start rendering.
If there is third to-be-rendered data being rendered, the GPU may stop rendering it, and acquire the current first to-be-rendered data of the first viewpoint and start rendering. If the third to-be-rendered data is still unfinished when the synchronization signal is received, its rendering has taken a long time; and at the moment the synchronization signal is received, the GPU must perform asynchronous time warp on the most recently rendered frame image of the second viewpoint. Since the third to-be-rendered data is unfinished at that point, the frame image the GPU warps is the one rendered from the second viewpoint's previous to-be-rendered data, so continuing to render the third to-be-rendered data would only make the GPU do useless work; it would also eat into the rendering time of the first to-be-rendered data, possibly preventing the first to-be-rendered data from finishing by time T2 and thereby turning the GPU's rendering of the first to-be-rendered data into useless work as well. Accordingly, in this embodiment, when the third to-be-rendered data is still unfinished at the moment the synchronization signal is received, its rendering is stopped, which spares the GPU useless rendering work and prevents the rendering of the third to-be-rendered data from occupying the rendering time of the first to-be-rendered data; this improves the rendering completion rate of the first to-be-rendered data, further reducing the likelihood of useless GPU work, i.e., improving the image rendering utilization.
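A sketch of this vsync-time check (steps S101 to S103) follows; the job object and its is_running()/cancel() methods are hypothetical assumptions.

```python
# Sketch of steps S101-S103: on vsync, cancel a second-eye job that is still
# running before starting the first eye (the job API is an assumed interface).
def on_vsync_check_second_eye(second_eye_job, start_first_eye_render):
    if second_eye_job is not None and second_eye_job.is_running():
        # S102: the frame missed this vsync; the ATW at this instant warps the
        # previous second-eye frame anyway, so finishing it would be wasted work.
        second_eye_job.cancel()
    # S102/S103: in either case, the first eye's render begins now.
    start_first_eye_render()
```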
Further, based on the first embodiment above, a second embodiment of the image rendering method of the present application is proposed. In this embodiment, the method further includes:
Step S50: at the moment the synchronization signal is received, acquire a first-viewpoint frame image of the first viewpoint currently buffered in the display buffer.
At the moment the GPU receives the synchronization signal, the GPU acquires the first viewpoint's frame image currently buffered in the display buffer (hereinafter called the first-viewpoint frame image, for distinction). That is, at time T2 after receiving a synchronization signal, the GPU asynchronously time-warps the first viewpoint's most recently rendered frame image and stores the warped frame image in the display buffer; that frame image is retrieved as the first-viewpoint frame image the next time the GPU receives a synchronization signal.
Step S60: use the first-viewpoint frame image to refresh the frame image of the first viewpoint currently displayed on the display device.
After acquiring the first-viewpoint frame image, the GPU uses it to refresh the first viewpoint's frame image currently displayed on the display device. Specifically, the GPU may send the first-viewpoint frame image to the display device, and the display device uses it to refresh the first viewpoint's frame image currently being displayed. In one implementation, the GPU may send the first-viewpoint frame image based on the MIPI (Mobile Industry Processor Interface) protocol.
Further, in an implementation, the method further includes:
Step S70: at the time T2 after the synchronization signal is received, acquire a second-viewpoint frame image of the second viewpoint currently buffered in the display buffer.
At time T2 after the GPU receives the synchronization signal, the GPU acquires the second viewpoint's frame image currently buffered in the display buffer (hereinafter called the second-viewpoint frame image, for distinction). That is, at the moment it receives a synchronization signal, the GPU asynchronously time-warps the second viewpoint's most recently rendered frame image and stores the warped frame image in the display buffer; that frame image is retrieved as the second-viewpoint frame image at time T2 after the GPU receives that synchronization signal.
Step S80: use the second-viewpoint frame image to refresh the frame image of the second viewpoint currently displayed on the display device.
After acquiring the second-viewpoint frame image, the GPU uses it to refresh the second viewpoint's frame image currently displayed on the display device. Specifically, the GPU may send the second-viewpoint frame image to the display device, and the display device uses it to refresh the second viewpoint's frame image currently being displayed. In one implementation, the GPU may send the second-viewpoint frame image based on the MIPI protocol.
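Steps S50 to S80 describe the scanout side: each eye's displayed frame is replaced from the display buffer at a fixed instant. A minimal sketch, assuming the same display_buffer layout as above; send_over_mipi() is a hypothetical stand-in for the actual MIPI transfer, not a real API.

```python
# Sketch of steps S50-S80 (display_buffer as above; send_over_mipi() is a
# hypothetical placeholder for the MIPI link).
display_buffer = {"first": None, "second": None}

def send_over_mipi(eye, frame):
    print(f"refresh {eye} eye with {frame}")   # placeholder for the transfer

def refresh_first_eye_at_vsync():
    # S50/S60: the frame read here was warped at T2 of the *previous* period.
    send_over_mipi("first", display_buffer["first"])

def refresh_second_eye_at_t2():
    # S70/S80: the frame read here was warped at the most recent vsync.
    send_over_mipi("second", display_buffer["second"])
```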
In one implementation, as shown in FIG. 3, the processing flows of the rendering thread and the ATW thread over two periods T1 are illustrated.
Further, based on the first and/or second embodiments above, a third embodiment of the image rendering method of the present application is proposed. In this embodiment, the method further includes:
Step A10: if the first to-be-rendered data has not been rendered completely at the time T2 after the synchronization signal is received, increment the dropped-frame count of the first viewpoint by one.
In this embodiment, the GPU may change the size of T2 to adjust the durations available within one period T1 for rendering the to-be-rendered data of the first and second viewpoints.
Specifically, the GPU may keep a dropped-frame count for the first viewpoint, initialized to 0 when the rendering process starts. If the first to-be-rendered data is unfinished at time T2 after a synchronization signal is received, the first viewpoint's dropped-frame count is incremented by one, i.e., 1 is added to the previous count. For example: at time T2 after the first synchronization signal is received, the first to-be-rendered data (relative to that first signal) has finished rendering, so the first viewpoint's dropped-frame count stays 0; at time T2 after the second signal, the first to-be-rendered data (relative to that second signal) is unfinished, so the count increments to 1; at time T2 after the third signal, the first to-be-rendered data (relative to that third signal) has finished, so the count stays 1; at time T2 after the fourth signal, the first to-be-rendered data (relative to that fourth signal) is unfinished, so the count increments to 2; and so on.
Step A20: when it is detected that the dropped-frame count of the first viewpoint reaches a preset number, increase T2, where the increased T2 remains smaller than T1.
When the first viewpoint's dropped-frame count reaches the preset number, the GPU may increase T2, though the increased T2 must remain smaller than T1. The preset number may be set in advance as needed and is not limited here; for example, with a preset number of 10, the GPU increases T2 when the first viewpoint's dropped-frame count equals 10. Increasing T2 may mean adding a preset value to T2 (for example, if the original value is 10 and the preset value is 2, the increased T2 is 12), or directly setting T2 to a value larger than the original (for example, setting T2 from an original value of 10 directly to 12).
Further, in one implementation, each time the GPU increases T2, it may reset the first viewpoint's dropped-frame count and start counting again, increasing T2 further when the dropped-frame count reaches the preset number again. For example, T2 is increased to 12 the first time the drop threshold is reached, and to 14 the second time. An upper bound may be set to ensure the increased T2 stays smaller than T1.
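A sketch of steps A10 and A20, using the example figures from the text (a threshold of 10 drops and a step of 2 time units); the 0.9*T1 cap is an assumed way of keeping T2 below T1, not taken from the disclosure.

```python
# Sketch of steps A10-A20; the threshold (10) and step (2) follow the text's
# examples, while the 0.9 * T1 cap is an assumption for keeping T2 below T1.
T1 = 16.0                 # signal period, arbitrary time units
T2 = 10.0                 # current deadline, as in the text's example
DROP_THRESHOLD = 10
T2_STEP = 2.0

first_eye_drops = 0

def on_first_eye_missed_t2():
    global first_eye_drops, T2
    first_eye_drops += 1                          # A10: one more dropped frame
    if first_eye_drops >= DROP_THRESHOLD:         # A20: widen the first eye's slot
        T2 = min(T2 + T2_STEP, 0.9 * T1)          # increased T2 stays below T1
        first_eye_drops = 0                       # reset and recount, per the text
```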
In this embodiment, by accumulating the first viewpoint's dropped-frame count and increasing T2 when the count exceeds the preset number, the duration available for rendering the first viewpoint's to-be-rendered data is extended, raising the rendering success rate of the first viewpoint's to-be-rendered data and thereby improving the first viewpoint's image rendering utilization.
Further, based on the first and/or second embodiments above, a fourth embodiment of the image rendering method of the present application is proposed. In this embodiment, the method further includes:
Step A30: if the first to-be-rendered data has not been rendered completely at the time T2 after the synchronization signal is received, increment the dropped-frame count of the first viewpoint by one.
Step A40: if the second viewpoint has to-be-rendered data that is being rendered at the moment the synchronization signal is received, increment the dropped-frame count of the second viewpoint by one.
In this embodiment, the GPU may change the size of T2 to adjust the durations available within one period T1 for rendering the to-be-rendered data of the first and second viewpoints.
Specifically, the GPU may keep separate dropped-frame counts for the first and second viewpoints, both initialized to 0 when the rendering process starts. The counting method for the two viewpoints may follow the method described for the first viewpoint in step A10 of the third embodiment above, and is not repeated here.
Step A50: when it is detected that the ratio of the first viewpoint's dropped-frame count to the second viewpoint's dropped-frame count is greater than a preset ratio, increase T2, where the increased T2 remains smaller than T1.
When the GPU detects that the ratio of the first viewpoint's dropped-frame count to the second viewpoint's dropped-frame count is greater than the preset ratio, it may increase T2, though the increased T2 must remain smaller than T1. Here, that ratio is the result of dividing the first viewpoint's dropped-frame count by the second viewpoint's dropped-frame count. The preset ratio may be set in advance as needed and is not limited here, for example 0.5. T2 may be increased in the manner described in step A20 of the third embodiment above, which is not repeated here.
Step A60: when it is detected that the ratio of the second viewpoint's dropped-frame count to the first viewpoint's dropped-frame count is greater than the preset ratio, decrease T2, where the decreased T2 remains greater than 0.
When the GPU detects that the ratio of the second viewpoint's dropped-frame count to the first viewpoint's dropped-frame count is greater than the preset ratio, it may decrease T2, though the decreased T2 must remain greater than 0. Here, that ratio is the result of dividing the second viewpoint's dropped-frame count by the first viewpoint's dropped-frame count. Decreasing T2 may mean subtracting a preset value from T2 (for example, if the original value is 10 and the preset value is 2, the decreased T2 is 8), or directly setting T2 to a value smaller than the original (for example, setting T2 from an original value of 10 directly to 8). An upper bound may be set to ensure the increased T2 stays smaller than T1, and a lower bound to ensure the decreased T2 stays greater than 0.
In one implementation, each time T2 is increased or decreased, the dropped-frame counts of the first and second viewpoints are reset and counting starts again; when a ratio again exceeds the preset ratio, T2 is correspondingly increased or decreased further.
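A sketch of steps A30 to A60 follows. The text's example preset ratio is 0.5; here a ratio above 1 is assumed instead so that only one branch can fire at a time, zero counts are guarded against, and the clamping bounds are illustrative, all of which are assumptions rather than the disclosed values.

```python
# Sketch of steps A30-A60 (the 1.5 threshold, step of 2, and clamping bounds
# are assumptions; the text only requires that adjusted T2 stays in (0, T1)).
T1 = 16.0
T2 = 8.0
RATIO_THRESHOLD = 1.5
T2_STEP = 2.0

drops = {"first": 0, "second": 0}

def rebalance_t2():
    global T2
    f, s = drops["first"], drops["second"]
    if s > 0 and f / s > RATIO_THRESHOLD:      # first eye drops far more often
        T2 = min(T2 + T2_STEP, 0.9 * T1)       # A50: give the first eye more time
        drops["first"] = drops["second"] = 0
    elif f > 0 and s / f > RATIO_THRESHOLD:    # second eye drops far more often
        T2 = max(T2 - T2_STEP, 0.1 * T1)       # A60: give the second eye more time
        drops["first"] = drops["second"] = 0
```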
In this embodiment, by accumulating the dropped-frame counts of the first and second viewpoints, T2 is increased when the ratio of the first viewpoint's dropped-frame count to the second viewpoint's exceeds the preset ratio, extending the duration available for rendering the first viewpoint's to-be-rendered data; and T2 is decreased when the ratio of the second viewpoint's dropped-frame count to the first viewpoint's exceeds the preset ratio, extending the duration available for rendering the second viewpoint's to-be-rendered data. This keeps the rendering success rates of the two viewpoints comparable, balancing the image rendering utilization of the first and second viewpoints.
Further, based on the first, second, third, and/or fourth embodiments above, a fifth embodiment of the image rendering method of the present application is proposed. In this embodiment, the method further includes:
Step A70: when the synchronization signal is received for the first time, set one of the first viewpoint and the second viewpoint as the left-eye viewpoint and the other as the right-eye viewpoint.
When the synchronization signal is received for the first time, the GPU may set one of the first and second viewpoints as the left-eye viewpoint and the other as the right-eye viewpoint. For example, the first viewpoint may be set as the left-eye viewpoint and the second viewpoint as the right-eye viewpoint, or the first viewpoint may be set as the right-eye viewpoint and the second viewpoint as the left-eye viewpoint.
Step A80: starting from the first reception of the synchronization signal, swap the left-eye and right-eye viewpoint settings of the first viewpoint and the second viewpoint every preset time period T3.
Starting from the first reception of the synchronization signal, the GPU may swap the left-eye and right-eye viewpoint settings of the first and second viewpoints every preset time period T3; that is, if the first viewpoint was the left-eye viewpoint and the second the right-eye viewpoint, after the swap the first viewpoint is the right-eye viewpoint and the second the left-eye viewpoint, and vice versa. T3 may be set according to specific needs. Within one time period T1, from the user's perspective, one eye's picture is refreshed first and then the other's, so swapping the left-eye and right-eye settings of the two viewpoints effectively changes the refresh order of the left and right eyes; to prevent frequent changes of refresh order from making the picture the user sees appear to stutter, T3 may be set much larger than T1. Within one period T1, the first viewpoint's to-be-rendered data is rendered first and the second viewpoint's afterward. The rendering durations of the two viewpoints' to-be-rendered data may differ, yet the durations available for rendering them are relatively fixed by T2, which may leave the two viewpoints' rendering success rates unbalanced, one lower and one higher; for the user, this means one eye's picture is smoother while the other eye's picture stutters more. To avoid this problem, this embodiment periodically swaps the left-eye and right-eye viewpoint settings of the first and second viewpoints, so that over time the rendering success rates of the left-eye and right-eye viewpoints gradually balance out, giving the user's two eyes a more balanced and harmonious experience and improving the user experience.
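A sketch of steps A70 and A80; the concrete 10-second value of T3 and the timestamp plumbing are assumptions, chosen only to satisfy the text's recommendation that T3 be much larger than T1.

```python
# Sketch of steps A70-A80 (the 10 s value of T3 and the timestamp handling are
# illustrative; the text only asks that T3 be much larger than T1).
import time

T3 = 10.0                                        # seconds between swaps (assumed)
eye_of = {"first": "left", "second": "right"}    # A70: initial assignment

def maybe_swap_eyes(last_swap_time):
    """A80: swap which physical eye each logical viewpoint drives every T3."""
    now = time.monotonic()
    if now - last_swap_time >= T3:
        eye_of["first"], eye_of["second"] = eye_of["second"], eye_of["first"]
        return now                               # new last-swap timestamp
    return last_swap_time
```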
In addition, an embodiment of the present application further proposes an image rendering apparatus. Referring to FIG. 4, the apparatus includes:
a first rendering module 10, configured to, after receiving a synchronization signal sent at a preset time period T1, acquire current first to-be-rendered data of a first viewpoint and start rendering it;
a second rendering module 20, configured to, if the first to-be-rendered data has not been rendered completely at a time T2 after the synchronization signal is received, stop rendering the first to-be-rendered data, and acquire current second to-be-rendered data of a second viewpoint and start rendering it, where 0<T2<T1;
the second rendering module 20 being further configured to, if the first to-be-rendered data is rendered completely before the time T2 after the synchronization signal is received, acquire the second to-be-rendered data and start rendering it once the first to-be-rendered data is rendered completely; and
a buffering module 30, configured to, at the time T2 after the synchronization signal is received, perform asynchronous time warp on the most recently rendered frame image of the first viewpoint and store the result in a display buffer, and, at the moment the synchronization signal is received, perform asynchronous time warp on the most recently rendered frame image of the second viewpoint and store the result in the display buffer.
Further, the first rendering module 10 includes:
a detection unit, configured to, when the synchronization signal sent at the preset time period T1 is received, detect whether the second viewpoint has third to-be-rendered data that is being rendered;
a first rendering unit, configured to, if so, stop rendering the third to-be-rendered data, and acquire the current first to-be-rendered data of the first viewpoint and start rendering it; and
a second rendering unit, configured to, if not, acquire the first to-be-rendered data and start rendering it.
Further, the apparatus further includes:
a first acquisition module, configured to, at the moment the synchronization signal is received, acquire a first-viewpoint frame image of the first viewpoint currently buffered in the display buffer; and
a first refresh module, configured to use the first-viewpoint frame image to refresh the frame image of the first viewpoint currently displayed on the display device.
Further, the apparatus further includes:
a second acquisition module, configured to, at the time T2 after the synchronization signal is received, acquire a second-viewpoint frame image of the second viewpoint currently buffered in the display buffer; and
a second refresh module, configured to use the second-viewpoint frame image to refresh the frame image of the second viewpoint currently displayed on the display device.
Further, the apparatus further includes:
a first counting module, configured to, if the first to-be-rendered data has not been rendered completely at the time T2 after the synchronization signal is received, increment the dropped-frame count of the first viewpoint by one; and
a first adjustment module, configured to, when it is detected that the dropped-frame count of the first viewpoint reaches a preset number, increase T2, where the increased T2 remains smaller than T1.
Further, the apparatus further includes:
a second counting module, configured to, if the first to-be-rendered data has not been rendered completely at the time T2 after the synchronization signal is received, increment the dropped-frame count of the first viewpoint by one;
a third counting module, configured to, if the second viewpoint has to-be-rendered data that is being rendered at the moment the synchronization signal is received, increment the dropped-frame count of the second viewpoint by one;
a second adjustment module, configured to, when it is detected that the ratio of the first viewpoint's dropped-frame count to the second viewpoint's dropped-frame count is greater than a preset ratio, increase T2, where the increased T2 remains smaller than T1; and
a third adjustment module, configured to, when it is detected that the ratio of the second viewpoint's dropped-frame count to the first viewpoint's dropped-frame count is greater than the preset ratio, decrease T2, where the decreased T2 remains greater than 0.
Further, the apparatus further includes:
a setting module, configured to, when the synchronization signal is received for the first time, set one of the first viewpoint and the second viewpoint as the left-eye viewpoint and the other as the right-eye viewpoint; and
a swapping module, configured to, starting from the first reception of the synchronization signal, swap the left-eye and right-eye viewpoint settings of the first viewpoint and the second viewpoint every preset time period T3.
The extended details of the specific implementations of the image rendering apparatus of the present application are essentially the same as those of the embodiments of the image rendering method above, and are not repeated here.
In addition, an embodiment of the present application further proposes a computer-readable storage medium on which an image rendering program is stored; when executed by a processor, the image rendering program implements the steps of the image rendering method described above.
For the embodiments of the image rendering device and the computer-readable storage medium of the present application, reference may be made to the embodiments of the image rendering method of the present application, which are not repeated here.
It should be noted that, herein, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes that element.
The serial numbers of the above embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments.
From the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the various embodiments of the present application.
The above are only preferred embodiments of the present application and do not thereby limit the patent scope of the present application. Any equivalent structural or flow transformation made using the contents of the specification and drawings of the present application, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (10)

  1. An image rendering method, characterized in that the method comprises the following steps:
    after receiving a synchronization signal sent at a preset time period T1, acquiring current first to-be-rendered data of a first viewpoint and starting to render it;
    if the first to-be-rendered data has not been rendered completely at a time T2 after the synchronization signal is received, stopping rendering the first to-be-rendered data, and acquiring current second to-be-rendered data of a second viewpoint and starting to render it, where 0<T2<T1;
    if the first to-be-rendered data is rendered completely before the time T2 after the synchronization signal is received, acquiring the second to-be-rendered data and starting to render it once the first to-be-rendered data is rendered completely; and
    at the time T2 after the synchronization signal is received, performing asynchronous time warp on the most recently rendered frame image of the first viewpoint and storing the result in a display buffer, and, at the moment the synchronization signal is received, performing asynchronous time warp on the most recently rendered frame image of the second viewpoint and storing the result in the display buffer.
  2. The image rendering method according to claim 1, characterized in that the step of, after receiving the synchronization signal sent at the preset time period T1, acquiring the current first to-be-rendered data of the first viewpoint and starting to render it comprises:
    when the synchronization signal sent at the preset time period T1 is received, detecting whether the second viewpoint has third to-be-rendered data that is being rendered;
    if so, stopping rendering the third to-be-rendered data, and acquiring the current first to-be-rendered data of the first viewpoint and starting to render it;
    if not, acquiring the first to-be-rendered data and starting to render it.
  3. The image rendering method according to claim 1, characterized in that the method further comprises:
    at the moment the synchronization signal is received, acquiring a first-viewpoint frame image of the first viewpoint currently buffered in the display buffer;
    using the first-viewpoint frame image to refresh the frame image of the first viewpoint currently displayed on the display device.
  4. The image rendering method according to claim 1, characterized in that the method further comprises:
    at the time T2 after the synchronization signal is received, acquiring a second-viewpoint frame image of the second viewpoint currently buffered in the display buffer;
    using the second-viewpoint frame image to refresh the frame image of the second viewpoint currently displayed on the display device.
  5. The image rendering method according to any one of claims 1 to 4, characterized in that the method further comprises:
    if the first to-be-rendered data has not been rendered completely at the time T2 after the synchronization signal is received, incrementing a dropped-frame count of the first viewpoint by one;
    when it is detected that the dropped-frame count of the first viewpoint reaches a preset number, increasing T2, where the increased T2 is smaller than T1.
  6. The image rendering method according to any one of claims 1 to 4, characterized in that the method further comprises:
    if the first to-be-rendered data has not been rendered completely at the time T2 after the synchronization signal is received, incrementing a dropped-frame count of the first viewpoint by one;
    if the second viewpoint has to-be-rendered data that is being rendered at the moment the synchronization signal is received, incrementing a dropped-frame count of the second viewpoint by one;
    when it is detected that the ratio of the first viewpoint's dropped-frame count to the second viewpoint's dropped-frame count is greater than a preset ratio, increasing T2, where the increased T2 is smaller than T1;
    when it is detected that the ratio of the second viewpoint's dropped-frame count to the first viewpoint's dropped-frame count is greater than the preset ratio, decreasing T2, where the decreased T2 is greater than 0.
  7. The image rendering method according to any one of claims 1 to 4, characterized in that the method further comprises:
    when the synchronization signal is received for the first time, setting one of the first viewpoint and the second viewpoint as a left-eye viewpoint and the other as a right-eye viewpoint;
    starting from the first reception of the synchronization signal, swapping the left-eye and right-eye viewpoint settings of the first viewpoint and the second viewpoint every preset time period T3.
  8. An image rendering apparatus, characterized in that the apparatus comprises:
    a first rendering module, configured to, after receiving a synchronization signal sent at a preset time period T1, acquire current first to-be-rendered data of a first viewpoint and start rendering it;
    a second rendering module, configured to, if the first to-be-rendered data has not been rendered completely at a time T2 after the synchronization signal is received, stop rendering the first to-be-rendered data, and acquire current second to-be-rendered data of a second viewpoint and start rendering it, where 0<T2<T1;
    the second rendering module being further configured to, if the first to-be-rendered data is rendered completely before the time T2 after the synchronization signal is received, acquire the second to-be-rendered data and start rendering it once the first to-be-rendered data is rendered completely; and
    a buffering module, configured to, at the time T2 after the synchronization signal is received, perform asynchronous time warp on the most recently rendered frame image of the first viewpoint and store the result in a display buffer, and, at the moment the synchronization signal is received, perform asynchronous time warp on the most recently rendered frame image of the second viewpoint and store the result in the display buffer.
  9. An image rendering device, characterized in that the image rendering device comprises: a memory, a processor, and an image rendering program stored on the memory and runnable on the processor, wherein the image rendering program, when executed by the processor, implements the steps of the image rendering method according to any one of claims 1 to 7.
  10. A computer-readable storage medium, characterized in that an image rendering program is stored on the computer-readable storage medium, and the image rendering program, when executed by a processor, implements the steps of the image rendering method according to any one of claims 1 to 7.
PCT/CN2021/128030 2021-07-27 2021-11-02 Image rendering method, apparatus, device, and computer-readable storage medium WO2023005042A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/007,202 US20240265485A1 (en) 2021-07-27 2021-11-02 Image rendering method, device, equipment and computer-readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110853251.4 2021-07-27
CN202110853251.4A CN113538648B (zh) 2021-07-27 2021-07-27 图像渲染方法、装置、设备及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2023005042A1 true WO2023005042A1 (zh) 2023-02-02

Family

ID=78089272

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/128030 WO2023005042A1 (zh) 2021-07-27 2021-11-02 图像渲染方法、装置、设备及计算机可读存储介质

Country Status (3)

Country Link
US (1) US20240265485A1 (zh)
CN (1) CN113538648B (zh)
WO (1) WO2023005042A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116059637A (zh) * 2023-04-06 2023-05-05 广州趣丸网络科技有限公司 虚拟对象渲染方法、装置、存储介质及电子设备

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538648B (zh) * 2021-07-27 2024-04-30 歌尔科技有限公司 图像渲染方法、装置、设备及计算机可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190043448A1 (en) * 2018-06-29 2019-02-07 Intel Corporation Computers for supporting multiple virtual reality display devices and related methods
CN109358830A (zh) * 2018-09-20 2019-02-19 京东方科技集团股份有限公司 消除ar/vr画面撕裂的双屏显示方法及ar/vr显示设备
CN109887065A (zh) * 2019-02-11 2019-06-14 京东方科技集团股份有限公司 图像渲染方法及其装置
CN112230776A (zh) * 2020-10-29 2021-01-15 北京京东方光电科技有限公司 虚拟现实显示方法、装置及存储介质
CN113538648A (zh) * 2021-07-27 2021-10-22 歌尔光学科技有限公司 图像渲染方法、装置、设备及计算机可读存储介质

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090100096A1 (en) * 2005-08-01 2009-04-16 Phanfare, Inc. Systems, Devices, and Methods for Transferring Digital Information
JP4312238B2 (ja) * 2007-02-13 2009-08-12 株式会社ソニー・コンピュータエンタテインメント 画像変換装置および画像変換方法
US20100020088A1 (en) * 2007-02-28 2010-01-28 Panasonic Corporation Graphics rendering device and graphics rendering method
CN105224288B (zh) * 2014-06-27 2018-01-23 北京大学深圳研究生院 双目三维图形渲染方法及相关系统
WO2018086295A1 (zh) * 2016-11-08 2018-05-17 华为技术有限公司 一种应用界面显示方法及装置
US10861215B2 (en) * 2018-04-30 2020-12-08 Qualcomm Incorporated Asynchronous time and space warp with determination of region of interest
CN108921951B (zh) * 2018-07-02 2023-06-20 京东方科技集团股份有限公司 虚拟现实图像显示方法及其装置、虚拟现实设备
US11127214B2 (en) * 2018-09-17 2021-09-21 Qualcomm Incorporated Cross layer traffic optimization for split XR
WO2020062052A1 (en) * 2018-09-28 2020-04-02 Qualcomm Incorporated Smart and dynamic janks reduction technology
CN109920040B (zh) * 2019-03-01 2023-10-27 京东方科技集团股份有限公司 显示场景处理方法和装置、存储介质
US20220366629A1 (en) * 2019-07-01 2022-11-17 Nippon Telegraph And Telephone Corporation Delay measurement apparatus, delay measurement method and program
CN111652962B (zh) * 2020-06-08 2024-04-23 北京联想软件有限公司 图像渲染方法、头戴式显示设备及存储介质
CN112347408B (zh) * 2021-01-07 2021-04-27 北京小米移动软件有限公司 渲染方法、装置、电子设备及存储介质
KR20220129946A (ko) * 2021-03-17 2022-09-26 삼성전자주식회사 복수의 주사율들에 기반하여 컨텐트를 표시하는 전자 장치 및 그 동작 방법
US20230062363A1 (en) * 2021-08-31 2023-03-02 Apple Inc. Techniques for synchronizing ultra-wide band communications

Also Published As

Publication number Publication date
US20240265485A1 (en) 2024-08-08
CN113538648B (zh) 2024-04-30
CN113538648A (zh) 2021-10-22

Similar Documents

Publication Publication Date Title
CN109992232B (zh) 图像更新方法、装置、终端及存储介质
CN106296566B (zh) 一种虚拟现实移动端动态时间帧补偿渲染系统及方法
WO2023005042A1 (zh) 图像渲染方法、装置、设备及计算机可读存储介质
WO2020207250A1 (zh) 垂直同步方法、装置、终端及存储介质
JP6894976B2 (ja) 画像円滑性向上方法および装置
JP5638666B2 (ja) シームレスな表示移行
CN109920040B (zh) 显示场景处理方法和装置、存储介质
CN109819232B (zh) 一种图像处理方法及图像处理装置、显示装置
US12056854B2 (en) Systems and methods for frame time smoothing based on modified animation advancement and use of post render queues
WO2022089046A1 (zh) 虚拟现实显示方法、装置及存储介质
CN104268113B (zh) Dpi接口的lcd控制器以及其自适应带宽的方法
WO2020078172A1 (zh) 帧率控制方法、装置、终端及存储介质
US20150189021A1 (en) Information Processing Apparatus And Information Processing Method
WO2023000598A1 (zh) 增强现实设备的帧率调整方法、系统、设备及存储介质
US20240020913A1 (en) Image processing method, image processing device and computer readable storage medium
CN117576358A (zh) 一种云渲染方法及装置
EP1947602B1 (en) Information processing device, graphic processor, control processor, and information processing method
JP2002244646A (ja) データ処理システム及びデータ処理方法、コンピュータプログラム、記録媒体
JP2000029456A (ja) ディスプレイ描画・表示方法およびディスプレイ装置
CN114610255A (zh) 画面绘制方法、装置、存储介质以及终端
CN115576871A (zh) 一种虚拟现实渲染方法、装置、终端及计算机存储介质
CN118819275A (zh) 一种电子设备控制方法、装置、存储介质及电子设备
CN109874003A (zh) Vr显示控制方法、vr显示控制装置和显示装置
TWI834223B (zh) 計算設備顯示圖像的方法以及計算設備
EP4156623B1 (en) Video transmission method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21951614

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07.06.2024)