WO2023005042A1 - Image rendering method, device, equipment and computer-readable storage medium - Google Patents
Image rendering method, device, equipment and computer-readable storage medium
- Publication number
- WO2023005042A1 (PCT application PCT/CN2021/128030)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- rendering
- viewpoint
- rendered
- data
- synchronization signal
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44004—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving video buffer management, e.g. video decoder buffer or video display buffer
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/816—Monomedia components thereof involving special video data, e.g. 3D video
Definitions
- the present application relates to the technical field of image processing, and in particular to an image rendering method, device, equipment, and computer-readable storage medium.
- AR/VR uses image rendering technology to refresh rendered virtual images to a display device, and the user experiences the virtual reality/augmented reality effect through a head-mounted display device. Since rendering takes time, there is a delay between the user's actual motion and what is perceived. This delay must be kept within a certain range, otherwise the user experiences discomfort such as dizziness. To alleviate this discomfort, ATW (Asynchronous Time Warp) technology emerged.
- current smart glasses assume a screen refresh cycle T: the left eye is rendered during the first T/2, and whether or not the left-eye rendering has completed at T/2, the right eye is rendered starting at T/2; left-eye ATW starts at T/2 and right-eye ATW starts at time T, and if rendering has not finished when ATW starts, the previous frame is used instead. Under this rendering mechanism, the rendering process may include waiting periods and useless rendering, wasting GPU processing resources and lowering the image rendering usage rate.
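- To make the waste concrete, the following minimal Python sketch (illustrative only, not part of the patent text; the function name, numbers, and the 16.6 ms period are assumptions) computes the GPU idle time and useless rendering time in one period of this fixed schedule:

```python
# Illustrative model of the conventional fixed-T/2 schedule described above.
def conventional_period(left_ms, right_ms, T=16.6):
    half = T / 2
    left_done = left_ms <= half    # left eye must finish by T/2, when left-eye ATW runs
    right_done = right_ms <= half  # right eye renders in [T/2, T); right-eye ATW runs at T
    # The GPU idles out the rest of each fixed slot when a frame finishes early.
    idle = (half - left_ms if left_done else 0.0) + (half - right_ms if right_done else 0.0)
    # A frame that misses its ATW moment falls back to the previous frame, so
    # the whole render slot spent on it (T/2 per eye) was useless rendering.
    wasted = (0.0 if left_done else half) + (0.0 if right_done else half)
    return {"idle_ms": round(idle, 2), "wasted_ms": round(wasted, 2)}

# Example: a fast left eye still idles until T/2; a slow right eye renders in vain.
print(conventional_period(left_ms=3.0, right_ms=9.5))  # {'idle_ms': 5.3, 'wasted_ms': 8.3}
```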
- the main purpose of the present application is to provide an image rendering method, device, equipment and computer-readable storage medium, aiming to solve the technical problem that the existing rendering mechanism wastes GPU processing resources and yields a low image rendering usage rate.
- the present application provides an image rendering method, the method comprising the following steps: after receiving a synchronization signal sent according to a preset time period T1, acquiring the current first data to be rendered of a first viewpoint and starting rendering; if the rendering of the first data to be rendered has not been completed at time T2 after receiving the synchronization signal, stopping rendering the first data to be rendered and acquiring the current second data to be rendered of a second viewpoint to start rendering, where 0<T2<T1; if the rendering of the first data to be rendered is completed before time T2 after receiving the synchronization signal, acquiring the second data to be rendered and starting rendering after the rendering of the first data to be rendered is completed; at time T2 after receiving the synchronization signal, performing asynchronous time warping on the most recently rendered frame image of the first viewpoint and storing it in a display buffer, and at the moment of receiving the synchronization signal, performing asynchronous time warping on the most recently rendered frame image of the second viewpoint and storing it in the display buffer.
- the step of acquiring the current first data to be rendered of the first viewpoint and starting rendering includes:
- when receiving the synchronization signal sent according to the preset time period T1, detecting whether the second viewpoint has third data to be rendered that is being rendered; if so, stopping rendering the third data to be rendered and acquiring the current first data to be rendered of the first viewpoint to start rendering; if not, acquiring the first data to be rendered to start rendering;
- the method also includes:
- at the moment of receiving the synchronization signal, the first viewpoint frame image of the first viewpoint currently cached in the display buffer is acquired, and the frame image of the first viewpoint currently displayed on the display device is refreshed using the first viewpoint frame image.
- the method also includes:
- at time T2 after receiving the synchronization signal, the second viewpoint frame image of the second viewpoint currently cached in the display buffer is acquired, and the frame image of the second viewpoint currently displayed on the display device is refreshed using the second viewpoint frame image.
- the method also includes:
- if the rendering of the first data to be rendered has not been completed at time T2 after receiving the synchronization signal, the frame-drop count of the first viewpoint is incremented once; when it is detected that the frame-drop count of the first viewpoint reaches a preset number, T2 is increased, wherein the increased T2 is smaller than T1.
- the method also includes:
- if the second viewpoint has data to be rendered that is being rendered at the moment the synchronization signal is received, the frame-drop count of the second viewpoint is incremented once;
- when it is detected that the ratio of the frame-drop count of the second viewpoint to the frame-drop count of the first viewpoint is greater than the preset ratio, T2 is decreased, wherein the decreased T2 is greater than 0.
- the method also includes:
- when the synchronization signal is received for the first time, one of the first viewpoint and the second viewpoint is set as the left-eye viewpoint, and the other is set as the right-eye viewpoint;
- the left-eye and right-eye viewpoint settings of the first viewpoint and the second viewpoint are swapped every preset time period T3 from the first time the synchronization signal is received.
- the present application also provides an image rendering device, the device comprising:
- the first rendering module is configured to acquire the current first data to be rendered of the first viewpoint and start rendering after receiving the synchronization signal sent according to the preset time period T1;
- the second rendering module is configured to, if the rendering of the first data to be rendered has not been completed at time T2 after receiving the synchronization signal, stop rendering the first data to be rendered and acquire the current second data to be rendered of the second viewpoint to start rendering, where 0<T2<T1;
- the second rendering module is further configured to, if the rendering of the first data to be rendered is completed before time T2 after receiving the synchronization signal, acquire the second data to be rendered and start rendering after the rendering of the first data to be rendered is completed;
- the cache module is configured to perform asynchronous time warping on the most recently rendered frame image of the first viewpoint at time T2 after receiving the synchronization signal and store it in the display buffer, and to perform asynchronous time warping on the most recently rendered frame image of the second viewpoint at the moment of receiving the synchronization signal and store it in the display buffer.
- the present application also provides image rendering equipment, which includes: a memory, a processor, and an image rendering program stored in the memory and operable on the processor; when the image rendering program is executed by the processor, the steps of the above-mentioned image rendering method are realized.
- the present application also proposes a computer-readable storage medium, on which an image rendering program is stored; when the image rendering program is executed by a processor, the steps of the above-mentioned image rendering method are realized.
- in the present application, after the synchronization signal is received, rendering starts by acquiring the current first data to be rendered of the first viewpoint; if the rendering of the first data to be rendered has not been completed at time T2 after receiving the synchronization signal, rendering of the first data to be rendered is stopped and the current second data to be rendered of the second viewpoint is acquired to start rendering; if the rendering of the first data to be rendered is completed before time T2 after receiving the synchronization signal, the second data to be rendered is acquired and rendering starts once the first data to be rendered is finished; at time T2 after receiving the synchronization signal, the most recently rendered frame image of the first viewpoint is asynchronously time-warped and stored in the display buffer, and at the moment of receiving the synchronization signal, the most recently rendered frame image of the second viewpoint is asynchronously time-warped and stored in the display buffer.
- in this way, when the rendering of the first data to be rendered has not been completed at time T2, its rendering is stopped, which avoids the useless work and wasted GPU resources of continuing to render it; and when the first data to be rendered finishes before T2, the GPU can immediately acquire the second data to be rendered at the current moment of the second viewpoint for rendering, avoiding the waste of GPU resources caused by a waiting state. Starting the rendering of the second data to be rendered earlier also improves its rendering completion rate, further reducing the possibility of the GPU doing useless work and improving the image rendering usage rate.
- Fig. 1 is a schematic structural diagram of the hardware operating environment involved in the solution of the embodiment of the present application
- FIG. 2 is a schematic flowchart of the first embodiment of the image rendering method of the present application
- FIG. 3 is a schematic diagram of a rendering process involved in an embodiment of the image rendering method of the present application
- FIG. 4 is a schematic diagram of functional modules of a preferred embodiment of the image rendering device of the present application.
- FIG. 1 is a schematic diagram of a device structure of a hardware operating environment involved in the solution of the embodiment of the present application.
- the image rendering device in this embodiment of the present application may be a device such as a smart phone, a personal computer, or a server, which is not specifically limited here.
- the image rendering device may include: a processor 1001 , such as a CPU, a network interface 1004 , a user interface 1003 , a memory 1005 , and a communication bus 1002 .
- the communication bus 1002 is used to realize connection and communication between these components.
- the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
- the network interface 1004 may include a standard wired interface and a wireless interface (such as a WI-FI interface).
- the memory 1005 can be a high-speed RAM memory or a non-volatile memory, such as a disk memory.
- the memory 1005 may also be a storage device independent of the aforementioned processor 1001 .
- the structure shown in FIG. 1 does not constitute a limitation on the image rendering device, which may include more or fewer components than shown, or combine some components, or have a different arrangement of components.
- the memory 1005 as a computer storage medium may include an operating system, a network communication module, a user interface module, and an image rendering program.
- An operating system is a program that manages and controls the hardware and software resources of a device, and supports the operation of image rendering programs and other software or programs.
- the user interface 1003 is mainly used for data communication with the client;
- the network interface 1004 is mainly used for establishing a communication connection with the server;
- the processor 1001 can be used to call the image rendering program stored in the memory 1005 and perform the following operations:
- the operation of acquiring the current first data to be rendered of the first viewpoint and starting rendering includes:
- when receiving the synchronization signal sent according to the preset time period T1, detecting whether the second viewpoint has third data to be rendered that is being rendered; if so, stopping rendering the third data to be rendered and acquiring the current first data to be rendered of the first viewpoint to start rendering; if not, acquiring the first data to be rendered to start rendering;
- processor 1001 can also be used to call the image rendering program stored in the memory 1005, and perform the following operations:
- at the moment of receiving the synchronization signal, acquiring the first viewpoint frame image of the first viewpoint currently cached in the display buffer, and refreshing the frame image of the first viewpoint currently displayed on the display device using the first viewpoint frame image.
- processor 1001 can also be used to call the image rendering program stored in the memory 1005, and perform the following operations:
- at time T2 after receiving the synchronization signal, acquiring the second viewpoint frame image of the second viewpoint currently cached in the display buffer, and refreshing the frame image of the second viewpoint currently displayed on the display device using the second viewpoint frame image.
- processor 1001 can also be used to call the image rendering program stored in the memory 1005, and perform the following operations:
- if the rendering of the first data to be rendered has not been completed at time T2 after receiving the synchronization signal, incrementing the frame-drop count of the first viewpoint once; when it is detected that the frame-drop count of the first viewpoint reaches a preset number, increasing T2, wherein the increased T2 is smaller than T1.
- processor 1001 can also be used to call the image rendering program stored in the memory 1005, and perform the following operations:
- if the second viewpoint has data to be rendered that is being rendered at the moment the synchronization signal is received, incrementing the frame-drop count of the second viewpoint once;
- when it is detected that the ratio of the frame-drop count of the second viewpoint to the frame-drop count of the first viewpoint is greater than the preset ratio, decreasing T2, wherein the decreased T2 is greater than 0.
- processor 1001 can also be used to call the image rendering program stored in the memory 1005, and perform the following operations:
- when the synchronization signal is received for the first time, setting one of the first viewpoint and the second viewpoint as the left-eye viewpoint and the other as the right-eye viewpoint;
- swapping the left-eye and right-eye viewpoint settings of the first viewpoint and the second viewpoint every preset time period T3 from the first time the synchronization signal is received.
- FIG. 2 is a schematic flowchart of the first embodiment of the image rendering method of the present application.
- the embodiment of the present application provides an embodiment of the image rendering method. It should be noted that although a logical sequence is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from the one here.
- the image rendering method of the present application is applied to a device or processor capable of image rendering, for example a graphics processing unit (GPU). The following embodiments are described using a GPU as an example.
- the image rendering method includes:
- Step S10: after receiving the synchronization signal sent according to the preset time period T1, acquire the current first data to be rendered of the first viewpoint and start rendering;
- the synchronization signal is a signal for controlling the refresh frequency of the display device.
- the synchronization signal may be a vertical synchronization signal (vertical synchronization, vsync), or other signals capable of controlling the refresh frequency of the display device.
- a vsync signal is generated by the graphics card's DAC (digital-to-analog converter) each time a frame finishes scanning, indicating the end of one frame and the beginning of a new frame.
- the time period for sending the synchronization signal is referred to as T1, that is, a synchronization signal is sent every T1.
- each time the GPU receives a synchronization signal it executes the same rendering flow, although the data processed differs each time. The following description uses the handling of one synchronization signal as an example.
- when the GPU receives a synchronization signal, it acquires the current data to be rendered of the first viewpoint (hereinafter referred to as the first data to be rendered, for distinction) and starts to render it.
- the first viewpoint may be either the left eye or the right eye; if the first viewpoint is the left eye, the second viewpoint is the right eye, and if the first viewpoint is the right eye, the second viewpoint is the left eye.
- the data to be rendered refers to the data that needs to be rendered by the GPU, such as vertex data, texture data, etc.
- the data to be rendered can come from the CPU or from the output of a previous step in the GPU's data-processing flow; this embodiment places no limitation on the source of the data to be rendered. Rendering the data to be rendered may include operations such as vertex shading and texture filling.
- the data to be rendered of a given viewpoint is updated over time; its update period may or may not be synchronized with the sending period of the synchronization signal, which is not limited in this embodiment. When the GPU needs to obtain the data to be rendered of the first viewpoint after receiving the synchronization signal, it obtains the data to be rendered of the first viewpoint at the current moment for rendering.
- in AR/VR scenarios, rendering needs to incorporate the user's head-motion posture: when the GPU renders the first data to be rendered of the first viewpoint, it may also obtain the posture data at the current moment and render the data to be rendered based on that posture data.
- after receiving the synchronization signal, the GPU may immediately acquire the current first data to be rendered of the first viewpoint and start rendering, or may start rendering after a certain period of time; this embodiment does not limit this. For example, if the GPU has not completed its current rendering work when it receives the synchronization signal, in one implementation it can acquire the current first data to be rendered of the first viewpoint and start rendering after the current work completes; in another implementation, it may stop the work being rendered and immediately acquire the current first data to be rendered of the first viewpoint and start rendering.
- Step S20: if the rendering of the first data to be rendered has not been completed at time T2 after receiving the synchronization signal, stop rendering the first data to be rendered and acquire the current second data to be rendered of the second viewpoint to start rendering, where 0<T2<T1;
- the GPU may detect whether rendering of the first data to be rendered is completed before time T2 after receiving the synchronization signal. If the GPU has not finished rendering the first data to be rendered at T2 after receiving the synchronization signal, it can stop rendering the first data to be rendered, obtain the current data to be rendered of the second viewpoint (hereinafter referred to as the second data to be rendered), and start rendering the second data to be rendered.
- T2 can be a duration set in advance as needed, greater than 0 and less than T1. Specifically, it can be set according to the average rendering durations of the data to be rendered of the first viewpoint and the second viewpoint; for example, if the average rendering durations of the two viewpoints are similar, T2 can be set to half of T1.
- it can be understood that at time T2 after receiving the synchronization signal, the frame image on which the GPU performs asynchronous time warping is the most recently completed frame image of the first viewpoint; continuing to render the first data to be rendered past T2 would therefore only make the GPU do useless work. For this reason, in this embodiment, when the rendering of the first data to be rendered has not been completed at time T2, the rendering is stopped, preventing the GPU from doing useless rendering work.
- a timer may be set in the GPU to start timing when the synchronization signal is received, and to detect whether rendering of the first data to be rendered has completed before the timer reaches T2.
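- As a concrete illustration of this schedule, the following minimal Python sketch (not part of the patent disclosure) shows the control flow of steps S10 to S40 within one period; render, cancel, and atw_latest are assumed helper functions, where render(viewpoint) returns a cancellable job offering job.wait(timeout) -> bool:

```python
import threading

def on_vsync(T2, render, cancel, atw_latest):
    # The ATW side runs independently of rendering progress: the second
    # viewpoint is warped at the vsync moment, the first viewpoint at vsync + T2.
    atw_latest("second")
    threading.Timer(T2, atw_latest, args=("first",)).start()

    job1 = render("first")            # step S10: render the first viewpoint now
    if not job1.wait(timeout=T2):     # step S20: enforce the T2 deadline
        cancel(job1)                  # stop useless work past the deadline
    # steps S20/S30: either way the second viewpoint starts as early as
    # possible, right after job1 completes or at T2 after it is cancelled.
    render("second")
```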
- Step S30: if the rendering of the first data to be rendered is completed before time T2 after receiving the synchronization signal, acquire the second data to be rendered and start rendering after the rendering of the first data to be rendered is completed;
- if the rendering of the first data to be rendered is completed before time T2 after receiving the synchronization signal, the GPU may acquire the second data to be rendered and start rendering as soon as the first data to be rendered finishes.
- it can be understood that if the GPU instead waited until T2 to start rendering the second viewpoint, its rendering pipeline would sit idle in a paused waiting state, which obviously wastes GPU computing resources; moreover, the second data to be rendered might then still be unfinished when the next synchronization signal arrives, making the GPU's rendering of it useless.
- for this reason, in this embodiment, when the first data to be rendered finishes before time T2, the GPU immediately obtains the second data to be rendered at the current moment of the second viewpoint for rendering, avoiding the waste of GPU resources caused by a waiting state; and because the rendering of the second data to be rendered starts earlier, its rendering completion rate improves, further reducing the possibility of the GPU doing useless work, that is, increasing the image rendering usage rate.
- the image rendering usage rate is the ratio of the number of rendered frame images that are actually used to the total number of renderings the GPU starts (both completed and abandoned).
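- Written out as a formula (the numbers below are hypothetical, purely for illustration):

```latex
\text{image rendering usage rate} = \frac{N_{\text{used}}}{N_{\text{started}}},
\qquad \text{e.g. } \frac{90}{100} = 90\%
```

- for example, if the GPU starts 100 renderings within some interval and 90 of the resulting frame images are actually warped and displayed, while the other 10 renderings are abandoned at a deadline or never used, the usage rate is 90%.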
- Step S40: at time T2 after receiving the synchronization signal, perform asynchronous time warping on the most recently rendered frame image of the first viewpoint and store it in the display buffer, and at the moment of receiving the synchronization signal, perform asynchronous time warping on the most recently rendered frame image of the second viewpoint and store it in the display buffer.
- that is, at time T2 after receiving the synchronization signal, the GPU performs asynchronous time warping on the most recently rendered frame image of the first viewpoint and stores the warped frame image in the display buffer; at the moment of receiving the synchronization signal, it performs asynchronous time warping on the most recently rendered frame image of the second viewpoint and stores the warped frame image in the display buffer.
- asynchronous time warping means that the GPU uses a separate thread (hereinafter, the ATW thread) to time-warp frame images. The ATW thread and the rendering thread that renders the data to be rendered can run in parallel: for example, while the rendering thread renders the second data to be rendered, the ATW thread time-warps a frame image, and the two execute concurrently.
- the display buffer is a buffer that stores the frame images the display can show; that is, the frame images the display presents are fetched from the display buffer. The locations in the display buffer for the frame images of the first and second viewpoints can be allocated in advance: for example, the location used to store the frame image corresponding to the first viewpoint is called the first viewpoint buffer, and the location used to store the frame image corresponding to the second viewpoint is called the second viewpoint buffer.
- a frame image is one frame of image obtained by rendering. Since the data to be rendered is updated over time and the GPU keeps rendering to obtain new frame images, asynchronous time warping is applied to the frame image obtained from the most recent rendering.
- the GPU can store rendered frame images in a specific storage area, such as a buffer dedicated to rendered frames (for example, a texture buffer), and the ATW thread obtains the most recently rendered frame image from that storage area.
- for example, if the second data to be rendered has finished rendering before the next synchronization signal is received, the most recently rendered frame image of the second viewpoint at that moment is the frame image obtained by rendering the second data to be rendered; if the second data to be rendered has not finished when the next synchronization signal is received, the frame image that is asynchronously time-warped at that moment is the frame image of the second viewpoint obtained from the previous rendering.
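- This "latest completed frame" handoff between the rendering thread and the ATW thread can be pictured with the following minimal sketch (illustrative only; a real implementation would use GPU texture buffers rather than a Python lock, and all names are assumptions):

```python
import threading

class LatestFrameSlot:
    """Per-viewpoint slot holding the most recently *completed* frame image."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def publish(self, frame):
        # called by the rendering thread each time a frame finishes
        with self._lock:
            self._frame = frame

    def latest(self):
        # called by the ATW thread at vsync / T2; if rendering missed its
        # deadline, this simply returns the previously completed frame
        with self._lock:
            return self._frame

slots = {"first": LatestFrameSlot(), "second": LatestFrameSlot()}
```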
- in this embodiment, after the synchronization signal is received, rendering starts by acquiring the current first data to be rendered of the first viewpoint; if the rendering of the first data to be rendered is not completed at time T2 after receiving the synchronization signal, rendering of the first data to be rendered is stopped and the current second data to be rendered of the second viewpoint is acquired to start rendering; if the rendering of the first data to be rendered is completed before time T2, the second data to be rendered is acquired and rendered once the first finishes; at time T2 after receiving the synchronization signal, the most recently rendered frame image of the first viewpoint is asynchronously time-warped and stored in the display buffer, and at the moment of receiving the synchronization signal, the most recently rendered frame image of the second viewpoint is asynchronously time-warped and stored in the display buffer.
- in this way, when the rendering of the first data to be rendered has not been completed at time T2, it is stopped, avoiding useless GPU work and wasted GPU resources; and when it finishes before T2, the GPU can immediately obtain the second data to be rendered at the current moment of the second viewpoint for rendering, avoiding the waste of GPU resources caused by a waiting state. Starting the rendering of the second data to be rendered earlier also improves its rendering completion rate, further reducing the possibility of the GPU doing useless work and improving the image rendering usage rate.
- step S10 includes:
- Step S101 when receiving the synchronization signal sent according to the preset time period T1, detecting whether there is third data to be rendered being rendered in the second viewpoint;
- when the GPU receives the synchronization signal, it may detect whether the second viewpoint has data that is still being rendered (hereinafter referred to as the third data to be rendered, for distinction). It can be understood that the third data to be rendered is the second data to be rendered of the previous synchronization-signal period; that is, this is equivalent to the GPU detecting whether the rendering of the second data to be rendered completed before the next synchronization signal was received.
- Step S102: if so, stop rendering the third data to be rendered, and acquire the current first data to be rendered of the first viewpoint to start rendering;
- Step S103: if not, acquire the first data to be rendered and start rendering.
- if the second viewpoint has no data being rendered when the synchronization signal is received, the GPU may directly acquire the first data to be rendered and start rendering; if it does, the GPU may stop rendering the third data to be rendered and acquire the current first data to be rendered of the first viewpoint to start rendering.
- if the third data to be rendered has not finished rendering when the synchronization signal is received, its rendering is taking a long time; moreover, at the moment the synchronization signal is received, the GPU performs asynchronous time warping on the most recently rendered frame image of the second viewpoint, which is the most recently completed one. Continuing to render the third data to be rendered would therefore only make the GPU do useless work, and it would also occupy the rendering time of the first data to be rendered, possibly causing the first data to be rendered to miss the T2 deadline so that the GPU's work on it becomes useless as well. For this reason, in this embodiment, when the third data to be rendered has not been completed at the moment the synchronization signal is received, its rendering is stopped. This prevents the GPU from doing useless rendering work, prevents the rendering of the third data to be rendered from occupying the rendering time of the first data to be rendered, and improves the rendering completion rate of the first data to be rendered, thereby further reducing the possibility of the GPU doing useless work, that is, increasing the image rendering usage rate.
- the method further includes:
- Step S50: acquiring, at the moment of receiving the synchronization signal, the first viewpoint frame image of the first viewpoint currently cached in the display buffer;
- when the GPU receives the synchronization signal, it acquires the frame image of the first viewpoint currently cached in the display buffer (hereinafter referred to as the first viewpoint frame image, for distinction). That is, the frame image that the GPU asynchronously time-warped and stored in the display buffer at time T2 after a synchronization signal is fetched as the first viewpoint frame image the next time the GPU receives a synchronization signal.
- Step S60: using the first viewpoint frame image to refresh the frame image of the first viewpoint currently displayed on the display device.
- the GPU uses the frame image of the first viewpoint to refresh the frame image of the first viewpoint currently displayed on the display device.
- the GPU may send the frame image of the first viewpoint to the display device, and the display device uses the frame image of the first viewpoint to refresh the frame image of the first viewpoint currently being displayed.
- the first viewpoint frame image sent by the GPU may be transmitted based on the MIPI (Mobile Industry Processor Interface) protocol.
- the method also includes:
- Step S70: acquiring, at time T2 after receiving the synchronization signal, the second viewpoint frame image of the second viewpoint currently cached in the display buffer;
- at time T2 after receiving the synchronization signal, the GPU acquires the frame image of the second viewpoint currently cached in the display buffer (hereinafter referred to as the second viewpoint frame image, for distinction). That is, the frame image that the GPU asynchronously time-warped and stored in the display buffer at the moment of receiving the synchronization signal is fetched as the second viewpoint frame image at time T2 after receiving the synchronization signal.
- Step S80: using the second viewpoint frame image to refresh the frame image of the second viewpoint currently displayed on the display device.
- the GPU uses the frame image of the second viewpoint to refresh the frame image of the second viewpoint currently displayed on the display device.
- the GPU may send the frame image of the second viewpoint to the display device, and the display device uses the frame image of the second viewpoint to refresh the frame image of the second viewpoint currently being displayed.
- the second viewpoint frame image sent by the GPU may be sent based on the MIPI protocol.
- FIG. 3 shows the processing flow of the rendering thread and the ATW thread over two periods T1.
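- The display side of this flow can be sketched as follows (illustrative only; send_mipi and display_buffer are stand-ins for the MIPI transfer and the per-viewpoint display-buffer locations, not API from the patent):

```python
import threading

def refresh_period(T2, display_buffer, send_mipi):
    # steps S50/S60: at the vsync moment, refresh the displayed first-viewpoint
    # image with the frame stored in the display buffer one step earlier
    send_mipi(display_buffer["first"])
    # steps S70/S80: at vsync + T2, refresh the displayed second-viewpoint image
    threading.Timer(T2, lambda: send_mipi(display_buffer["second"])).start()
```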
- the method further includes:
- Step A10: if the rendering of the first data to be rendered has not been completed at time T2 after receiving the synchronization signal, incrementing the frame-drop count of the first viewpoint once;
- in this embodiment, the GPU may change the size of T2 to adjust the durations available within one period T1 for rendering the data to be rendered of the first viewpoint and of the second viewpoint.
- the GPU may count the frame drops of the first viewpoint, initializing the count to 0 when the rendering process starts. If the GPU has not finished rendering the first data to be rendered at time T2 after receiving the synchronization signal, it increments the frame-drop count of the first viewpoint, that is, adds 1 to the existing count.
- for example, if the first data to be rendered (relative to the first received synchronization signal) has finished rendering at time T2 after the first synchronization signal, the frame-drop count of the first viewpoint remains 0; if the first data to be rendered (relative to the second received synchronization signal) has not finished at time T2 after the second synchronization signal, the frame-drop count of the first viewpoint is incremented to 1; if the first data to be rendered (relative to the third received synchronization signal) has finished at time T2 after the third synchronization signal, the count remains 1; if the first data to be rendered (relative to the fourth received synchronization signal) has not finished at time T2 after the fourth synchronization signal, the count is incremented to 2, and so on.
- Step A20: increasing T2 when it is detected that the frame-drop count of the first viewpoint reaches a preset number, wherein the increased T2 is smaller than T1.
- when the GPU detects that the frame-drop count of the first viewpoint reaches the preset number, it may increase T2, but the increased T2 must still be smaller than T1.
- the preset number may be set in advance as needed and is not limited here. For example, if the preset number is 10, the GPU increases T2 when the frame-drop count of the first viewpoint reaches 10.
- T2 may be increased by adding a preset value: for example, if the original value is 10 and the preset value is 2, the increased T2 is 12. Alternatively, T2 may be set directly to a value larger than the original, for example directly setting T2 from 10 to 12.
- after increasing T2, the GPU may reset the frame-drop count of the first viewpoint and start counting again, increasing T2 further when the count reaches the preset number again. For example, T2 is increased to 12 the first time the preset count is reached, and to 14 the second time.
- An upper limit can be set to ensure that the increased T2 is smaller than T1.
- in this embodiment, when the first viewpoint drops frames frequently, T2 is increased to extend the duration available for rendering the data to be rendered of the first viewpoint, improving the rendering success rate of the first viewpoint's data and thereby increasing the image rendering usage rate of the first viewpoint.
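- A minimal sketch of this counting-and-adjustment logic follows; the threshold of 10, the step of 2, and the clamp below T1 echo the examples above, and the class and method names are assumptions rather than patent API:

```python
class T2Tuner:
    """Tracks first-viewpoint frame drops (step A10) and lengthens T2 when the
    preset count is reached (step A20). Times are in milliseconds."""
    def __init__(self, t2_ms, t1_ms, preset_count=10, step_ms=2.0):
        self.t2, self.t1 = t2_ms, t1_ms
        self.preset_count, self.step = preset_count, step_ms
        self.drops_first = 0                  # initialized to 0 when rendering starts

    def on_deadline_missed(self):
        self.drops_first += 1                 # step A10: one more dropped frame
        if self.drops_first >= self.preset_count:
            # step A20: extend the first viewpoint's render window, keeping T2 < T1
            self.t2 = min(self.t2 + self.step, self.t1 - 1.0)
            self.drops_first = 0              # reset and start counting again
```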
- the method further includes:
- Step A30: if the rendering of the first data to be rendered has not been completed at time T2 after receiving the synchronization signal, incrementing the frame-drop count of the first viewpoint once;
- Step A40: if the second viewpoint has data to be rendered that is being rendered at the moment the synchronization signal is received, incrementing the frame-drop count of the second viewpoint once;
- the GPU may change the size of T2 to adjust the duration of the data to be rendered available for rendering the first viewpoint and the second viewpoint within a cycle T1.
- the GPU may separately count the number of frame drops of the first viewpoint and the second viewpoint, and may initialize the number of frame drops of the first viewpoint and the second viewpoint to 0 when the rendering process starts.
- for the method of counting the frame-drop counts of the first viewpoint and the second viewpoint, refer to the way the frame-drop count of the first viewpoint is counted in step A10 of the third embodiment above, which is not repeated here.
- Step A50: increasing T2 when it is detected that the ratio of the frame-drop count of the first viewpoint to the frame-drop count of the second viewpoint is greater than a preset ratio, wherein the increased T2 is smaller than T1;
- when the GPU detects that the ratio of the frame-drop count of the first viewpoint to that of the second viewpoint is greater than the preset ratio, it may increase T2, but the increased T2 must still be smaller than T1.
- the ratio of the number of frame loss times of the first viewpoint to the number of frame loss times of the second viewpoint refers to a result obtained by dividing the number of frame loss times of the first viewpoint by the number of frame loss times of the second viewpoint.
- the preset ratio can be set in advance according to needs, and there is no limitation here, for example, it is set to 0.5.
- the manner of increasing T2 may refer to the manner of increasing T2 in step A20 of the third embodiment above, and details are not repeated here.
- Step A60: decreasing T2 when it is detected that the ratio of the frame-drop count of the second viewpoint to the frame-drop count of the first viewpoint is greater than the preset ratio, wherein the decreased T2 is greater than 0.
- when the GPU detects that the ratio of the frame-drop count of the second viewpoint to that of the first viewpoint is greater than the preset ratio, it may decrease T2, but the decreased T2 must be greater than 0.
- the ratio of the frame loss times of the second viewpoint to the frame loss times of the first viewpoint refers to the result obtained by dividing the frame loss times of the second viewpoint by the frame loss times of the first viewpoint.
- T2 may be reduced by subtracting a preset value: for example, if the original value is 10 and the preset value is 2, the reduced T2 is 8. Alternatively, T2 may be set directly to a value smaller than the original, for example directly setting T2 from 10 to 8. An upper limit can be set to ensure the increased T2 is less than T1, and a lower limit can be set to ensure the decreased T2 is greater than 0.
- in this embodiment, when the first viewpoint drops frames noticeably more often than the second viewpoint, T2 is increased to extend the duration available for rendering the data to be rendered of the first viewpoint; when the second viewpoint drops frames noticeably more often, T2 is decreased to extend the duration available for rendering the data to be rendered of the second viewpoint. This tends to equalize the rendering success rates of the first viewpoint and the second viewpoint, thereby balancing their image rendering usage rates.
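- The two-sided balancing rule can be sketched as follows (illustrative only; the 0.5 preset ratio, the step size, and the clamps mirror the examples above, and the function name is an assumption):

```python
def rebalance_t2(t2, t1, drops_first, drops_second, preset_ratio=0.5, step=2.0):
    if not (drops_first or drops_second):
        return t2                                 # no drops: nothing to adjust
    # a zero denominator is treated as an arbitrarily large ratio
    ratio_first = drops_first / drops_second if drops_second else float("inf")
    ratio_second = drops_second / drops_first if drops_first else float("inf")
    if ratio_first > preset_ratio:                # first viewpoint drops more:
        t2 = min(t2 + step, t1 - 1.0)             # extend its render window, T2 < T1
    elif ratio_second > preset_ratio:             # second viewpoint drops more:
        t2 = max(t2 - step, 1.0)                  # shrink T2, keeping T2 > 0
    return t2
```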
- the method further includes:
- Step A70: when the synchronization signal is received for the first time, setting one of the first viewpoint and the second viewpoint as the left-eye viewpoint and the other as the right-eye viewpoint;
- when the GPU receives the synchronization signal for the first time, it may set one of the first viewpoint and the second viewpoint as the left-eye viewpoint and the other as the right-eye viewpoint; that is, the first viewpoint may be set as the left-eye viewpoint and the second viewpoint as the right-eye viewpoint, or the first viewpoint may be set as the right-eye viewpoint and the second viewpoint as the left-eye viewpoint.
- Step A80: swapping the left-eye and right-eye viewpoint settings of the first viewpoint and the second viewpoint every preset time period T3 from the first time the synchronization signal is received.
- that is, starting from the first reception of the synchronization signal, the GPU swaps the left-eye and right-eye viewpoint settings of the first and second viewpoints every T3: if the first viewpoint is the left-eye viewpoint and the second viewpoint is the right-eye viewpoint, then after the swap the first viewpoint is the right-eye viewpoint and the second viewpoint is the left-eye viewpoint; if the first viewpoint is the right-eye viewpoint and the second viewpoint is the left-eye viewpoint, then after the swap the first viewpoint is the left-eye viewpoint and the second viewpoint is the right-eye viewpoint.
- T3 can be set according to specific needs. Within one period T1, the user sees one eye's picture refreshed first and then the other eye's picture, so the left/right assignment of the first and second viewpoints only determines the refresh order of the left-eye and right-eye images; T3 can therefore be set much larger than T1.
- since within one period T1 the data to be rendered of the first viewpoint is rendered before that of the second viewpoint, and the durations available for rendering the two viewpoints are relatively fixed by T2 while the actual rendering times may differ, the rendering success rates of the two viewpoints can become unbalanced: one eye's picture updates smoothly while the other eye's picture stutters. To avoid this problem, this embodiment periodically swaps the left-eye and right-eye viewpoint settings of the first and second viewpoints, so that the rendering success rates seen by the left and right eyes gradually balance over time; the user's two eyes then feel more balanced and harmonious, improving the user experience.
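- A minimal sketch of the periodic swap (illustrative; the mapping keys and the elapsed-time clock are assumptions, not patent API):

```python
class EyeAssignment:
    """Implements steps A70/A80: fix an initial left/right assignment, then
    swap it every T3. elapsed_ms is time since the first synchronization signal."""
    def __init__(self, t3_ms):
        self.t3 = t3_ms
        self.next_swap = t3_ms
        self.mapping = {"first": "left", "second": "right"}   # step A70

    def on_vsync(self, elapsed_ms):
        if elapsed_ms >= self.next_swap:                      # step A80: swap every T3
            self.mapping["first"], self.mapping["second"] = (
                self.mapping["second"], self.mapping["first"])
            self.next_swap += self.t3
        return self.mapping
```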
- the embodiment of the present application also proposes an image rendering device.
- the device includes:
- the first rendering module 10 is configured to acquire the current first data to be rendered of the first viewpoint and start rendering after receiving the synchronization signal sent according to the preset time period T1;
- the second rendering module 20 is configured to, if the rendering of the first data to be rendered has not been completed at time T2 after receiving the synchronization signal, stop rendering the first data to be rendered and obtain the current second data to be rendered of the second viewpoint to start rendering, where 0<T2<T1;
- the second rendering module 20 is further configured to, if the rendering of the first data to be rendered is completed before time T2 after receiving the synchronization signal, acquire the second data to be rendered and start rendering after the rendering of the first data to be rendered is completed;
- the cache module 30 is configured to perform asynchronous time warping on the most recently rendered frame image of the first viewpoint at time T2 after receiving the synchronization signal and store it in the display buffer, and to perform asynchronous time warping on the most recently rendered frame image of the second viewpoint at the moment of receiving the synchronization signal and store it in the display buffer.
- the first rendering module 10 includes:
- a detection unit configured to detect whether there is third data to be rendered being rendered in the second viewpoint when receiving the synchronization signal sent according to the preset time period T1;
- the first rendering unit is configured to, if so, stop rendering the third data to be rendered and acquire the current first data to be rendered of the first viewpoint to start rendering;
- the second rendering unit is configured to, if not, acquire the first data to be rendered and start rendering.
- the device also includes:
- the first acquisition module is used to acquire the first viewpoint frame image of the first viewpoint currently cached in the display buffer at the moment of receiving the synchronization signal;
- the first refreshing module is configured to refresh the frame image of the first viewpoint currently displayed in the display device by using the frame image of the first viewpoint.
- the device also includes:
- a second acquisition module configured to acquire, at time T2 after receiving the synchronization signal, the second viewpoint frame image of the second viewpoint currently cached in the display buffer;
- the second refreshing module is configured to refresh the frame image of the second viewpoint currently displayed in the display device by using the frame image of the second viewpoint.
- the device also includes:
- the first counting module is configured to add up the number of frame drops of the first viewpoint once if the rendering of the first data to be rendered has not been completed at time T2 after receiving the synchronization signal;
- the first adjustment module is configured to increase T2 when it is detected that the number of times of frame loss at the first viewpoint reaches a preset number, wherein the increased T2 is smaller than T1.
- the device also includes:
- the second counting module is configured to add up the number of frame drops of the first viewpoint once if the rendering of the first data to be rendered has not been completed at time T2 after receiving the synchronization signal;
- the third counting module is configured to add up the number of frame drops of the second viewpoint once if the second viewpoint has data to be rendered that is being rendered at the moment when the synchronization signal is received;
- the second adjustment module is used to increase T2 when it is detected that the ratio of the number of frame loss times of the first viewpoint to the number of frame loss times of the second viewpoint is greater than the preset ratio, wherein the increased T2 is smaller than T1;
- the third adjustment module is configured to decrease T2 when it is detected that the ratio of the number of frame loss of the second viewpoint to the number of frame loss of the first viewpoint is greater than the preset ratio, wherein the reduced T2 is greater than 0.
- the device also includes:
- a setting module configured to set one of the first viewpoint and the second viewpoint as the left-eye viewpoint and the other as the right-eye viewpoint when the synchronization signal is received for the first time;
- the swapping module is configured to swap the left and right eye point settings of the first point of view and the second point of view every preset time period T3 since the synchronization signal is received for the first time.
- the extended content of the specific implementation manner of the image rendering device of the present application is basically the same as that of the embodiments of the image rendering method described above, and will not be repeated here.
- the embodiment of the present application also proposes a computer-readable storage medium, on which an image rendering program is stored; when the image rendering program is executed by a processor, the steps of the image rendering method described above are implemented.
- the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
- in essence, or for the part contributing to the prior art, the technical solution of the present application can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disc) and contains several instructions to make a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) execute the methods described in the various embodiments of the present application.
Abstract
Description
Claims (10)
- An image rendering method, characterized in that the method comprises the following steps: after receiving a synchronization signal sent according to a preset time period T1, acquiring the current first data to be rendered of a first viewpoint and starting rendering; if the rendering of the first data to be rendered has not been completed at time T2 after receiving the synchronization signal, stopping rendering the first data to be rendered, and acquiring the current second data to be rendered of a second viewpoint to start rendering, where 0<T2<T1; if the rendering of the first data to be rendered is completed before time T2 after receiving the synchronization signal, acquiring the second data to be rendered and starting rendering after the rendering of the first data to be rendered is completed; and performing, at time T2 after receiving the synchronization signal, asynchronous time warping on the most recently rendered frame image of the first viewpoint and storing it in a display buffer, and performing, at the moment of receiving the synchronization signal, asynchronous time warping on the most recently rendered frame image of the second viewpoint and storing it in the display buffer.
- The image rendering method according to claim 1, characterized in that the step of acquiring the current first data to be rendered of the first viewpoint and starting rendering after receiving the synchronization signal sent according to the preset time period T1 comprises: when receiving the synchronization signal sent according to the preset time period T1, detecting whether the second viewpoint has third data to be rendered that is being rendered; if so, stopping rendering the third data to be rendered, and acquiring the current first data to be rendered of the first viewpoint to start rendering; if not, acquiring the first data to be rendered to start rendering.
- The image rendering method according to claim 1, characterized in that the method further comprises: acquiring, at the moment of receiving the synchronization signal, the first viewpoint frame image of the first viewpoint currently cached in the display buffer; and refreshing, by using the first viewpoint frame image, the frame image of the first viewpoint currently displayed on the display device.
- The image rendering method according to claim 1, characterized in that the method further comprises: acquiring, at time T2 after receiving the synchronization signal, the second viewpoint frame image of the second viewpoint currently cached in the display buffer; and refreshing, by using the second viewpoint frame image, the frame image of the second viewpoint currently displayed on the display device.
- The image rendering method according to any one of claims 1 to 4, characterized in that the method further comprises: if the rendering of the first data to be rendered has not been completed at time T2 after receiving the synchronization signal, incrementing the frame-drop count of the first viewpoint once; and when it is detected that the frame-drop count of the first viewpoint reaches a preset number, increasing T2, wherein the increased T2 is smaller than T1.
- The image rendering method according to any one of claims 1 to 4, characterized in that the method further comprises: if the rendering of the first data to be rendered has not been completed at time T2 after receiving the synchronization signal, incrementing the frame-drop count of the first viewpoint once; if the second viewpoint has data to be rendered that is being rendered at the moment of receiving the synchronization signal, incrementing the frame-drop count of the second viewpoint once; when it is detected that the ratio of the frame-drop count of the first viewpoint to the frame-drop count of the second viewpoint is greater than a preset ratio, increasing T2, wherein the increased T2 is smaller than T1; and when it is detected that the ratio of the frame-drop count of the second viewpoint to the frame-drop count of the first viewpoint is greater than the preset ratio, decreasing T2, wherein the decreased T2 is greater than 0.
- The image rendering method according to any one of claims 1 to 4, characterized in that the method further comprises: when the synchronization signal is received for the first time, setting one of the first viewpoint and the second viewpoint as the left-eye viewpoint and the other as the right-eye viewpoint; and swapping the left-eye and right-eye viewpoint settings of the first viewpoint and the second viewpoint every preset time period T3 from the first time the synchronization signal is received.
- An image rendering device, characterized in that the device comprises: a first rendering module configured to acquire the current first data to be rendered of a first viewpoint and start rendering after receiving a synchronization signal sent according to a preset time period T1; a second rendering module configured to, if the rendering of the first data to be rendered has not been completed at time T2 after receiving the synchronization signal, stop rendering the first data to be rendered and acquire the current second data to be rendered of a second viewpoint to start rendering, where 0<T2<T1, the second rendering module being further configured to, if the rendering of the first data to be rendered is completed before time T2 after receiving the synchronization signal, acquire the second data to be rendered and start rendering after the rendering of the first data to be rendered is completed; and a cache module configured to perform, at time T2 after receiving the synchronization signal, asynchronous time warping on the most recently rendered frame image of the first viewpoint and store it in a display buffer, and to perform, at the moment of receiving the synchronization signal, asynchronous time warping on the most recently rendered frame image of the second viewpoint and store it in the display buffer.
- Image rendering equipment, characterized in that the image rendering equipment comprises: a memory, a processor, and an image rendering program stored in the memory and executable on the processor, wherein the image rendering program, when executed by the processor, implements the steps of the image rendering method according to any one of claims 1 to 7.
- A computer-readable storage medium, characterized in that an image rendering program is stored on the computer-readable storage medium, and the image rendering program, when executed by a processor, implements the steps of the image rendering method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/007,202 US12315029B2 (en) | 2021-07-27 | 2021-11-02 | Image rendering method, device, equipment and computer-readable storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110853251.4A CN113538648B (zh) | 2021-07-27 | 2021-07-27 | Image rendering method, device, equipment and computer-readable storage medium
CN202110853251.4 | 2021-07-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023005042A1 (zh) | 2023-02-02 |
Family
ID=78089272
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/128030 WO2023005042A1 (zh) | Image rendering method, device, equipment and computer-readable storage medium | 2021-07-27 | 2021-11-02 |
Country Status (3)
Country | Link |
---|---|
US (1) | US12315029B2 (zh) |
CN (1) | CN113538648B (zh) |
WO (1) | WO2023005042A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116059637A (zh) * | 2023-04-06 | 2023-05-05 | Guangzhou Quwan Network Technology Co., Ltd. | Virtual object rendering method, device, storage medium and electronic equipment |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113538648B (zh) | 2021-07-27 | 2024-04-30 | Goertek Technology Co., Ltd. | Image rendering method, device, equipment and computer-readable storage medium |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090100096A1 (en) * | 2005-08-01 | 2009-04-16 | Phanfare, Inc. | Systems, Devices, and Methods for Transferring Digital Information |
JP4312238B2 (ja) * | 2007-02-13 | 2009-08-12 | Sony Computer Entertainment Inc. | Image conversion device and image conversion method |
JPWO2008105092A1 (ja) * | 2007-02-28 | 2010-06-03 | Panasonic Corporation | Graphics drawing device and graphics drawing method |
CN105224288B (zh) * | 2014-06-27 | 2018-01-23 | Peking University Shenzhen Graduate School | Binocular three-dimensional graphics rendering method and related system |
US20170125064A1 (en) * | 2015-11-03 | 2017-05-04 | Seastar Labs, Inc. | Method and Apparatus for Automatic Video Production |
CN108604385A (zh) * | 2016-11-08 | 2018-09-28 | Huawei Technologies Co., Ltd. | Application interface display method and apparatus |
US10861215B2 (en) * | 2018-04-30 | 2020-12-08 | Qualcomm Incorporated | Asynchronous time and space warp with determination of region of interest |
CN108921951B (zh) * | 2018-07-02 | 2023-06-20 | BOE Technology Group Co., Ltd. | Virtual reality image display method and apparatus, and virtual reality device |
US11127214B2 (en) * | 2018-09-17 | 2021-09-21 | Qualcomm Incorporated | Cross layer traffic optimization for split XR |
WO2020062052A1 (en) * | 2018-09-28 | 2020-04-02 | Qualcomm Incorporated | Smart and dynamic janks reduction technology |
CN109819232B (zh) * | 2019-02-19 | 2021-03-26 | BOE Technology Group Co., Ltd. | Image processing method, image processing device, and display device |
CN109920040B (zh) * | 2019-03-01 | 2023-10-27 | BOE Technology Group Co., Ltd. | Display scene processing method and device, and storage medium |
JP7184192B2 (ja) * | 2019-07-01 | 2022-12-06 | Nippon Telegraph and Telephone Corporation | Delay measurement device, delay measurement method, and program |
CN111652962B (zh) * | 2020-06-08 | 2024-04-23 | Beijing Lenovo Software Ltd. | Image rendering method, head-mounted display device, and storage medium |
CN112347408B (zh) * | 2021-01-07 | 2021-04-27 | Beijing Xiaomi Mobile Software Co., Ltd. | Rendering method, apparatus, electronic device, and storage medium |
- 2021-07-27: CN application CN202110853251.4A (patent CN113538648B), active
- 2021-11-02: US application US18/007,202 (patent US12315029B2), active
- 2021-11-02: WO application PCT/CN2021/128030 (publication WO2023005042A1), IP right granted
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190043448A1 (en) * | 2018-06-29 | 2019-02-07 | Intel Corporation | Computers for supporting multiple virtual reality display devices and related methods |
CN109358830A (zh) * | 2018-09-20 | 2019-02-19 | BOE Technology Group Co., Ltd. | Dual-screen display method for eliminating AR/VR picture tearing, and AR/VR display device |
CN109887065A (zh) * | 2019-02-11 | 2019-06-14 | BOE Technology Group Co., Ltd. | Image rendering method and device |
CN112230776A (zh) * | 2020-10-29 | 2021-01-15 | Beijing BOE Optoelectronics Technology Co., Ltd. | Virtual reality display method, device and storage medium |
CN113538648A (zh) * | 2021-07-27 | 2021-10-22 | Goertek Optical Technology Co., Ltd. | Image rendering method, device, equipment and computer-readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
US12315029B2 (en) | 2025-05-27 |
US20240265485A1 (en) | 2024-08-08 |
CN113538648A (zh) | 2021-10-22 |
CN113538648B (zh) | 2024-04-30 |
Legal Events
- 121: Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21951614; Country of ref document: EP; Kind code of ref document: A1)
- NENP: Non-entry into the national phase (Ref country code: DE)
- 32PN: Ep: public notification in the EP bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07.06.2024))
- 122: Ep: PCT application non-entry in European phase (Ref document number: 21951614; Country of ref document: EP; Kind code of ref document: A1)
- WWG: WIPO information: grant in national office (Ref document number: 18007202; Country of ref document: US)