WO2023000547A1 - Image processing method, device and computer-readable storage medium - Google Patents

Image processing method, device and computer-readable storage medium

Info

Publication number
WO2023000547A1
Authority
WO
WIPO (PCT)
Prior art keywords
eye image
rendering process
image
rendering
time period
Prior art date
Application number
PCT/CN2021/129157
Other languages
English (en)
French (fr)
Inventor
邱绪东
Original Assignee
歌尔股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 歌尔股份有限公司
Publication of WO2023000547A1
Priority to US18/474,285 (published as US20240020913A1)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/341 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using temporal multiplexing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43072 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of multiple content streams on the same device

Definitions

  • the present application relates to the field of image processing, and in particular to an image processing method, device and computer-readable storage medium.
  • AR/VR uses image rendering technology to refresh the rendered virtual image to the display device, and users experience the effect of virtual reality/augmented reality through the head-mounted display device.
  • because the rendering process takes time, there is a time delay between the actual and the perceived.
  • during rendering, the user's head or the head-mounted device worn by the user may move, resulting in a certain time delay between the user's head posture information and the image data output by the head-mounted device.
  • the embodiments of the present application aim to solve the problem of dizziness caused by excessive time delay during the rendering process by providing an image processing method, device, and computer-readable storage medium.
  • the present application provides an image processing method on the one hand, the method comprising:
  • asynchronous time warping is performed on the right-eye image after the second rendering process; wherein the first time period, the second time period and the third time period constitute the first rendering cycle, and at least part of the fourth time period overlaps at least part of the third time period.
  • the sum of the first time period and the second time period is greater than half of the first rendering cycle.
  • the step of asynchronously time warping the left-eye image after the second rendering process includes:
  • the step of performing first rendering processing on the left-eye image and the right-eye image includes:
  • the step of asynchronously time warping the right-eye image after the second rendering process includes:
  • the step of performing a second rendering process on the left-eye image obtained by the first rendering process includes:
  • the step of performing a second rendering process on the right-eye image obtained by the first rendering process includes:
  • before performing asynchronous time warping on the left-eye image after the second rendering process, the method further includes:
  • before performing asynchronous time warping on the right-eye image after the second rendering process, the method further includes:
  • the method also includes:
  • the asynchronously time-warped left-eye image and the asynchronously time-warped right-eye image are respectively sent to the display component for display.
  • the head posture information includes at least one of the following:
  • an image processing device includes a memory, a processor, and an image processing program stored in the memory and running on the processor; when the processor executes the image processing program, the steps of the above image processing method are implemented.
  • another aspect of the present application provides a computer-readable storage medium on which an image processing program is stored; when the image processing program is executed by a processor, the steps of the above image processing method are implemented.
  • This application proposes an image processing method: a first rendering process is performed on the left-eye image and the right-eye image in the first time period; a second rendering process is performed on the left-eye image obtained by the first rendering process in the second time period; a second rendering process is performed on the right-eye image obtained by the first rendering process in the third time period; asynchronous time warping is performed on the left-eye image after the second rendering process in the fourth time period; and asynchronous time warping is performed on the right-eye image after the second rendering process in the fifth time period.
  • by performing the first rendering process on the left-eye and right-eye images and starting their asynchronous time warping at appropriate times, the time delay in the rendering process can be reduced, thereby reducing user dizziness.
  • FIG. 1 is a schematic diagram of the terminal structure of the hardware operating environment involved in the solution of the embodiment of the present application;
  • FIG. 2 is a schematic flow chart of the first embodiment of the image processing method of the present application;
  • FIG. 3 is a schematic flow diagram of performing asynchronous time warping on the left-eye image after the second rendering process in the image processing method of the present application;
  • FIG. 4 is a schematic flow diagram of performing asynchronous time warping on the right-eye image after the second rendering process in the image processing method of the present application;
  • FIG. 5 is a schematic diagram of the original asynchronous time warping processing mechanism;
  • FIG. 6 is a schematic diagram of the asynchronous time warping processing mechanism of the image processing method of the present application.
  • the main solution of the embodiments of the present application is: performing a first rendering process on the left-eye image and the right-eye image in a first time period; performing a second rendering process on the left-eye image obtained by the first rendering process in a second time period; performing a second rendering process on the right-eye image obtained by the first rendering process in a third time period; performing asynchronous time warping on the left-eye image after the second rendering process in a fourth time period; and performing asynchronous time warping on the right-eye image after the second rendering process in a fifth time period; wherein the first time period, the second time period and the third time period constitute the first rendering cycle, and at least part of the fourth time period overlaps at least part of the third time period.
  • in some cases, while the graphics engine is rendering the graphics of a frame, the user's head or the VR headset worn by the user may move, causing the position and/or orientation information of the user's head to be inaccurate when the frame is output to the display, thereby making the user feel dizzy.
  • therefore, the present application performs the first rendering process on the left-eye image and the right-eye image in the first time period; performs the second rendering process on the left-eye image obtained by the first rendering process in the second time period; performs the second rendering process on the right-eye image obtained by the first rendering process in the third time period; performs asynchronous time warping on the left-eye image after the second rendering process in the fourth time period; and performs asynchronous time warping on the right-eye image after the second rendering process in the fifth time period.
  • by performing the first rendering process on the left-eye and right-eye images and starting their asynchronous time warping at appropriate times, the time delay in the rendering process can be reduced, thereby reducing user dizziness.
  • further, performing the first rendering process on the left-eye and right-eye images shortens the rendering time of the right-eye image, so the asynchronous time warping of the left-eye image can be started later; this avoids rendering waste and lets more real-time frames reach the display, reducing user motion sickness.
  • FIG. 1 is a schematic structural diagram of a terminal device in a hardware operating environment involved in the solution of the embodiment of the present application.
  • the terminal device may include: a processor 1001 , such as a CPU, a network interface 1004 , a user interface 1003 , a memory 1005 , and a communication bus 1002 .
  • the communication bus 1002 is used to realize connection and communication between these components.
  • the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
  • the network interface 1004 may include a standard wired interface and a wireless interface (such as a WI-FI interface).
  • the memory 1005 can be a high-speed RAM memory, or a non-volatile memory, such as a disk memory.
  • the memory 1005 may also be a storage device independent of the aforementioned processor 1001 .
  • the structure of the terminal device shown in FIG. 1 does not constitute a limitation on the terminal device; it may include more or fewer components than shown, combine some components, or arrange the components differently.
  • the memory 1005, as a computer-readable storage medium, may include an image processing program.
  • the network interface 1004 is mainly used for data communication with the background server;
  • the user interface 1003 is mainly used for data communication with the client (user side);
  • the processor 1001 can be used to call the image processing program, and do the following:
  • asynchronous time warping is performed on the right-eye image after the second rendering process; wherein the first time period, the second time period and the third time period constitute the first rendering cycle, and at least part of the fourth time period overlaps at least part of the third time period.
  • FIG. 2 is a schematic flowchart of a first embodiment of an image processing method of the present application.
  • the embodiment of the present application provides an image processing method. It should be noted that although a logical order is shown in the flow chart, in some cases the steps shown or described may be executed in an order different from that here.
  • the image processing method of this embodiment runs on the terminal device side, and includes the following steps:
  • Step S10: performing a first rendering process on the left-eye image and the right-eye image during the first time period;
  • it should be noted that, ideally, the rendering engine uses pre-measured real-time head posture information (such as direction information and position information) before sending the content to be displayed to the user.
  • time warping can be used to adjust (e.g., rotate or shift) an image or frame to correct for head motion that occurs after (or while) the image is rendered, thereby reducing perceived latency.
  • when the user's head position changes, time warping can be applied to transform the displayed image to match the new perspective.
  • time warping means that, by collecting a large amount of gyroscope data, with enough samples the rotation and position of the user's head 16.66 ms later (roughly one frame at 60 Hz) can be predicted, and rendering then follows the predicted data.
  • time warping includes synchronous time warping and asynchronous time warping; this application uses asynchronous time warping, which is performed on another thread in parallel (that is, asynchronously) with the rendering thread.
  • an ATW (Asynchronous Time Warp) thread can generate a new time-warped frame from the latest frame completed by the rendering thread.
  • One advantage of ATW is that the rendering thread is not delayed by the calculations involved in the time warp, and can take advantage of the multiprocessing capabilities of the underlying software and/or hardware platform.
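  • as a rough illustration of the render-thread/ATW-thread split described above (a minimal sketch, not code from the patent; renderScene, samplePose, waitForWarpPoint and warpAndPresent are hypothetical stand-ins, stubbed here so the sketch compiles):

```cpp
#include <chrono>
#include <memory>
#include <mutex>
#include <thread>

struct Pose  { float orientation[4]; float position[3]; };   // quaternion + xyz
struct Frame { Pose renderPose; /* plus per-eye color buffers */ };

static Frame renderScene()      { return Frame{}; }          // stub: draw both eyes
static Pose  samplePose()       { return Pose{}; }           // stub: freshest IMU pose
static void  waitForWarpPoint() { std::this_thread::sleep_for(std::chrono::milliseconds(8)); }
static void  warpAndPresent(const Frame&, const Pose&) {}    // stub: re-project + scan out

std::mutex g_frameMutex;
std::shared_ptr<Frame> g_latestFrame;   // last frame the render thread finished

void renderThread() {
    for (;;) {
        auto frame = std::make_shared<Frame>(renderScene()); // may overrun a vsync
        std::lock_guard<std::mutex> lock(g_frameMutex);
        g_latestFrame = std::move(frame);                    // publish only complete frames
    }
}

void atwThread() {
    for (;;) {
        waitForWarpPoint();                 // a fixed point in each display cycle
        std::shared_ptr<Frame> frame;
        {
            std::lock_guard<std::mutex> lock(g_frameMutex);
            frame = g_latestFrame;          // a late render means the previous frame is reused
        }
        if (frame) warpAndPresent(*frame, samplePose());     // correct for the newest head pose
    }
}
```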
  • referring to FIG. 5, when a VR headset renders the left-eye image and the right-eye image, it usually renders the left-eye image in the first T/2 (T being the rendering cycle), then asynchronously time-warps the left-eye image while rendering the right-eye image; that is, the asynchronous time warping of the left-eye image and the rendering of the right-eye image run in parallel to increase the data processing speed.
  • the current practice is to perform asynchronous time warping on the left-eye image at a fixed T/2; if rendering is not finished at T/2, the previous frame is used instead for the asynchronous time warping, as shown at ① and ② in the figure. If the left-eye image takes longer than T/2 to render, it is discarded, so the left-eye content rendered after T/2 is wasted.
  • based on this problem, the left-eye and right-eye images can be rendered through the multi-view rendering function (MultiView Rendering).
  • the multi-view rendering function can reduce the number of duplicated object draw calls in VR applications, allowing the GPU to broadcast objects to the left and right eyes in one draw call, which helps reduce CPU load, resulting in fewer dropped frames and optimized rendering latency.
  • rendering the left- and right-eye images requires operations such as data loading, texture mapping, sampling and vertex processing. Most of these rendering operations are the same for both eyes; the only difference between the two renderings is the transformation applied to the vertices. Therefore, to avoid repeating common processing such as data loading and texture mapping for the left- and right-eye images, when rendering the left-eye image the image data is obtained from the first image buffer (such as a CPU buffer), the left-eye image and the right-eye image are obtained from the image data according to preset conditions, and the first rendering process is then performed on them. The first rendering process refers to performing common processing on the left-eye and right-eye images at the same time; this common processing includes rendering operations such as data loading, texture mapping and sampling.
  • in one embodiment, a structure array is created by an OpenGL (Open Graphics Library) program; the structure array consists of two elements representing the left-eye image and the right-eye image respectively. The structure array is bound to a frame buffer object (Frame Buffer Object, FBO), and at the same time the left-eye texture buffer and the right-eye texture buffer are attached to that FBO.
  • in this way, when rendering the left-eye image, the image frame is loaded from the frame buffer object FBO simultaneously into the left-eye texture buffer (GPU buffer) and the right-eye texture buffer (GPU buffer); the left-eye image is then acquired from the left-eye texture buffer according to the image frame, the right-eye image is acquired from the right-eye texture buffer according to the image frame, and the first rendering process is performed on the acquired left-eye and right-eye images.
  • since the structure array is bound to a single frame buffer object, the image driver only needs to provide one GPU command buffer (Command Buffer), so that the left and right eyes share the command buffer, which is re-enabled each time rendering is performed.
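  • a minimal sketch of such a multi-view setup, assuming an OpenGL ES 3.x context that exposes the GL_OVR_multiview2 extension (eyeWidth/eyeHeight and the shader body are illustrative, and this is not code from the patent):

```cpp
#include <GLES3/gl3.h>

// The multiview attach call comes from the OVR_multiview extension; on most
// platforms it is fetched through the extension loader. Declared here so the
// sketch is self-contained.
extern "C" void glFramebufferTextureMultiviewOVR(GLenum target, GLenum attachment,
                                                 GLuint texture, GLint level,
                                                 GLint baseViewIndex, GLsizei numViews);

// Vertex shader: all processing is shared between the eyes; only the per-eye
// transform differs, selected by gl_ViewID_OVR.
static const char* kMultiviewVS = R"(#version 300 es
#extension GL_OVR_multiview2 : require
layout(num_views = 2) in;
uniform mat4 u_viewProj[2];              // one view-projection matrix per eye
layout(location = 0) in vec4 a_position;
void main() { gl_Position = u_viewProj[gl_ViewID_OVR] * a_position; }
)";

// Build the two-layer render target: layer 0 = left eye, layer 1 = right eye.
GLuint setupMultiviewFBO(GLsizei eyeWidth, GLsizei eyeHeight) {
    GLuint eyeArrayTex = 0, fbo = 0;
    glGenTextures(1, &eyeArrayTex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, eyeArrayTex);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, eyeWidth, eyeHeight, 2);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
    // Attach both layers so one draw call is broadcast to both views.
    glFramebufferTextureMultiviewOVR(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                     eyeArrayTex, 0 /*level*/,
                                     0 /*baseViewIndex*/, 2 /*numViews*/);
    return fbo;
}
```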
  • since the common processing of the left- and right-eye images is performed when rendering the left-eye image, the right eye does not need to perform rendering operations such as data loading, texture mapping and sampling when the right-eye image is rendered, thereby shortening the rendering time of the right-eye image.
  • Step S20: performing a second rendering process on the left-eye image obtained by the first rendering process during the second time period;
  • after the first rendering process of the left-eye image is completed, a second rendering process is performed in the second time period on the left-eye image obtained by the first rendering process; this second rendering process refers to performing pose prediction and compensation on the left-eye image.
  • in one embodiment, two image transformation matrices corresponding to the left eye and the right eye are generated from the preset attributes of two virtual cameras, where the attributes of a virtual camera may include the position of the virtual camera, the orientation of the virtual camera, the angle between the two virtual cameras, and so on; the image transformation matrix corresponding to the left-eye image obtained by the first rendering process is determined, and the left-eye image transformation matrix is used to perform a projection transformation on the left-eye image so as to project it into the projection window; that is, the left-eye image is drawn using the left-eye image transformation matrix, and the drawn left-eye image is rendered to the second image buffer (GPU buffer), thereby realizing the transformation from three-dimensional to two-dimensional coordinates.
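  • for illustration only (the patent gives no formulas), a stereo pair of view-projection matrices could be derived from such virtual-camera attributes; here the two cameras are reduced to a simple half-IPD horizontal offset, and Mat4/Vec3 with the lookAt/perspective/mul helpers are hypothetical math utilities (declared, not defined):

```cpp
#include <array>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; };

Mat4 lookAt(Vec3 eye, Vec3 center, Vec3 up);                          // hypothetical helper
Mat4 perspective(float fovY, float aspect, float zNear, float zFar);  // hypothetical helper
Mat4 mul(const Mat4& a, const Mat4& b);                               // hypothetical helper

std::array<Mat4, 2> eyeViewProj(Vec3 headPos, Vec3 forward, Vec3 up, Vec3 right,
                                float ipd, float fovY, float aspect) {
    std::array<Mat4, 2> vp;
    const float offset[2] = {-0.5f * ipd, +0.5f * ipd};               // left, right eye
    for (int eye = 0; eye < 2; ++eye) {
        // Shift each virtual camera sideways from the head position.
        Vec3 eyePos{headPos.x + right.x * offset[eye],
                    headPos.y + right.y * offset[eye],
                    headPos.z + right.z * offset[eye]};
        Vec3 target{eyePos.x + forward.x, eyePos.y + forward.y, eyePos.z + forward.z};
        // Projection * view maps 3D world coordinates to 2D screen coordinates.
        vp[eye] = mul(perspective(fovY, aspect, 0.1f, 100.0f), lookAt(eyePos, target, up));
    }
    return vp;
}
```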
  • Step S30: performing a second rendering process on the right-eye image obtained by the first rendering process during the third time period;
  • after the first rendering process of the right-eye image is completed, a second rendering process is performed in the third time period on the right-eye image obtained by the first rendering process; this second rendering process refers to performing pose prediction and compensation on the right-eye image.
  • in one embodiment, two image transformation matrices corresponding to the left eye and the right eye are generated from the preset attributes of two virtual cameras, where the attributes of a virtual camera may include the position of the virtual camera, the orientation of the virtual camera, the angle between the two virtual cameras, and so on; the image transformation matrix corresponding to the right-eye image obtained by the first rendering process is determined, and the right-eye image transformation matrix is used to perform a projection transformation on the right-eye image so as to project it into the projection window; that is, the right-eye image is drawn using the right-eye image transformation matrix, and the drawn right-eye image is rendered to the third image buffer (GPU buffer), thereby realizing the transformation from three-dimensional to two-dimensional coordinates.
  • Step S40: performing asynchronous time warping on the left-eye image after the second rendering process in the fourth time period;
  • asynchronous time warping is performed on the left-eye image after the second rendering process is completed.
  • in one embodiment, when the elapsed time of the current rendering cycle reaches a preset duration, asynchronous time warping is performed on the left-eye image after the second rendering process, wherein the preset duration is longer than half a rendering cycle.
  • for example, referring to FIG. 6, the preset duration is 3T/4: when the elapsed time of the current rendering cycle reaches 3T/4, asynchronous time warping is performed on the left-eye image after the second rendering process.
  • the preset duration can be set according to the running requirements of the rendering program, which is not limited here.
  • it should be noted that the asynchronous time warp is currently performed on the left-eye image at T/2, so if the left-eye image has not finished rendering at T/2, the left-eye content rendered after T/2 is wasted.
  • in this application, since the rendering time of the right-eye image is shortened, the time before the left-eye image is asynchronously time-warped can be extended, ensuring that the warp is performed after the left-eye image finishes rendering, which avoids rendering waste and reduces dizziness.
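  • the delayed trigger can be sketched as follows (the 3T/4 threshold comes from FIG. 6; warpLeftEye is a hypothetical placeholder):

```cpp
#include <chrono>
#include <thread>

using Clock = std::chrono::steady_clock;

void warpLeftEye();  // hypothetical: ATW on the second-rendered left-eye image

// Instead of warping at a fixed T/2, wait until a preset fraction of the
// rendering cycle T (greater than T/2, e.g. 3T/4) so a left eye that finishes
// rendering a little after T/2 is still used rather than discarded.
void scheduleLeftEyeWarp(Clock::time_point cycleStart, Clock::duration T) {
    std::this_thread::sleep_until(cycleStart + 3 * T / 4);
    warpLeftEye();
}
```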
  • after the asynchronous time warping of the left-eye image is completed, the warped left-eye image is rendered to an image buffer, such as a single buffer (CPU buffer), and the left-eye image is pushed from the image buffer through the MIPI bus to the display for display.
  • Step S50: performing asynchronous time warping on the right-eye image after the second rendering process in the fifth time period; wherein the first time period, the second time period and the third time period constitute the first rendering cycle, and at least part of the fourth time period overlaps at least part of the third time period.
  • after the second rendering process of the right-eye image is completed, the right-eye image is asynchronously time-warped in the fifth time period; the warped right-eye image is rendered to an image buffer, such as a single buffer (CPU buffer), and pushed from the image buffer through the MIPI bus to the display for display.
  • the rendering process of the left and right eye images is executed in the rendering thread
  • the ATW process of the left and right eye images is executed in the ATW thread.
  • the image buffer to which the warped left-eye image is rendered and the image buffer to which the warped right-eye image is rendered may be different image buffer areas.
  • the first time period and the second time period are both left-eye rendering time periods: part of the left-eye image is rendered in the first time period, and the remainder is completed by the end of the second time period, so the rendering time of the left-eye image is the sum of the first time period and the second time period.
  • the third time period is the rendering time of the right-eye image; therefore, the first, second and third time periods together constitute the first rendering cycle. Since the common rendering operations of the two eyes are completed while the left-eye image is rendered, the rendering time of the right-eye image is shortened, and the asynchronous time warping of the left-eye image can be started later.
  • because the asynchronous time warping of the left-eye image is started later, the rendering time of the left-eye image becomes longer; therefore, the sum of the first time period and the second time period is greater than half of the first rendering cycle.
  • since the right-eye image is rendered while the left-eye image is asynchronously time-warped, at least part of the fourth time period overlaps at least part of the third time period.
  • the above-mentioned fifth time period may be located in a second rendering cycle after the first rendering cycle.
  • in this embodiment, the left- and right-eye images are rendered using the multi-view rendering function, which shortens the rendering time of the right-eye image.
  • in this way, the asynchronous time warp of the left-eye image can be delayed to ensure that it is performed after the left-eye image finishes rendering, avoiding rendering waste and reducing dizziness.
  • Step S41: acquiring the first head posture information of the user;
  • Step S42: performing asynchronous time warping on the left-eye image after the second rendering process according to the first head posture information.
  • when performing asynchronous time warping on the left-eye image after the second rendering process, the first head pose information of the user may be obtained; the first head pose information may include information indicating (or identifying) the head pose, and may include at least one of position information indicating the user's head or the head-mounted VR device and direction information indicating the user's head or the head-mounted device.
  • for example, the user's first head posture information may be obtained from a tracking device that tracks the posture of the VR device worn by the user, such as position information and/or direction information of the user's head or the VR device; the tracking device may be part of the head-mounted VR device, such as an IMU, an accelerometer, a camera or another tracking device used to track the posture of the head-mounted VR device or the user's head.
  • then, the rotation direction or position of the user's head is predicted from the acquired first head posture information, the left-eye image after the second rendering process is asynchronously time-warped according to the prediction, and the warped left-eye image is rendered to an image buffer, such as a single buffer (Single buffer), from which it is sent to the display for display.
  • for example, the head 6DoF quaternion data is acquired, a predicted rotation matrix is calculated from the head 6DoF quaternion data, and the prediction matrix is used to process the left-eye image after the second rendering process to obtain the warped (rotated or adjusted) left-eye image; the warped left-eye image is then rendered to the image buffer, which pushes the left-eye image to the display through hardware for display.
  • in this embodiment, time-warping (e.g., rotating or adjusting) the left-eye image to correct for head movement that occurs after (or while) the left-eye image is rendered reduces perceived latency, thereby reducing the user's dizziness.
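  • a minimal sketch of the quaternion-to-rotation-matrix step mentioned above (the gyroscope-based prediction itself is reduced to a hypothetical predictPose placeholder; this is not the patent's implementation):

```cpp
struct Quat { float w, x, y, z; };

// Standard conversion of a unit quaternion to a 3x3 rotation matrix (row-major).
void quatToMatrix(Quat q, float R[9]) {
    const float xx = q.x * q.x, yy = q.y * q.y, zz = q.z * q.z;
    const float xy = q.x * q.y, xz = q.x * q.z, yz = q.y * q.z;
    const float wx = q.w * q.x, wy = q.w * q.y, wz = q.w * q.z;
    R[0] = 1 - 2 * (yy + zz); R[1] = 2 * (xy - wz);     R[2] = 2 * (xz + wy);
    R[3] = 2 * (xy + wz);     R[4] = 1 - 2 * (xx + zz); R[5] = 2 * (yz - wx);
    R[6] = 2 * (xz - wy);     R[7] = 2 * (yz + wx);     R[8] = 1 - 2 * (xx + yy);
}

Quat predictPose(float aheadSeconds);  // hypothetical: IMU-based pose prediction

// Build the warp (prediction) matrix one frame ahead; 16.66 ms is the
// 60 Hz frame horizon quoted earlier in this document.
void buildWarpMatrix(float R[9]) {
    quatToMatrix(predictPose(0.01666f), R);
}
```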
  • further, referring to FIG. 4, a third embodiment of the image processing method of the present application is proposed.
  • the difference between the third embodiment of the image processing method and the second embodiment is that asynchronous time warping is performed on the right-eye image after the second rendering process, so as to output and display the asynchronously time-warped right-eye image;
  • the steps include:
  • Step S51: acquiring the second head posture information of the user;
  • Step S52: performing asynchronous time warping on the right-eye image after the second rendering process according to the second head posture information.
  • when performing asynchronous time warping on the right-eye image after the second rendering process, the second head pose information of the user may be obtained; the second head pose information may include information indicating (or identifying) the head pose, and may include at least one of position information indicating the user's head or the head-mounted VR device and direction information indicating the user's head or the head-mounted device.
  • for example, the user's second head posture information may be obtained from a tracking device that tracks the posture of the VR device worn by the user, such as position information and/or direction information of the user's head or the VR device; the tracking device may be part of the head-mounted VR device, such as an IMU, an accelerometer, a camera or another tracking device used to track the posture of the head-mounted VR device or the user's head.
  • then, the rotation direction or position of the user's head is predicted from the acquired second head posture information, the right-eye image after the second rendering process is asynchronously time-warped according to the prediction, and the warped right-eye image is rendered to an image buffer, such as a single buffer (Single buffer), from which it is sent to the display for display.
  • for example, the head 6DoF quaternion data is acquired, a predicted rotation matrix is calculated from the head 6DoF quaternion data, and the prediction matrix is used to process the right-eye image after the second rendering process to obtain the warped (rotated or adjusted) right-eye image; the warped right-eye image is then rendered to the image buffer, which pushes the right-eye image to the display through hardware for display.
  • in this embodiment, time-warping (e.g., rotating or adjusting) the right-eye image to correct for head movement that occurs after (or while) the right-eye image is rendered reduces perceived latency, thereby reducing the user's dizziness.
  • in addition, the present application also provides an image processing device; the device includes a memory, a processor, and an image processing program stored in the memory and running on the processor. The device performs the first rendering process on the left-eye image and the right-eye image in the first time period; performs the second rendering process on the left-eye image obtained by the first rendering process in the second time period; performs the second rendering process on the right-eye image obtained by the first rendering process in the third time period; performs asynchronous time warping on the left-eye image after the second rendering process in the fourth time period; and performs asynchronous time warping on the right-eye image after the second rendering process in the fifth time period.
  • by performing the first rendering process on the left-eye and right-eye images and starting their asynchronous time warping at appropriate times, the time delay in the rendering process can be reduced, thereby reducing user dizziness.
  • further, performing the first rendering process on the left-eye and right-eye images shortens the rendering time of the right-eye image, so the asynchronous time warping of the left-eye image can be started later; this avoids rendering waste and lets more real-time frames reach the display, reducing the user's dizziness.
  • it should be noted that the above-mentioned image processing device may be a hardware module (such as a dedicated processor or an integrated processor), a software function module, or a complete device (such as a VR device, an AR device, or an MR device).
  • in addition, the present application also provides a computer-readable storage medium on which an image processing program is stored; when the image processing program is executed by a processor, the steps of the above image processing method are implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application discloses an image processing method, device and computer-readable storage medium. The image processing method includes: performing a first rendering process on a left-eye image and a right-eye image in a first time period; performing a second rendering process on the left-eye image obtained by the first rendering process in a second time period; performing a second rendering process on the right-eye image obtained by the first rendering process in a third time period; performing asynchronous time warping on the left-eye image after the second rendering process in a fourth time period; and performing asynchronous time warping on the right-eye image after the second rendering process in a fifth time period. By performing the first rendering process on the left-eye and right-eye images and starting the asynchronous time warping of the left-eye image and of the right-eye image at certain times, the embodiments of the present application can reduce the time delay in the rendering process and thus reduce the user's dizziness.

Description

Image processing method, device and computer-readable storage medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on July 22, 2021 under application number 202110833918.4 and entitled "Image processing method, device and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of image processing, and in particular to an image processing method, device and computer-readable storage medium.
Background
AR/VR uses image rendering technology to refresh rendered virtual images to a display device, and users experience the effect of virtual reality/augmented reality through a head-mounted display device.
Since the rendering process takes time, a time delay arises between the actual and the perceived. For example, during rendering the user's head or the head-mounted device worn by the user may move, resulting in a certain time delay between the user's head posture information and the image data output by the head-mounted device.
If the time delay is too large, it causes dizziness.
Summary of the Application
By providing an image processing method, device and computer-readable storage medium, the embodiments of the present application aim to solve the problem that an excessive time delay in the rendering process causes dizziness.
To achieve the above object, one aspect of the present application provides an image processing method, the method comprising:
performing a first rendering process on a left-eye image and a right-eye image in a first time period;
performing a second rendering process on the left-eye image obtained by the first rendering process in a second time period;
performing a second rendering process on the right-eye image obtained by the first rendering process in a third time period;
performing asynchronous time warping on the left-eye image after the second rendering process in a fourth time period;
performing asynchronous time warping on the right-eye image after the second rendering process in a fifth time period; wherein the first time period, the second time period and the third time period constitute a first rendering cycle, and at least part of the fourth time period overlaps at least part of the third time period.
Optionally, the sum of the first time period and the second time period is greater than half of the first rendering cycle.
Optionally, the step of performing asynchronous time warping on the left-eye image after the second rendering process includes:
acquiring first head posture information of a user;
performing asynchronous time warping on the left-eye image after the second rendering process according to the first head posture information.
Optionally, the step of performing the first rendering process on the left-eye image and the right-eye image includes:
acquiring image data from a first image buffer;
acquiring the left-eye image and the right-eye image from the image data according to preset conditions;
performing the first rendering process on the left-eye image and the right-eye image.
Optionally, the step of performing asynchronous time warping on the right-eye image after the second rendering process includes:
acquiring second head posture information of the user;
performing asynchronous time warping on the right-eye image after the second rendering process according to the second head posture information.
Optionally, the step of performing the second rendering process on the left-eye image obtained by the first rendering process includes:
acquiring first preset pose information corresponding to the left-eye image obtained by the first rendering process;
performing the second rendering process on the left-eye image obtained by the first rendering process according to the first preset pose information.
Optionally, the step of performing the second rendering process on the right-eye image obtained by the first rendering process includes:
acquiring second preset pose information corresponding to the right-eye image obtained by the first rendering process;
performing the second rendering process on the right-eye image obtained by the first rendering process according to the second preset pose information.
Optionally, before the asynchronous time warping is performed on the left-eye image after the second rendering process, the method further includes:
rendering the left-eye image after the second rendering process to a second image buffer.
Optionally, before the asynchronous time warping is performed on the right-eye image after the second rendering process, the method further includes:
rendering the right-eye image after the second rendering process to a third image buffer.
Optionally, the method further includes:
sending the asynchronously time-warped left-eye image and the asynchronously time-warped right-eye image respectively to a display component for display.
Optionally, the head posture information includes at least one of the following:
position information indicating the user's head or a VR device worn by the user;
direction information indicating the user's head or the VR device worn by the user.
In addition, to achieve the above object, another aspect of the present application provides an image processing device; the device includes a memory, a processor, and an image processing program stored in the memory and running on the processor; when the processor executes the image processing program, the steps of the above image processing method are implemented.
In addition, to achieve the above object, another aspect of the present application provides a computer-readable storage medium on which an image processing program is stored; when the image processing program is executed by a processor, the steps of the above image processing method are implemented.
The present application proposes an image processing method: a first rendering process is performed on the left-eye image and the right-eye image in a first time period; a second rendering process is performed on the left-eye image obtained by the first rendering process in a second time period; a second rendering process is performed on the right-eye image obtained by the first rendering process in a third time period; asynchronous time warping is performed on the left-eye image after the second rendering process in a fourth time period; and asynchronous time warping is performed on the right-eye image after the second rendering process in a fifth time period. By performing the first rendering process on the left-eye and right-eye images and starting the asynchronous time warping of the left-eye image and of the right-eye image at certain times, the embodiments of the present application can reduce the time delay in the rendering process and thus reduce the user's dizziness.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of the terminal in the hardware operating environment involved in the solutions of the embodiments of the present application;
FIG. 2 is a schematic flowchart of a first embodiment of the image processing method of the present application;
FIG. 3 is a schematic flowchart of performing asynchronous time warping on the left-eye image after the second rendering process in the image processing method of the present application;
FIG. 4 is a schematic flowchart of performing asynchronous time warping on the right-eye image after the second rendering process in the image processing method of the present application;
FIG. 5 is a schematic diagram of the original asynchronous time warping processing mechanism;
FIG. 6 is a schematic diagram of the asynchronous time warping processing mechanism of the image processing method of the present application.
The realization of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described here are only intended to explain the present application, not to limit it.
The main solution of the embodiments of the present application is: performing a first rendering process on the left-eye image and the right-eye image in a first time period; performing a second rendering process on the left-eye image obtained by the first rendering process in a second time period; performing a second rendering process on the right-eye image obtained by the first rendering process in a third time period; performing asynchronous time warping on the left-eye image after the second rendering process in a fourth time period; and performing asynchronous time warping on the right-eye image after the second rendering process in a fifth time period; wherein the first time period, the second time period and the third time period constitute a first rendering cycle, and at least part of the fourth time period overlaps at least part of the third time period.
In some cases, while the graphics engine is rendering the graphics of a frame, the user's head or the VR headset worn by the user may move, so that the position and/or orientation information of the user's head is inaccurate when the frame is output to the display, which makes the user feel dizzy.
Therefore, the present application performs a first rendering process on the left-eye image and the right-eye image in a first time period; performs a second rendering process on the left-eye image obtained by the first rendering process in a second time period; performs a second rendering process on the right-eye image obtained by the first rendering process in a third time period; performs asynchronous time warping on the left-eye image after the second rendering process in a fourth time period; and performs asynchronous time warping on the right-eye image after the second rendering process in a fifth time period. By performing the first rendering process on the left-eye and right-eye images and starting the asynchronous time warping of the left-eye image and of the right-eye image at certain times, the embodiments of the present application can reduce the time delay in the rendering process and thus reduce the user's dizziness.
Further, performing the first rendering process on the left-eye and right-eye images shortens the rendering time of the right-eye image, so the asynchronous time warping of the left-eye image can be started later. This avoids rendering waste and lets more real-time frames reach the display, reducing the user's dizziness.
As shown in FIG. 1, FIG. 1 is a schematic structural diagram of the terminal device in the hardware operating environment involved in the solutions of the embodiments of the present application.
As shown in FIG. 1, the terminal device may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to realize connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as a disk memory. The memory 1005 may also be a storage device independent of the aforementioned processor 1001.
Those skilled in the art will understand that the terminal device structure shown in FIG. 1 does not constitute a limitation on the terminal device; it may include more or fewer components than shown, combine some components, or arrange the components differently.
As shown in FIG. 1, the memory 1005, as a computer-readable storage medium, may include an image processing program.
In the terminal device shown in FIG. 1, the network interface 1004 is mainly used for data communication with a background server; the user interface 1003 is mainly used for data communication with the client (user side); and the processor 1001 may be used to call the image processing program in the memory 1005 and perform the following operations:
performing a first rendering process on the left-eye image and the right-eye image in a first time period;
performing a second rendering process on the left-eye image obtained by the first rendering process in a second time period;
performing a second rendering process on the right-eye image obtained by the first rendering process in a third time period;
performing asynchronous time warping on the left-eye image after the second rendering process in a fourth time period;
performing asynchronous time warping on the right-eye image after the second rendering process in a fifth time period; wherein the first time period, the second time period and the third time period constitute a first rendering cycle, and at least part of the fourth time period overlaps at least part of the third time period.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of a first embodiment of the image processing method of the present application.
The embodiments of the present application provide an image processing method. It should be noted that although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from the one here.
The image processing method of this embodiment runs on the terminal device side and includes the following steps:
Step S10: performing a first rendering process on the left-eye image and the right-eye image in a first time period;
It should be noted that, ideally, the rendering engine uses pre-measured real-time head posture information (such as direction information and position information) before sending content for display to the user. In reality, however, the rendering process takes time, which causes a time delay between the actual and the perceived. The picture the user sees then jitters, that is, the device cannot synchronously render the picture corresponding to the head movement, and when the picture jitters the user naturally feels dizzy.
To address this problem, time warping can be used to adjust (e.g., rotate or shift) an image or frame to correct for head motion that occurs after (or while) the image is rendered, thereby reducing perceived latency. When the user's head position changes, time warping can be applied to transform the displayed image to match the new perspective. Time warping means that, by collecting a large amount of gyroscope data, with enough samples the rotation and position of the user's head 16.66 ms later can be predicted, and rendering then follows the predicted data. Time warping includes synchronous time warping and asynchronous time warping; this application uses asynchronous time warping, which is performed on another thread in parallel (that is, asynchronously) with the rendering thread. For example, an ATW (asynchronous time warp) thread can generate a new time-warped frame from the latest frame completed by the rendering thread. One advantage of ATW is that the rendering thread is not delayed by the computations involved in the time warp, and the multiprocessing capability of the underlying software and/or hardware platform can be exploited.
Referring to FIG. 5, when a VR headset renders the left-eye image and the right-eye image, it usually renders the left-eye image in the first T/2 (T being the rendering cycle), then asynchronously time-warps the left-eye image while rendering the right-eye image; that is, the asynchronous time warping of the left-eye image and the rendering of the right-eye image run in parallel to increase the data processing speed. The current practice is to perform the asynchronous time warp of the left-eye image at a fixed T/2; if rendering is not finished at T/2, the previous frame is used instead for the asynchronous time warp, as shown at ① and ② in the figure. But if the rendering of the left-eye image exceeds T/2, it is discarded, and the left-eye content rendered beyond T/2 is wasted. To address this, the left- and right-eye images can be rendered with the multi-view rendering function (MultiView Rendering), which reduces the number of duplicated object draw calls in a VR application by allowing the GPU to broadcast objects to the left and right eyes in one draw call; this helps reduce the CPU load, giving fewer dropped frames and better rendering latency.
Rendering the left- and right-eye images requires operations such as data loading, texture mapping, sampling and vertex processing. Most of the rendering operations for the two eyes are the same; the only difference between the two renderings is the transformation applied to the vertices. Therefore, to avoid repeating common processing such as data loading and texture mapping for the two eyes, when the left-eye image is rendered the image data is obtained from the first image buffer (for example a CPU buffer), the left-eye image and the right-eye image are obtained from the image data according to preset conditions, and the first rendering process is then performed on them. The first rendering process means performing common processing on the left-eye and right-eye images at the same time; this common processing includes rendering operations such as data loading, texture mapping and sampling. The left-eye image and the right-eye image are the same picture, but because of the angular difference the contents seen by the two eyes are not completely identical. In one embodiment, with the multi-view rendering function enabled, an OpenGL (Open Graphics Library) program creates a structure array consisting of two elements representing the left-eye image and the right-eye image respectively, binds the structure array to a frame buffer object (Frame Buffer Object, FBO), and at the same time attaches the left-eye texture buffer and the right-eye texture buffer to that FBO. In this way, when the left-eye image is rendered, the image frame is loaded from the frame buffer object FBO simultaneously into the left-eye texture buffer (GPU buffer) and the right-eye texture buffer (GPU buffer); the left-eye image is then obtained from the left-eye texture buffer according to the image frame, the right-eye image is obtained from the right-eye texture buffer according to the image frame, and the first rendering process is performed on the obtained left-eye and right-eye images. Since the structure array is bound to a single frame buffer object, the image driver only needs to provide one GPU command buffer (Command Buffer), so that the two eyes share the command buffer, which is re-enabled each time rendering is performed.
In this embodiment, since the common processing of the left- and right-eye images is carried out while the left-eye image is rendered, the right eye does not need to perform rendering operations such as data loading, texture mapping and sampling when the right-eye image is rendered, which shortens the rendering time of the right-eye image.
Step S20: performing a second rendering process on the left-eye image obtained by the first rendering process in a second time period;
After the first rendering process of the left-eye image is completed, a second rendering process must be performed on the left-eye image obtained by the first rendering process in the second time period; this second rendering process refers to pose prediction and compensation for the left-eye image. In one embodiment, two image transformation matrices corresponding to the left eye and the right eye are generated from the preset attributes of two virtual cameras, where the attributes of a virtual camera may include the position of the virtual camera, the orientation of the virtual camera, the angle between the two virtual cameras, and so on; the image transformation matrix corresponding to the left-eye image obtained by the first rendering process is determined, and the left-eye image transformation matrix is used to perform a projection transformation on the left-eye image so as to project it into the projection window; that is, the left-eye image is drawn with the left-eye image transformation matrix, and the drawn left-eye image is rendered to the second image buffer (GPU buffer), thereby realizing the transformation from three-dimensional to two-dimensional coordinates.
Step S30: performing a second rendering process on the right-eye image obtained by the first rendering process in a third time period;
After the first rendering process of the right-eye image is completed, a second rendering process must be performed on the right-eye image obtained by the first rendering process in the third time period; this second rendering process refers to pose prediction and compensation for the right-eye image. In one embodiment, two image transformation matrices corresponding to the left eye and the right eye are generated from the preset attributes of two virtual cameras, where the attributes of a virtual camera may include the position of the virtual camera, the orientation of the virtual camera, the angle between the two virtual cameras, and so on; the image transformation matrix corresponding to the right-eye image obtained by the first rendering process is determined, and the right-eye image transformation matrix is used to perform a projection transformation on the right-eye image so as to project it into the projection window; that is, the right-eye image is drawn with the right-eye image transformation matrix, and the drawn right-eye image is rendered to the third image buffer (GPU buffer), thereby realizing the transformation from three-dimensional to two-dimensional coordinates.
Step S40: performing asynchronous time warping on the left-eye image after the second rendering process in a fourth time period;
After the second rendering process of the left-eye image is completed, asynchronous time warping is performed on the left-eye image in the fourth time period. In one embodiment, when the elapsed time of the current rendering cycle reaches a preset duration, asynchronous time warping is performed on the left-eye image after the second rendering process, where the preset duration is longer than half a rendering cycle. For example, referring to FIG. 6, the preset duration is 3T/4: when the elapsed time of the current rendering cycle reaches 3T/4, the left-eye image after the second rendering process is asynchronously time-warped. The preset duration can be set according to the running requirements of the rendering program and is not limited here. It should be noted that the asynchronous time warp is currently performed on the left-eye image at T/2, so if the left-eye image has not finished rendering at T/2, the left-eye content rendered after T/2 is wasted. In the present application, because the rendering time of the right-eye image becomes shorter, the time before the left-eye image is asynchronously time-warped can be extended, ensuring that the warp is performed after the left-eye image finishes rendering, which avoids rendering waste and reduces dizziness.
After the asynchronous time warping of the left-eye image is completed, the warped left-eye image is rendered to an image buffer, such as a single buffer (CPU buffer), and the left-eye image is pushed from the image buffer through the MIPI bus to the display for display.
Step S50: performing asynchronous time warping on the right-eye image after the second rendering process in a fifth time period; wherein the first time period, the second time period and the third time period constitute a first rendering cycle, and at least part of the fourth time period overlaps at least part of the third time period.
After the second rendering process of the right-eye image is completed, the right-eye image is asynchronously time-warped in the fifth time period; the warped right-eye image is rendered to an image buffer, such as a single buffer (CPU buffer), and pushed from the image buffer through the MIPI bus to the display for display. The rendering of both eye images is executed in the rendering thread, while the ATW processing of both eye images is executed in the ATW thread.
It should be noted that the image buffer to which the warped left-eye image is rendered and the image buffer to which the warped right-eye image is rendered may be different image buffer areas.
The first time period and the second time period are both left-eye rendering time periods: part of the left-eye image is rendered in the first time period, and the rest is completed by the end of the second time period, so the rendering time of the left-eye image is the sum of the first and second time periods. The third time period is the rendering time of the right-eye image; therefore, the first, second and third time periods together constitute the first rendering cycle. Because the common rendering operations of the two eyes are completed while the left-eye image is rendered, the rendering time of the right-eye image is shortened, and the asynchronous time warping of the left-eye image can be started later. Since the asynchronous time warping of the left-eye image is started later, the rendering time of the left-eye image becomes longer, so the sum of the first and second time periods is greater than half of the first rendering cycle. In addition, since the right-eye image is rendered while the left-eye image is asynchronously time-warped, at least part of the fourth time period overlaps at least part of the third time period.
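As a worked numerical example (not taken from the patent, using the 16.66 ms figure quoted earlier as the rendering cycle T): half the cycle is T/2 ≈ 8.33 ms, so the left-eye rendering (the first plus second time periods) is allowed to run past 8.33 ms; with the preset duration of 3T/4 from FIG. 6, the left-eye warp starts at about 12.5 ms into the cycle, and the remaining T/4 ≈ 4.17 ms is the stretch in which the fourth time period (left-eye warp) overlaps the tail of the third time period (right-eye rendering).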
The above fifth time period may be located in a second rendering cycle after the first rendering cycle.
In this embodiment, the left- and right-eye images are rendered with the multi-view rendering function, which shortens the rendering time of the right-eye image; the asynchronous time warping of the left-eye image can thus be postponed to ensure that it is performed after the left-eye image finishes rendering, avoiding rendering waste and reducing dizziness.
Further, referring to FIG. 3, a second embodiment of the image processing method of the present application is proposed.
The difference between the second embodiment and the first embodiment of the image processing method is that the step of performing asynchronous time warping on the left-eye image after the second rendering process, so as to output and display the asynchronously time-warped left-eye image, includes:
Step S41: acquiring first head posture information of the user;
Step S42: performing asynchronous time warping on the left-eye image after the second rendering process according to the first head posture information.
When performing asynchronous time warping on the left-eye image after the second rendering process, first head posture information of the user is acquired. The first head posture information may include information indicating (or identifying) the head posture, and may include at least one of position information of the user's head or the head-mounted VR device and direction information of the user's head or the head-mounted device. For example, the first head posture information may be obtained from a tracking device that tracks the posture of the VR device worn by the user, such as position information and/or direction information of the user's head or the VR device; the tracking device may be part of the head-mounted VR device, such as an IMU, an accelerometer, a camera or another tracking device used to track the posture of the head-mounted VR device or the user's head. Then, the rotation direction or position of the user's head is predicted from the acquired first head posture information, the left-eye image after the second rendering process is asynchronously time-warped according to the predicted rotation direction or position, and the warped left-eye image is rendered to an image buffer, such as a single buffer (Single buffer), from which it is sent to the display for display. For example, head 6DoF quaternion data is obtained, a predicted rotation matrix is computed from the head 6DoF quaternion data, the prediction matrix is used to process the left-eye image after the second rendering process to obtain the warped (rotated or adjusted) left-eye image, and the warped left-eye image is then rendered to the image buffer, which pushes the left-eye image to the display through hardware.
In this embodiment, time-warping (e.g., rotating or adjusting) the left-eye image to correct for head movement that occurs after (or while) the left-eye image is rendered reduces the perceived latency, thereby reducing the user's dizziness.
Further, referring to FIG. 4, a third embodiment of the image processing method of the present application is proposed.
The difference between the third embodiment and the second embodiment of the image processing method is that the step of performing asynchronous time warping on the right-eye image after the second rendering process, so as to output and display the asynchronously time-warped right-eye image, includes:
Step S51: acquiring second head posture information of the user;
Step S52: performing asynchronous time warping on the right-eye image after the second rendering process according to the second head posture information.
When performing asynchronous time warping on the right-eye image after the second rendering process, second head posture information of the user is acquired. The second head posture information may include information indicating (or identifying) the head posture, and may include at least one of position information of the user's head or the head-mounted VR device and direction information of the user's head or the head-mounted device. For example, the second head posture information may be obtained from a tracking device that tracks the posture of the VR device worn by the user, such as position information and/or direction information of the user's head or the VR device; the tracking device may be part of the head-mounted VR device, such as an IMU, an accelerometer, a camera or another tracking device used to track the posture of the head-mounted VR device or the user's head. Then, the rotation direction or position of the user's head is predicted from the acquired second head posture information, the right-eye image after the second rendering process is asynchronously time-warped according to the predicted rotation direction or position, and the warped right-eye image is rendered to an image buffer, such as a single buffer (Single buffer), from which it is sent to the display for display. For example, head 6DoF quaternion data is obtained, a predicted rotation matrix is computed from the head 6DoF quaternion data, the prediction matrix is used to process the right-eye image after the second rendering process to obtain the warped (rotated or adjusted) right-eye image, and the warped right-eye image is then rendered to the image buffer, which pushes the right-eye image to the display through hardware.
In this embodiment, time-warping (e.g., rotating or adjusting) the right-eye image to correct for head movement that occurs after (or while) the right-eye image is rendered reduces the perceived latency, thereby reducing the user's dizziness.
In addition, the present application also provides an image processing device; the device includes a memory, a processor, and an image processing program stored in the memory and running on the processor. The device performs a first rendering process on the left-eye image and the right-eye image in a first time period; performs a second rendering process on the left-eye image obtained by the first rendering process in a second time period; performs a second rendering process on the right-eye image obtained by the first rendering process in a third time period; performs asynchronous time warping on the left-eye image after the second rendering process in a fourth time period; and performs asynchronous time warping on the right-eye image after the second rendering process in a fifth time period. By performing the first rendering process on the left-eye and right-eye images and starting the asynchronous time warping of the left-eye image and of the right-eye image at certain times, the embodiments of the present application can reduce the time delay in the rendering process and thus reduce the user's dizziness.
Further, performing the first rendering process on the left-eye and right-eye images shortens the rendering time of the right-eye image, so the asynchronous time warping of the left-eye image can be started later; this avoids rendering waste and lets more real-time frames reach the display, reducing the user's dizziness.
It should be noted that the above image processing device may be a hardware module (for example a dedicated processor or an integrated processor), a software function module, or a complete device (for example a VR device, an AR device or an MR device).
In addition, the present application also provides a computer-readable storage medium on which an image processing program is stored; when the image processing program is executed by a processor, the steps of the above image processing method are implemented.
The embodiments in this specification are described in a parallel or progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts the embodiments may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in the embodiment, its description is relatively brief; for relevant details, refer to the description of the method.
Those of ordinary skill in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of function. Whether these functions are executed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may implement the described functions differently for each particular application, but such implementations should not be considered beyond the scope of the present application.
It should also be noted that, in this document, relational terms such as first and second are only used to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a list of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes the element.

Claims (13)

  1. An image processing method, characterized in that the method comprises:
    performing a first rendering process on a left-eye image and a right-eye image in a first time period;
    performing a second rendering process on the left-eye image obtained by the first rendering process in a second time period;
    performing a second rendering process on the right-eye image obtained by the first rendering process in a third time period;
    performing asynchronous time warping on the left-eye image after the second rendering process in a fourth time period;
    performing asynchronous time warping on the right-eye image after the second rendering process in a fifth time period; wherein the first time period, the second time period and the third time period constitute a first rendering cycle, and at least part of the fourth time period overlaps at least part of the third time period.
  2. The image processing method according to claim 1, characterized in that the sum of the first time period and the second time period is greater than half of the first rendering cycle.
  3. The image processing method according to claim 1, characterized in that the step of performing asynchronous time warping on the left-eye image after the second rendering process comprises:
    acquiring first head posture information of a user;
    performing asynchronous time warping on the left-eye image after the second rendering process according to the first head posture information.
  4. The image processing method according to claim 1, characterized in that the step of performing the first rendering process on the left-eye image and the right-eye image comprises:
    acquiring image data from a first image buffer;
    acquiring the left-eye image and the right-eye image from the image data according to preset conditions;
    performing the first rendering process on the left-eye image and the right-eye image.
  5. The image processing method according to claim 1, characterized in that the step of performing asynchronous time warping on the right-eye image after the second rendering process comprises:
    acquiring second head posture information of the user;
    performing asynchronous time warping on the right-eye image after the second rendering process according to the second head posture information.
  6. The image processing method according to claim 1, characterized in that the step of performing the second rendering process on the left-eye image obtained by the first rendering process comprises:
    acquiring first preset pose information corresponding to the left-eye image obtained by the first rendering process;
    performing the second rendering process on the left-eye image obtained by the first rendering process according to the first preset pose information.
  7. The image processing method according to claim 1, characterized in that the step of performing the second rendering process on the right-eye image obtained by the first rendering process comprises:
    acquiring second preset pose information corresponding to the right-eye image obtained by the first rendering process;
    performing the second rendering process on the right-eye image obtained by the first rendering process according to the second preset pose information.
  8. The image processing method according to any one of claims 1 to 7, characterized in that before the asynchronous time warping is performed on the left-eye image after the second rendering process, the method further comprises:
    rendering the left-eye image after the second rendering process to a second image buffer.
  9. The image processing method according to any one of claims 1 to 7, characterized in that before the asynchronous time warping is performed on the right-eye image after the second rendering process, the method further comprises:
    rendering the right-eye image after the second rendering process to a third image buffer.
  10. The image processing method according to claim 1, characterized in that the method further comprises:
    sending the asynchronously time-warped left-eye image and the asynchronously time-warped right-eye image respectively to a display component for display.
  11. The image processing method according to claim 1, characterized in that the head posture information comprises at least one of the following:
    position information indicating the user's head or a VR device worn by the user;
    direction information indicating the user's head or the VR device worn by the user.
  12. An image processing device, characterized in that the device comprises a memory, a processor, and an image processing program stored in the memory and running on the processor; when the processor executes the image processing program, the steps of the method according to any one of claims 1 to 11 are implemented.
  13. A computer-readable storage medium, characterized in that an image processing program is stored on the computer-readable storage medium; when the image processing program is executed by a processor, the steps of the method according to any one of claims 1 to 11 are implemented.
PCT/CN2021/129157 2021-07-22 2021-11-06 Image processing method, device and computer-readable storage medium WO2023000547A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/474,285 US20240020913A1 (en) 2021-07-22 2023-09-26 Image processing method, image processing device and computer readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110833918.4A 2021-07-22 2021-07-22 Image processing method, device and computer-readable storage medium
CN202110833918.4 2021-07-22

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/474,285 Continuation US20240020913A1 (en) 2021-07-22 2023-09-26 Image processing method, image processing device and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2023000547A1 true WO2023000547A1 (zh) 2023-01-26

Family

ID=78249405

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/129157 WO2023000547A1 (zh) 2021-07-22 2021-11-06 图像处理方法、装置和计算机可读存储介质

Country Status (3)

Country Link
US (1) US20240020913A1 (zh)
CN (1) CN113596569B (zh)
WO (1) WO2023000547A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113596569B (zh) 2021-07-22 2023-03-24 歌尔科技有限公司 Image processing method, device and computer-readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105892683A * 2016-04-29 2016-08-24 上海乐相科技有限公司 Display method and target device
CN106683034A * 2016-12-29 2017-05-17 上海拆名晃信息科技有限公司 Asynchronous time warp calculation method for virtual reality
US20170243324A1 * 2016-02-22 2017-08-24 Google Inc. Separate time-warping for a scene and an object for display of virtual reality content
CN108921951A * 2018-07-02 2018-11-30 京东方科技集团股份有限公司 Virtual reality image display method and apparatus, and virtual reality device
CN109863538A * 2016-08-26 2019-06-07 奇跃公司 Continuous time warp and binocular time warp systems and methods for virtual and augmented reality display
CN109920040A * 2019-03-01 2019-06-21 京东方科技集团股份有限公司 Display scene processing method and apparatus, and storage medium
CN112230776A * 2020-10-29 2021-01-15 北京京东方光电科技有限公司 Virtual reality display method, apparatus and storage medium
CN113596569A * 2021-07-22 2021-11-02 歌尔光学科技有限公司 Image processing method, device and computer-readable storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106658170A * 2016-12-20 2017-05-10 福州瑞芯微电子股份有限公司 Method and apparatus for reducing virtual reality latency
CN106971368A * 2017-01-18 2017-07-21 上海拆名晃信息科技有限公司 Synchronous time warp calculation method for virtual reality
US11335071B2 * 2018-08-02 2022-05-17 Sony Interactive Entertainment Inc. Image generation apparatus and image generation method for augmented reality images based on object interaction
CN109242944B * 2018-09-28 2023-08-11 京东方科技集团股份有限公司 Display method and apparatus
CN109819232B * 2019-02-19 2021-03-26 京东方科技集团股份有限公司 Image processing method, image processing apparatus, and display apparatus
CN110488977B * 2019-08-21 2021-10-08 京东方科技集团股份有限公司 Virtual reality display method, apparatus, system and storage medium
CN111586391B * 2020-05-07 2022-07-08 中国联合网络通信集团有限公司 Image processing method, apparatus and system

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170243324A1 * 2016-02-22 2017-08-24 Google Inc. Separate time-warping for a scene and an object for display of virtual reality content
CN105892683A * 2016-04-29 2016-08-24 上海乐相科技有限公司 Display method and target device
CN109863538A * 2016-08-26 2019-06-07 奇跃公司 Continuous time warp and binocular time warp systems and methods for virtual and augmented reality display
CN106683034A * 2016-12-29 2017-05-17 上海拆名晃信息科技有限公司 Asynchronous time warp calculation method for virtual reality
CN108921951A * 2018-07-02 2018-11-30 京东方科技集团股份有限公司 Virtual reality image display method and apparatus, and virtual reality device
CN109920040A * 2019-03-01 2019-06-21 京东方科技集团股份有限公司 Display scene processing method and apparatus, and storage medium
CN112230776A * 2020-10-29 2021-01-15 北京京东方光电科技有限公司 Virtual reality display method, apparatus and storage medium
CN113596569A * 2021-07-22 2021-11-02 歌尔光学科技有限公司 Image processing method, device and computer-readable storage medium

Also Published As

Publication number Publication date
US20240020913A1 (en) 2024-01-18
CN113596569A (zh) 2021-11-02
CN113596569B (zh) 2023-03-24

Similar Documents

Publication Publication Date Title
CN112020858B Asynchronous time and space warping with determination of a region of interest
TWI659391B Display-synchronized image warping
US11719933B2 (en) Hand-locked rendering of virtual objects in artificial reality
WO2017185622A1 Display method and target device
US11170577B2 (en) Generating and modifying representations of objects in an augmented-reality or virtual-reality scene
US20230039100A1 (en) Multi-layer reprojection techniques for augmented reality
JP7101269B2 Pose correction
US20210368152A1 (en) Information processing apparatus, information processing method, and program
US11847552B2 (en) Pipeline with processor including neural network for rendering artificial reality
US20130113784A1 (en) Maintenance of Three Dimensional Stereoscopic Effect Through Compensation for Parallax Setting
US20240020913A1 (en) Image processing method, image processing device and computer readable storage medium
JP2024502772A Generating composite images
TWI792535B Graphics processing method and related eye-tracking system
TW202320022A Compositor layer extrapolation
Nguyen Low-latency mixed reality headset
JP2024502273A Temporal foveated rendering
TWI715474B Method for dynamically adjusting lens configuration, head-mounted display, and computer device
US11836872B1 (en) Method and device for masked late-stage shift
TW202316239A Frame extrapolation with application-generated motion vectors and depth
CN115878548A Low-latency augmented reality architecture for camera-enabled devices

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21950785

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE