US10777014B2 - Method and apparatus for real-time virtual reality acceleration - Google Patents


Info

Publication number
US10777014B2
Authority
US
United States
Prior art keywords
image, output, pixel, real, coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US16/346,125
Other languages
English (en)
Other versions
US20190279427A1 (en)
Inventor
Yupu Tang
Jun Zhang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Allwinner Technology Co Ltd
Original Assignee
Allwinner Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Allwinner Technology Co Ltd filed Critical Allwinner Technology Co Ltd
Assigned to ALLWINNER TECHNOLOGY CO., LTD. reassignment ALLWINNER TECHNOLOGY CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TANG, Yupu, ZHANG, JUN
Publication of US20190279427A1 publication Critical patent/US20190279427A1/en
Application granted granted Critical
Publication of US10777014B2 publication Critical patent/US10777014B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 3/18
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4007 Interpolation-based scaling, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/001 Image restoration
    • G06T 5/005 Retouching; Inpainting; Scratch removal
    • G06T 5/77
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/28 Indexing scheme for image data processing or generation, in general involving image processing hardware

Definitions

  • the present invention relates to the field of virtual reality, and specifically, to a real-time virtual reality (VR) acceleration method and apparatus.
  • VR virtual reality
  • Lens distortion in a VR head-mounted display maximizes coverage of a person's visual range, presenting a larger field of view and enhancing the sense of immersion.
  • When an image on a display screen is magnified through a lens, the image is distorted. To cancel the distortion, the image must be pre-stretched and warped before display, so that an undistorted image is projected onto the retina.
  • This technology is referred to as anti-distortion.
  • A rainbow forms when white light passes through a prism, because different colors of light have different refractive indexes.
  • The same phenomenon occurs at the edge of a VR lens.
  • A practice similar to anti-distortion is used: according to the principle of optical path reversibility, anti-dispersion is applied before the image enters the lens, so that the image seen through the lens is normal. This technology is referred to as anti-dispersion.
  • Asynchronous time warping (ATW) is a technology for generating an intermediate frame.
  • By generating the intermediate frame, ATW effectively reduces jitter of the video image.
  • When the head moves too quickly, scene rendering is delayed.
  • Time warping warps an image, based on the head orientation, before the image is sent to the display, resolving the delay problem.
  • ATW technology is widely applied to VR products, effectively overcoming image jitter and delay and thereby reducing the sense of dizziness.
  • The prior art mainly has the following defects: anti-distortion, anti-dispersion, and ATW are performed as separate steps on the GPU or in software; their calculation consumes GPU and system load; ATW requires GPU hardware that supports a suitable preemption granularity, as well as an operating system and driver that support GPU preemption; during implementation, GPU processing runs in a memory-to-memory mode, consuming considerable bandwidth and increasing power consumption; and the GPU works in an offline mode, which further increases processing delay and worsens the VR experience.
  • A technical solution used by the present invention to resolve the technical problems is to provide a real-time VR acceleration method, including: step 1, obtaining an input image; step 2, partitioning an output image buffer into M rows and N columns of rectangular grid blocks, and outputting vertex coordinates of the rectangular grid blocks; step 3, calculating, according to an algorithm model integrating anti-distortion, anti-dispersion, and ATW, vertex coordinates of input image grid blocks that correspond to the vertex coordinates of the output image buffer grid blocks; step 4, calculating two-dimensional mapping coefficients of each pair of grid blocks in the output image buffer and the input image according to the vertex coordinates of the output image buffer grid blocks and the corresponding vertex coordinates of the input image grid blocks; step 5, calculating, according to the two-dimensional mapping coefficients, coordinates of an input image pixel corresponding to an output image pixel; step 6, selecting pixel values of at least four pixels adjacent to the coordinates of the input image pixel corresponding to the output image pixel, to calculate a pixel value of the output image pixel; and step 7, outputting an image obtained after the anti-distortion, the anti-dispersion, and the ATW.
  • the input image in the step 1 is a video image obtained after spherical transformation.
  • The values of M and N in the step 2 are each a power of 2.
  • the step 4 includes: calculating two-dimensional mapping coefficients of R, G, and B components of the pair of grid blocks in the output image buffer and the input image according to the vertex coordinates of the output image buffer grid blocks and the corresponding vertex coordinates of the input image grid blocks.
  • the step 5 includes: respectively calculating, according to the two-dimensional mapping coefficients of the R, G, and B components, coordinates of R, G, and B components of the input image pixel that correspond to R, G, and B components of the output image pixel.
  • The step 5 of calculating, according to the two-dimensional mapping coefficients, the coordinates of the pixel in the input image that corresponds to the pixel in the output image buffer is completed by a hardware circuit.
  • The step 6 of selecting pixel values of at least four pixels adjacent to the coordinates of the input image pixel corresponding to the output image pixel, to calculate a pixel value of the output image pixel, is also completed by a hardware circuit.
  • the step 6 includes: selecting the pixel values of the at least four pixels adjacent to the coordinates of the input image pixel corresponding to the output image pixel, and performing interpolation calculation by using a bilinear interpolation algorithm or a bicubic interpolation algorithm, to calculate the pixel value of the output image pixel.
  • The image output in step 7, obtained after the anti-distortion, the anti-dispersion, and the ATW, is directly displayed after image synthesis.
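Taken together, steps 1 through 7 can be sketched end to end. The sketch below is illustrative only: `map_vertices` stands in for the integrated anti-distortion/anti-dispersion/ATW model (which the patent does not specify), per-pixel source coordinates use a simple linear stand-in for the two-dimensional mapping coefficients, and nearest-neighbour sampling replaces the at-least-four-neighbour interpolation of step 6.

```python
def vr_accelerate(input_img, out_w, out_h, m_rows, n_cols, map_vertices):
    """Illustrative end-to-end sketch of the method's steps.

    Partition the output buffer into M x N blocks (step 2), map each
    block's corner vertices into the input image via `map_vertices`
    (stand-in for steps 3-4), then fill each output pixel by computing
    its source coordinate and sampling the input image (steps 5-7).
    Nearest-neighbour sampling keeps the sketch short; the patent
    interpolates from at least four neighbours."""
    in_h, in_w = len(input_img), len(input_img[0])
    bw, bh = out_w // n_cols, out_h // m_rows   # block width/height
    out = [[0] * out_w for _ in range(out_h)]
    for r in range(m_rows):
        for c in range(n_cols):
            # Map the block's top-left and bottom-right corners.
            x0, y0 = map_vertices((c * bw, r * bh))
            x1, y1 = map_vertices(((c + 1) * bw, (r + 1) * bh))
            for j in range(r * bh, (r + 1) * bh):
                for i in range(c * bw, (c + 1) * bw):
                    # Linear interpolation of the corner mapping
                    # (stand-in for the two-dimensional mapping coefficients).
                    u = x0 + (x1 - x0) * (i - c * bw) / bw
                    v = y0 + (y1 - y0) * (j - r * bh) / bh
                    ui, vi = int(round(u)), int(round(v))
                    # Out-of-range source coordinates yield pixel value 0.
                    if 0 <= ui < in_w and 0 <= vi < in_h:
                        out[j][i] = input_img[vi][ui]
    return out
```

With an identity `map_vertices`, the output reproduces the input; a mapping that pushes all coordinates out of range yields an all-zero output, matching the validity rule described later for the coordinate decider.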
  • the present invention further provides a real-time VR acceleration apparatus, including an input image buffering module, an output image buffer partitioning module, a mapping coefficient calculation module, and an image calculation module.
  • the input image buffering module receives and stores an input image.
  • the output image buffer partitioning module partitions an output image buffer into M rows and N columns of rectangular grid blocks, and outputs vertex coordinates of all the grid blocks.
  • the mapping coefficient calculation module calculates, according to an algorithm model integrating anti-distortion, anti-dispersion, and ATW, vertex coordinates of input image grid blocks that correspond to the vertex coordinates of the output image buffer grid blocks, and calculates two-dimensional mapping coefficients of each pair of grid blocks in the output image buffer and the input image according to the vertex coordinates of the output image buffer grid blocks and the corresponding vertex coordinates of the input image grid blocks.
  • the image calculation module calculates, according to the two-dimensional mapping coefficients, coordinates of an input image pixel corresponding to an output image pixel, and selects pixel values of at least four pixels adjacent to the coordinates of the input image pixel corresponding to the output image pixel, to calculate a pixel value of the output image pixel, to output an image obtained after the anti-distortion, the anti-dispersion, and the ATW.
  • the input image is a video image obtained after spherical transformation.
  • the values of M and N are a power of 2.
  • the mapping coefficient calculation module includes a vertex coordinate calculation unit and a mapping coefficient calculation unit.
  • the vertex coordinate calculation unit calculates, according to the algorithm model integrating the anti-distortion, the anti-dispersion, and the ATW, the vertex coordinates of the input image grid blocks that correspond to the vertex coordinates of the output image buffer grid blocks.
  • the mapping coefficient calculation unit calculates the two-dimensional mapping coefficients of the pair of grid blocks in the output image buffer and the input image according to the vertex coordinates of the output image buffer grid blocks and the corresponding vertex coordinates of the input image grid blocks.
  • the mapping coefficient calculation unit includes an R component mapping coefficient calculation unit, a G component mapping coefficient calculation unit, and a B component mapping coefficient calculation unit.
  • The calculating, by the image calculation module according to the two-dimensional mapping coefficients, of coordinates of an input image pixel corresponding to an output image pixel, and the selecting of pixel values of at least four pixels adjacent to those coordinates to calculate a pixel value of the output image pixel, are completed by a hardware circuit.
  • the image calculation module includes a coordinate calculator, a coordinate decider, a value selector, and an image interpolation generator.
  • the coordinate calculator calculates, according to the two-dimensional mapping coefficients, the coordinates of the input image pixel corresponding to the output image pixel.
  • The coordinate decider determines whether the coordinates obtained by the coordinate calculator are valid: when the calculated coordinates are out of the range of the coordinates of the input image, the coordinate decider determines that the coordinates of the input image pixel corresponding to the output image pixel are invalid, and the pixel value of the output image pixel is set to 0.
  • the value selector selects, from the input image, pixel values of at least four pixels adjacent to valid coordinates determined by the coordinate decider.
  • the image interpolation generator performs interpolation calculation according to the pixel values selected by the value selector, to calculate the pixel value of the output image pixel, and generates the image obtained after the anti-distortion, the anti-dispersion, and the ATW.
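The decider-then-sample flow described above (out-of-range coordinates produce an output pixel value of 0, valid coordinates feed the value selector) can be sketched minimally; nearest-neighbour sampling is a simplification here, since the patent's value selector feeds at least four neighbours to the interpolation generator.

```python
def decide_and_sample(img, x, y):
    """Coordinate-decider sketch: if the calculated input coordinate
    (x, y) falls outside the input image, the output pixel value is 0;
    otherwise the pixel is sampled (nearest neighbour for brevity)."""
    h, w = len(img), len(img[0])
    if not (0 <= x < w and 0 <= y < h):
        return 0                      # invalid coordinate -> pixel value 0
    return img[int(y)][int(x)]
```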
  • the coordinate calculator includes an R component coordinate calculation unit, a G component coordinate calculation unit, and a B component coordinate calculation unit.
  • When any of the R, G, and B component coordinates obtained by the coordinate calculator is out of the range of the input image, the coordinate decider determines that the corresponding coordinates of the input image are invalid, and the pixel value of the output image pixel is 0.
  • the image interpolation generator includes a bilinear interpolation calculator or a bicubic interpolation calculator, configured to perform the interpolation calculation according to the pixel values selected by the value selector, to calculate the pixel value of the output image pixel, to generate the image obtained after the anti-distortion, the anti-dispersion, and the ATW.
  • the image that is output by the image calculation module and that is obtained after the anti-distortion, the anti-dispersion, and the ATW is directly displayed after image synthesis.
  • The output image buffer is partitioned into grids and the vertices are obtained; the three functions of anti-distortion, anti-dispersion, and ATW are then integrated into a single piece of software.
  • Vertex coordinates of corresponding input image grids are obtained by software vertex rendering.
  • a set of coefficients integrating the functions of anti-distortion, anti-dispersion, and ATW are calculated by using two-dimensional mapping.
  • an output image is obtained by input image interpolation.
  • The method effectively exploits the GPU's vertex rendering advantage, avoiding a large amount of GPU interpolation calculation, and further effectively resolves the problems of GPU load and resource preemption.
  • The interpolation algorithm can be adaptively adjusted to improve image definition, making it superior to GPU rendering.
  • the present invention further provides a real-time VR acceleration apparatus.
  • Two-dimensional mapping is performed between an input image and an output image by using block mapping coefficients (M×N). Mapping of the target image to the original image is completed in the form of a hardware lookup table. Finally, the final image is generated by interpolation. The rendering speed is increased, the display delay is decreased, the GPU is freed, and the system load is reduced.
  • The algorithm model is flexibly integrated into the vertex coordinates in software form, to adapt to a plurality of types of anti-distortion and ATW models, without any hardware modification.
  • Online image acceleration processing requires only data reading and no data writing, which reduces bandwidth and power consumption; it also reduces GPU and system load, further decreases delay, improves the experience, and avoids dizziness.
  • FIG. 1 is a block diagram of a process of a real-time VR acceleration method 100 according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a two-dimensional mapping principle of an output image buffer grid block and an input image grid block according to an embodiment of the present invention
  • FIG. 3 is a block diagram of a real-time VR acceleration apparatus 300 according to an embodiment of the present invention.
  • FIG. 4 is a block diagram of a mapping coefficient calculation module 400 according to an embodiment of the present invention.
  • FIG. 5 is a circuit block diagram of an image calculation module 500 according to an embodiment of the present invention.
  • FIG. 6 is a block diagram of a real-time VR acceleration apparatus 600 according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a calculation process for left and right eye buffers according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of block partitioning and value selecting of an input image according to an embodiment of the present invention.
  • FIG. 1 is a block diagram of a process of a real-time VR acceleration method 100 according to an embodiment of the present invention.
  • an input image is first obtained.
  • the input image may be an image generated after a decoded video image is subjected to a spherical transformation, and is stored in a corresponding buffer.
  • the image may be stored in an eye buffer.
  • an output image buffer is partitioned into M rows and N columns of rectangular grid blocks, and vertex coordinates of the rectangular grid blocks are output.
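As a rough sketch of this partitioning step, the following assumes the buffer dimensions divide evenly by N and M and a row-major traversal order, neither of which the text specifies:

```python
def partition_output_buffer(width, height, m_rows, n_cols):
    """Partition an output buffer into M x N rectangular grid blocks and
    return the four vertex coordinates (i, j) of each block.

    Even divisibility and row-major ordering are assumptions; the text
    only specifies an M-row by N-column rectangular partition."""
    bw = width // n_cols   # block width in pixels
    bh = height // m_rows  # block height in pixels
    blocks = []
    for r in range(m_rows):
        for c in range(n_cols):
            i0, j0 = c * bw, r * bh        # top-left vertex
            blocks.append(((i0, j0),
                           (i0 + bw, j0),          # top-right
                           (i0, j0 + bh),          # bottom-left
                           (i0 + bw, j0 + bh)))    # bottom-right
    return blocks
```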
  • In step 105, vertex coordinates of input image grid blocks that correspond to the vertex coordinates of the output image buffer grid blocks are calculated according to an algorithm model integrating anti-distortion, anti-dispersion, and ATW.
  • Algorithm models integrating anti-distortion, anti-dispersion, and ATW in the prior art may be used to calculate the vertex coordinates.
  • In step 107, two-dimensional mapping coefficients of each pair of grid blocks in the output image buffer and the input image are calculated according to the vertex coordinates of the output image buffer grid blocks and the corresponding vertex coordinates of the input image grid blocks.
  • mapping coefficients of R, G, and B components of each pair of grid blocks may be calculated respectively.
  • In step 109, coordinates of an input image pixel corresponding to an output image pixel are calculated according to the two-dimensional mapping coefficients.
  • In step 111, pixel values of at least four pixels adjacent to the coordinates of the input image pixel corresponding to the output image pixel are selected, to calculate a pixel value of the output image pixel.
  • The coordinates of the pixel in the input image that corresponds to the pixel in the output image buffer may be calculated in row-scanning order over the grid blocks in the output image buffer according to the two-dimensional mapping coefficients, and the pixel value of the output image may likewise be calculated in row-scanning order.
  • In step 113, an image obtained after anti-distortion, anti-dispersion, and ATW is output.
  • The output image buffer is partitioned into grids. Then, through software calculation, vertex coordinates of input and output image blocks are interpolated by two-dimensional mapping, to implement the functions of anti-distortion, anti-dispersion, and ATW. In addition, these three functions are integrated together by a single set of coefficients and completed in one pass. VR acceleration is realized, and the problems of GPU load and resource preemption are effectively resolved. In addition, the algorithm models are flexibly integrated into the vertex coordinate calculation through software, to adapt to various anti-distortion, anti-dispersion, and ATW models.
  • the input image may be a video image obtained after spherical transformation.
  • The side lengths of the rectangular grid blocks of the partitioned output image buffer, determined by M and N, may each be a power of 2, thereby reducing the amount of calculation and the calculation time.
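One reason power-of-2 block dimensions reduce calculation is that locating the grid block of an output pixel then needs only bit shifts instead of division. The function and parameter names below are illustrative, not from the patent:

```python
def block_index(i, j, log2_bw, log2_bh, n_cols):
    """Map output pixel (i, j) to its grid-block index using shifts only.
    log2_bw / log2_bh are the base-2 logarithms of the block width and
    height (both assumed to be powers of 2)."""
    col = i >> log2_bw          # equivalent to i // block_width
    row = j >> log2_bh          # equivalent to j // block_height
    return row * n_cols + col   # row-major block index
```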
  • a calculation method is as follows:
  • the two-dimensional mapping coefficients of R, G, and B components of each pair of grid blocks in the output image buffer and the input image may be calculated according to the vertex coordinates of the output image buffer grid blocks and the corresponding vertex coordinates of the input image grid blocks.
  • the two-dimensional mapping coefficients of each pair of grid blocks in the output image buffer and the input image may be directly calculated.
  • The two-dimensional mapping coefficients are stored in row-scanning order. Each block stores its coefficients by pixel component.
  • The coefficients are a(a(R), a(G), a(B)), b(b(R), b(G), b(B)), c(c(R), c(G), c(B)), d(d(R), d(G), d(B)), e(e(R), e(G), e(B)), f(f(R), f(G), f(B)), g(g(R), g(G), g(B)), h(h(R), h(G), h(B)), and λ(λ(R), λ(G), λ(B)).
  • The storing sequence is: first the 8×32 bits of the R component, then the 8×32 bits of G, the 8×32 bits of B, and the 8×32 bits of the constant term λ component, followed by the next block's data.
  • Here, 8×32 bits refers to eight 32-bit values (a, b, c, d, e, f, g, and h).
  • The λ component has only three 32-bit values (R, G, and B) but is padded to 8×32 bits for hardware design alignment.
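The storage layout described above can be sketched with a small packing routine. The 32-bit float encoding, little-endian byte order, and zero-valued padding words are assumptions, since the text fixes only the word width, the component order, and the 8-word alignment:

```python
import struct

def pack_block_coeffs(r, g, b, lam):
    """Pack one grid block's mapping coefficients in the described order:
    8 x 32-bit words for R (a..h), then G, then B, then the constant
    term lambda (3 words for R, G, B) zero-padded to 8 words so every
    component occupies the same 8-word slot."""
    assert len(r) == len(g) == len(b) == 8 and len(lam) == 3
    words = list(r) + list(g) + list(b) + list(lam) + [0.0] * 5
    return struct.pack("<32f", *words)   # 32 words x 4 bytes = 128 bytes
```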
  • coordinates of R, G, and B components of the pixel in the input image that correspond to R, G, and B components of the pixel in the output image buffer are respectively calculated according to the two-dimensional mapping coefficients of the R, G, and B components.
  • the step 109 of calculating, according to the two-dimensional mapping coefficients, the coordinates of an input image pixel corresponding to an output image pixel may be completed by hardware circuit.
  • the step 111 of selecting pixel values of at least four pixels adjacent to the coordinates of the input image pixel corresponding to the output image pixel, to calculate a pixel value of the output image pixel may also be completed by hardware circuit.
  • The algorithm models of anti-distortion, ATW, and anti-dispersion are integrated into the input and output buffer grid coordinates, and the image interpolation effect is achieved through a hardware lookup table, thereby implementing processes such as anti-distortion, ATW, and anti-dispersion online, reducing data reading and writing, decreasing bandwidth and power consumption, decreasing delay, and reducing dizziness.
  • a bilinear interpolation algorithm or a bicubic interpolation algorithm may be used, so that the pixel values of the at least four pixels adjacent to the coordinates of the input image pixel corresponding to the output image pixel are selected, and the pixel value of the corresponding pixel of the output image is obtained through interpolation calculation.
  • VR acceleration is realized by partitioning grid vertices and performing interpolation calculation using two-dimensional mapping.
  • Interpolation algorithm can be adaptively adjusted, to improve image definition, and to be superior to GPU rendering.
  • values of different quantities of pixels may be selected in the input image according to different interpolation algorithms to perform interpolation calculation, to obtain the pixel value of the output image pixel.
  • pixel values of four pixels adjacent to the coordinates of the input image pixel corresponding to the output image pixel may be selected to perform interpolation calculation.
  • pixel values of sixteen pixels adjacent to the coordinates of the input image pixel corresponding to the output image pixel may be selected to perform interpolation calculation.
  • an output image obtained after the anti-distortion, the anti-dispersion, and the ATW may be directly displayed on a display device after image synthesis.
  • FIG. 2 is a schematic diagram of two-dimensional mapping principle of an output image buffer grid block and an input image grid block according to an embodiment of the present invention. The following describes the process of the real-time VR acceleration method 100 shown in FIG. 1 in detail with reference to FIG. 2 .
  • In step 101, a video image after spherical transformation is received and stored in an input buffer, such as an eye buffer.
  • In step 103, an output buffer is partitioned into M×N blocks, and vertex coordinates of each grid block are output.
  • For example, the vertex coordinates of a grid block Vd in the output buffer are (i0, j0), (i1, j1), (i2, j2), and (i3, j3).
  • the vertex coordinates of the input image grid blocks that correspond to the vertex coordinates of the output image buffer grid blocks are calculated according to an algorithm model integrating anti-distortion, anti-dispersion, and ATW.
  • For example, the four vertex coordinates of the grid block Vs in the input image corresponding to Vd in the left part of FIG. 2 are (x0, y0), (x1, y1), (x2, y2), and (x3, y3).
  • In step 107, the two-dimensional mapping coefficients, namely a, b, c, d, e, f, g, and h, of each pair of grid blocks in the output buffer and the input image are calculated according to the vertex coordinates of the output image buffer grid blocks and the corresponding vertex coordinates of the input image grid blocks.
  • the following calculation formula may be used to calculate the two-dimensional mapping coefficients of the grid block Vd in the output buffer and the input image grid block Vs:
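The formula itself does not survive in this text. As an illustrative stand-in only, one common eight-coefficient two-dimensional mapping between a pair of quadrilateral blocks is the bilinear form x = a·i + b·j + c·i·j + d, y = e·i + f·j + g·i·j + h, whose coefficients can be solved from the four vertex pairs (the use of NumPy and this particular form are assumptions, not the patent's own formula):

```python
import numpy as np

def solve_bilinear_map(dst_pts, src_pts):
    """Solve for the coefficients of the bilinear mapping
        x = a*i + b*j + c*i*j + d
        y = e*i + f*j + g*i*j + h
    from four output-block vertices dst_pts = [(i, j), ...] and the four
    corresponding input-image vertices src_pts = [(x, y), ...]."""
    A = np.array([[i, j, i * j, 1.0] for i, j in dst_pts])
    xs = np.array([x for x, _ in src_pts])
    ys = np.array([y for _, y in src_pts])
    a, b, c, d = np.linalg.solve(A, xs)   # x-coordinate coefficients
    e, f, g, h = np.linalg.solve(A, ys)   # y-coordinate coefficients
    return (a, b, c, d), (e, f, g, h)

def apply_map(coeffs, i, j):
    """Evaluate the mapping at an output pixel (i, j)."""
    (a, b, c, d), (e, f, g, h) = coeffs
    return a * i + b * j + c * i * j + d, e * i + f * j + g * i * j + h
```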
  • In step 109, the coordinates of the pixel in the input image that corresponds to each pixel in the output image buffer are calculated according to the two-dimensional mapping coefficients a, b, c, d, e, f, g, and h.
  • In step 111, the pixel values of at least four pixels adjacent to the pixel (x, y) in the input image grid block Vs obtained in step 109 are selected to perform interpolation calculation, to obtain the pixel value of the pixel (i, j) in the output buffer.
  • A bilinear interpolation algorithm can be used.
  • four pixels adjacent to the pixel (x, y) may be selected.
  • Bilinear interpolation calculation is performed on the pixels (x, y), (x+1, y), (x, y+1), and (x+1, y+1), to obtain the pixel value of the pixel (i, j) in the output buffer.
  • pixel values of sixteen pixels adjacent to the pixel (x, y) may also be selected to perform bicubic interpolation calculation, to obtain the pixel value of the pixel (i, j) in the output buffer.
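The four-neighbour value selection and bilinear interpolation of step 111 can be sketched as follows; edge clamping at the image border is an implementation assumption:

```python
def bilinear_sample(img, x, y):
    """Sample image `img` (a list of rows of scalar pixel values) at the
    fractional coordinate (x, y), interpolating the four neighbours
    (x0, y0), (x0+1, y0), (x0, y0+1), (x0+1, y0+1)."""
    h, w = len(img), len(img[0])
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)  # clamp at the border
    fx, fy = x - x0, y - y0                          # fractional parts
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

A bicubic variant would select the sixteen surrounding pixels instead and weight them with a cubic kernel, at a higher hardware cost.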
  • After the pixel values of all pixels in the output buffer are calculated in step 111, the image obtained after anti-distortion, anti-dispersion, and ATW is output in step 113; it is then synthesized and superimposed with other image layers, and finally sent directly, through a display channel, to a screen for display.
  • The input image and output image buffers are partitioned into grids through software calculation, to obtain the vertex coordinates of the input and output image blocks. The corresponding grid mapping coefficients are then obtained by two-dimensional mapping. Finally, hardware interpolation is used to implement the functions of anti-distortion, ATW, and anti-dispersion.
  • The three functions are integrated together by a single set of coefficients and completed in one pass. VR acceleration is realized.
  • The method effectively resolves the problems of GPU load and resource preemption.
  • the algorithm models can be flexibly integrated into vertex coordinates in a software form, to adapt to various anti-distortion, anti-dispersion, and ATW models.
  • FIG. 3 is a block diagram of a real-time VR acceleration apparatus 300 according to an embodiment of the present invention.
  • the real-time VR acceleration apparatus 300 includes an input image buffering module 301 , an output image buffer partitioning module 303 , a mapping coefficient calculation module 305 , and an image calculation module 307 .
  • the input image buffering module 301 receives and stores an input image.
  • the input image may be a video image obtained after spherical transformation, and may be stored in an eye buffer.
  • the output image buffer partitioning module 303 partitions an output image buffer into M rows and N columns of rectangular grid blocks, and outputs vertex coordinates of all the grid blocks.
  • the mapping coefficient calculation module 305 calculates, according to an algorithm model integrating anti-distortion, anti-dispersion, and ATW, vertex coordinates in the input image that correspond to the vertex coordinates of the output image buffer grid blocks, and calculates the two-dimensional mapping coefficients of each pair of grid blocks in the output image buffer and the input image according to the vertex coordinates of the output image buffer grid blocks and the corresponding vertex coordinates of the input image grid blocks.
  • the algorithm model integrating anti-distortion, anti-dispersion, and ATW may be one of various algorithm models in the prior art.
  • the image calculation module 307 calculates, according to the two-dimensional mapping coefficients, coordinates of an input image pixel corresponding to an output image pixel, and selects pixel values of at least four pixels adjacent to the coordinates of the input image pixel corresponding to the output image pixel, to calculate a pixel value of the output image pixel.
  • The coordinates of the pixel in the input image buffer that corresponds to the pixel in the output image may be calculated according to the two-dimensional mapping coefficients in row-scanning order over the output image, and the pixel value of the output image is calculated by interpolation using the obtained input image pixels.
  • The real-time VR acceleration apparatus 300 partitions the input image and the output image buffer into grids. Vertex coordinates of the input and output image blocks are then interpolated by two-dimensional mapping, to implement the functions of anti-distortion, anti-dispersion, and ATW. In addition, the three functions are integrated together by a single set of coefficients and completed in one pass. The problems of GPU load consumption and resource preemption are effectively resolved.
  • the image calculation module can be implemented by hardware on a display channel.
  • The image interpolation effect is achieved through a hardware lookup table, thereby reducing data reading and writing, further decreasing bandwidth, and satisfying VR experience requirements such as reduced power consumption, lower delay, and no dizziness.
  • the input image is a video image obtained after spherical transformation.
  • values of the M rows and the N columns of the grid blocks of the partitioned output image buffer may be a power of 2, thereby reducing the amount of calculation and calculation time.
  • the mapping coefficient calculation module 400 may include a vertex coordinate calculation unit 401 and a mapping coefficient calculation unit 403 .
  • the vertex coordinate calculation unit 401 calculates, according to the algorithm model integrating anti-distortion, anti-dispersion, and ATW, the vertex coordinates in the input image that correspond to the vertex coordinates of the output image buffer grid blocks.
  • the mapping coefficient calculation unit 403 calculates the two-dimensional mapping coefficients of each pair of grid blocks in the output image buffer and the input image according to the vertex coordinates of the output image buffer grid blocks and the corresponding vertex coordinates of the input image grid blocks.
  • VR acceleration is realized by partitioning grid vertices, and performing interpolation using two-dimensional mapping.
  • the mapping coefficient calculation unit may include an R component mapping coefficient calculation unit, a G component mapping coefficient calculation unit, and a B component mapping coefficient calculation unit. Two-dimensional mapping coefficients of R, G, and B components are calculated by using the input and output grid coordinates. When only an algorithm of anti-distortion and ATW is calculated, the two-dimensional mapping coefficients of each pair of grid blocks in the output image buffer and the input image may be directly calculated. In a specific implementation, to facilitate implementation, the two-dimensional mapping coefficients calculated by the mapping coefficient calculation module 307 are stored in row scanning mode.
  • Each grid block stores its coefficients by pixel component: a(a(R), a(G), a(B)), b(b(R), b(G), b(B)), c(c(R), c(G), c(B)), d(d(R), d(G), d(B)), e(e(R), e(G), e(B)), f(f(R), f(G), f(B)), g(g(R), g(G), g(B)), h(h(R), h(G), h(B)), and the constant term (R, G, and B components).
  • the storing sequence is: first the 8×32 bits of the R component, then the 8×32 bits of G, the 8×32 bits of B, and the 8×32 bits of the constant-term component, followed by the next grid block's data.
  • "8×32 bits" refers to the eight 32-bit data values a, b, c, d, e, f, g, and h.
  • the constant-term component has only three 32-bit values (R, G, and B) and is padded to 8×32 bits for hardware design alignment.
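The row-scan layout above can be sketched in software as follows. The use of little-endian float32 words is an assumption for illustration; the actual hardware may well use a fixed-point format.

```python
import struct

def pack_block(coeffs_r, coeffs_g, coeffs_b, const_rgb):
    """Pack one grid block's coefficients in the described storage order:
    8x32 bits of R, then G, then B, then the constant term padded from
    three 32-bit values (R, G, B) to eight words for alignment."""
    assert len(coeffs_r) == len(coeffs_g) == len(coeffs_b) == 8
    assert len(const_rgb) == 3
    padded_const = list(const_rgb) + [0.0] * 5      # pad 3 words -> 8 words
    words = list(coeffs_r) + list(coeffs_g) + list(coeffs_b) + padded_const
    return struct.pack("<32f", *words)              # four 8x32-bit groups
```

Each block therefore occupies a fixed 128 bytes, so the hardware can index block k's record at a constant offset of 128·k in row-scan order without any per-block length field.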
  • calculating, by the image calculation module according to the two-dimensional mapping coefficients, the coordinates of the input image pixel corresponding to an output image pixel, and selecting pixel values of at least four pixels adjacent to those coordinates to calculate the pixel value of the output image pixel, may be completed by a hardware circuit. Processes such as anti-distortion, anti-dispersion, and ATW are implemented online: only data reading is needed, and no data writing, thereby decreasing bandwidth and power consumption, decreasing delay, and reducing dizziness.
  • the image calculation module may include a coordinate calculator 501 , a coordinate decider 503 , a value selector 505 , and an image interpolation generator 507 .
  • the coordinate calculator 501 calculates, according to the two-dimensional mapping coefficients, the coordinates of the input image pixel corresponding to the output image pixel.
  • the coordinate decider 503 determines whether the coordinates of the input image pixel corresponding to the output image pixel that are calculated by the coordinate calculator 501 are reasonable and valid.
  • if the coordinate decider determines that the coordinates of the input image pixel corresponding to the output image pixel are invalid, the pixel value of the output image pixel is set to 0.
  • the value selector 505 selects, from the input image, pixel values of at least four pixels adjacent to the valid coordinates determined by the coordinate decider 503 .
  • the image interpolation generator 507 performs interpolation calculation according to the pixel values selected by the value selector 505 , to calculate the pixel value of the output image pixel, and finally generates an image obtained after anti-distortion, anti-dispersion, and ATW.
  • the coordinate calculator may include an R component coordinate calculation unit, a G component coordinate calculation unit, and a B component coordinate calculation unit, and respectively calculates, according to the two-dimensional mapping coefficients of the R, G, and B components that are calculated by the mapping coefficient calculation module, coordinates of R, G, and B components of the pixel in the input image that correspond to R, G, and B components of the pixel in the output image buffer.
  • if the coordinate decider determines that the coordinates in the input image corresponding to the R, G, and B components obtained by the coordinate calculator are invalid, the pixel value of the output image pixel is set to 0.
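A minimal sketch of the decider's behavior as described: a per-component coordinate that falls outside the input image yields 0 for that component. The nearest-pixel fetch here is a stand-in for the interpolation step, and the function name is an assumption.

```python
def decide_and_fetch(coords_rgb, image, width, height):
    """coords_rgb: one (x, y) pair per R/G/B component; image[y][x] -> value.

    If a component's coordinates fall outside the input image, the
    coordinate decider marks them invalid and that component of the
    output pixel is 0; otherwise the value is fetched from the input.
    """
    out = []
    for (x, y) in coords_rgb:
        if 0 <= x < width and 0 <= y < height:
            out.append(image[int(y)][int(x)])   # valid: fetch from input
        else:
            out.append(0)                       # invalid coordinates -> 0
    return tuple(out)
```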
  • the image interpolation generator may include a bilinear interpolation calculator or a bicubic interpolation calculator.
  • the bilinear interpolation calculator or the bicubic interpolation calculator performs the interpolation calculation according to the pixel values selected by the value selector, to calculate the pixel value of the output image pixel, finally generating an image obtained after anti-distortion, anti-dispersion, and ATW.
  • VR acceleration is realized by partitioning grid vertices, and performing interpolation calculation using two-dimensional mapping.
  • the interpolation algorithm can be adaptively adjusted to improve image definition, which is superior to GPU rendering.
  • pixel values of different numbers of pixels may be selected from the input image according to the algorithm of the interpolation calculator, to perform interpolation calculation and obtain the pixel value of the output image pixel.
  • when a bilinear interpolation calculator is used, pixel values of the four pixels adjacent to the coordinates of the input image pixel corresponding to the output image pixel may be selected to perform interpolation calculation.
  • when a bicubic interpolation calculator is used, pixel values of the sixteen pixels adjacent to the coordinates of the input image pixel corresponding to the output image pixel may be selected to perform interpolation calculation.
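The bilinear case can be sketched in software as follows: the four pixels surrounding the fractional input coordinate are weighted by the fractional offsets. The coordinates are assumed to have already passed the coordinate decider, so no bounds check is repeated here.

```python
import math

def bilinear_sample(image, u, v):
    """Bilinearly interpolate image[y][x] at fractional coords (u, v).

    Uses the 4 pixels adjacent to (u, v); (u, v) is assumed valid and
    strictly inside the image so that all four neighbors exist.
    """
    x0, y0 = math.floor(u), math.floor(v)
    fx, fy = u - x0, v - y0                 # fractional offsets in [0, 1)
    p00 = image[y0][x0]
    p10 = image[y0][x0 + 1]
    p01 = image[y0 + 1][x0]
    p11 = image[y0 + 1][x0 + 1]
    top = p00 * (1 - fx) + p10 * fx         # blend along x on the top row
    bot = p01 * (1 - fx) + p11 * fx         # blend along x on the bottom row
    return top * (1 - fy) + bot * fy        # blend the two rows along y
```

The bicubic variant follows the same shape but gathers a 4×4 neighborhood and replaces the linear weights with cubic kernel weights, which is why it needs sixteen pixels instead of four.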
  • the image after anti-distortion, anti-dispersion, and ATW that is output by the image calculation module may be directly displayed on a display device after image synthesis.
  • Processes such as anti-distortion, anti-dispersion, and ATW are implemented online, thereby reducing data reading and writing, decreasing bandwidth and power consumption, decreasing delay, and reducing dizziness.
  • FIG. 6 is a block diagram of a real-time VR acceleration apparatus 600 according to an embodiment of the present invention.
  • the real-time VR acceleration apparatus 600 includes an output image buffer partitioning module 601 , a mapping coefficient calculation module 603 , and an image calculation module 605 .
  • the output image buffer partitioning module 601 partitions an output image buffer into M rows and N columns of rectangular grid blocks, and outputs the vertex coordinates of all the grid blocks.
  • the mapping coefficient calculation module 603 includes an R component mapping coefficient calculation unit, a G component mapping coefficient calculation unit, and a B component mapping coefficient calculation unit that respectively calculate, according to an algorithm model integrating anti-distortion, anti-dispersion, and ATW, R, G, and B vertex coordinates in the input image that correspond to the R, G, and B vertex coordinates of the output image buffer grid blocks, and calculate the two-dimensional mapping coefficients of each pair of grid blocks in the output image buffer and the input image according to the R, G, and B vertex coordinates of the output image buffer grid blocks and the corresponding R, G, and B vertex coordinates of the input image grid blocks.
  • the image calculation module 605 includes a value selector, a coordinate calculator, a coordinate decider, and an image interpolation generator.
  • the coordinate calculator includes an R component coordinate calculation unit, a G component coordinate calculation unit, and a B component coordinate calculation unit that respectively calculate, according to the two-dimensional mapping coefficients of the R, G, and B components that are calculated by the mapping coefficient calculation module, coordinates of R, G, and B components of the pixel in the input image that correspond to R, G, and B components of the pixel in the output image buffer.
  • if the coordinate decider determines that the coordinates in the input image corresponding to the R, G, and B components obtained by the coordinate calculator are invalid, the pixel value of the output image pixel is set to 0.
  • the value selector selects pixel values from the input image for interpolation calculation according to the valid coordinates determined by the coordinate decider.
  • the image interpolation generator performs interpolation calculation according to data selected by the value selector, to generate the output image pixel values, and finally generates an image obtained after anti-distortion, anti-dispersion, and ATW.
  • the output image buffer partitioning module 601 and the mapping coefficient calculation module 603 may be implemented by software.
  • the image calculation module 605 may be completed directly by hardware. Processes such as anti-distortion, anti-dispersion, and ATW are implemented online. Only data reading is needed, and data writing is not needed, thereby decreasing bandwidth and power consumption, decreasing delay, and reducing dizziness.
  • the manner in FIG. 7 may be used. There are left-eye and right-eye buffers. Calculation of the two-dimensional mapping coefficients for the right-eye buffer needs to be completed only by the 7/16-frame point. In this way, the coefficient calculation workload may be properly reduced without introducing image delay.
  • the screen displays its output by scanning line by line.
  • a row of output data corresponds to a curved region of the input, shown as the curve in the left figure in FIG. 8.
  • the hardware accesses data when selecting values and needs to open up a line buffer for storage. At a minimum, a line buffer covering the rectangular region from the top transverse line to the bottom of the curve shown in the left figure in FIG. 8 must be opened up. Assuming the input image width is 200 and the curve depth is 40, the line buffer needs a size of 200×40 to load the curve.
  • value selection may be performed on the input image according to the partitioning manner shown in FIG. 8 .
  • the input curve is partitioned into blocks that cover it, to reduce the number of line buffer rows that must be opened up. It is assumed that each block in the left figure in FIG. 8 is 32 pixels wide.
  • the height of a block may be determined based on the algorithm models of the various anti-distortion, anti-dispersion, and ATW operations and on the collected value ranges; 32 is only a reference value.
  • a smaller block reduces the number of rows in the line buffer, thereby reducing line buffer cost.
  • the total width of the line buffer is determined by the maximum of the input image resolution width and the output image width. For example, a width of 1280 pixels is sufficient to support dual-screen 1080P.
  • the total height of the line buffer may need to be only 16 rows; that is, the line buffer only needs to be opened up as 200×16. If the blocks are partitioned smaller, for example with a block width of 16, the total height may drop to 8, so that the opened line buffer becomes 200×8. In this way, the line buffer cost of the hardware design is greatly reduced, and the design area is reduced.
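The saving from blocking can be illustrated with a toy calculation: each block only needs to buffer the vertical span of the curve within its own width, not the curve's full depth. The synthetic V-shaped curve below (200 columns wide, depth 40, mirroring the example figures above) is an assumption for illustration.

```python
def buffer_rows_needed(curve_y, block_w):
    """Max vertical extent of the input curve within any block_w-wide block.

    curve_y[x] is the input row that output column x maps to. Partitioning
    the curve into blocks means each block buffers only its own vertical
    span instead of the whole curve depth.
    """
    rows = 0
    for start in range(0, len(curve_y), block_w):
        seg = curve_y[start:start + block_w]
        rows = max(rows, max(seg) - min(seg) + 1)
    return rows

# Toy curve over a 200-pixel-wide input with overall depth 40:
curve = [abs(x - 100) * 40 // 100 for x in range(200)]
full_depth = max(curve) - min(curve) + 1    # whole-curve depth: 41 rows
```

With this curve, 32-pixel blocks need far fewer buffered rows than the full depth, and halving the block width roughly halves the requirement again, matching the 200×40 → 200×16 → 200×8 progression described above.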
US16/346,125 2017-05-05 2017-09-11 Method and apparatus for real-time virtual reality acceleration Active US10777014B2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201710310655.2A CN107220925B (zh) 2017-05-05 2017-05-05 Real-time virtual reality acceleration method and apparatus
CN201710310655 2017-05-05
CN201710310655.2 2017-05-05
PCT/CN2017/101248 WO2018201652A1 (zh) 2017-05-05 2017-09-11 Real-time virtual reality acceleration method and apparatus

Publications (2)

Publication Number Publication Date
US20190279427A1 US20190279427A1 (en) 2019-09-12
US10777014B2 true US10777014B2 (en) 2020-09-15

Family

ID=59943828

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/346,125 Active US10777014B2 (en) 2017-05-05 2017-09-11 Method and apparatus for real-time virtual reality acceleration

Country Status (4)

Country Link
US (1) US10777014B2 (zh)
EP (1) EP3522104A4 (zh)
CN (1) CN107220925B (zh)
WO (1) WO2018201652A1 (zh)


Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171662B (zh) * 2017-12-18 2020-08-07 Allwinner Technology Co., Ltd. Method for reading compressed image data and anti-distortion method including the same
CN108109181B (zh) * 2017-12-18 2021-06-01 Allwinner Technology Co., Ltd. Circuit for reading compressed image data and anti-distortion circuit including the same
CN109961402A (zh) * 2017-12-22 2019-07-02 ThunderSoft Co., Ltd. Eyepiece anti-distortion method and apparatus for a display device
CN108282648B (zh) * 2018-02-05 2020-11-03 Beijing Sohu New Media Information Technology Co., Ltd. VR rendering method and apparatus, wearable device, and readable storage medium
US10768879B2 (en) 2018-03-06 2020-09-08 Beijing Boe Optoelectronics Technology Co., Ltd. Image processing method and apparatus, virtual reality apparatus, and computer-program product
CN108287678B (zh) * 2018-03-06 2020-12-29 BOE Technology Group Co., Ltd. Virtual-reality-based image processing method, apparatus, device, and medium
CN110335200A (zh) * 2018-03-29 2019-10-15 Tencent Technology (Shenzhen) Co., Ltd. Virtual reality anti-distortion method and apparatus, and related device
CN108648254B (zh) * 2018-04-27 2022-05-17 ThunderSoft Co., Ltd. Image rendering method and apparatus
CN111199518B (zh) * 2018-11-16 2024-03-26 Sanechips Technology Co., Ltd. Image presentation method, apparatus, and device for a VR device, and computer storage medium
CN109754380B (zh) 2019-01-02 2021-02-02 BOE Technology Group Co., Ltd. Image processing method, image processing apparatus, and display apparatus
CN109819232B (zh) 2019-02-19 2021-03-26 BOE Technology Group Co., Ltd. Image processing method, image processing apparatus, and display apparatus
KR20220093985A (ko) 2020-12-28 2022-07-05 Samsung Electronics Co., Ltd. Image delay compensation method and device using the same
CN113115018A (zh) * 2021-03-09 2021-07-13 Juhaokan Technology Co., Ltd. Adaptive image display method and display device
CN112905831B (zh) * 2021-04-02 2023-03-24 Shanghai International Automobile City (Group) Co., Ltd. Method, system, and electronic device for obtaining coordinates of an object in a virtual scene
CN113256484B (zh) * 2021-05-17 2023-12-05 Bigo Technology Pte. Ltd. Method and apparatus for stylizing an image
CN113740035A (zh) * 2021-08-26 2021-12-03 Goertek Optical Technology Co., Ltd. Projection quality detection method, apparatus, device, and readable storage medium

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030052878A1 (en) * 2001-06-29 2003-03-20 Samsung Electronics Co., Ltd. Hierarchical image-based representation of still and animated three-dimensional object, method and apparatus for using this representation for the object rendering
US20040207733A1 (en) * 2003-01-30 2004-10-21 Sony Corporation Image processing method, image processing apparatus and image pickup apparatus and display apparatus suitable for the application of image processing method
US6897883B1 (en) * 2000-05-23 2005-05-24 Sharp Kabushiki Kaisha Omniazimuthal visual system
US20090067749A1 (en) * 2006-01-13 2009-03-12 Fraunhofer-Gesellschaft Zur Forderung Der Angewandten Forschung E.V. Calibration Method and Calibration System for Projection Apparatus
US20130107055A1 (en) * 2010-07-14 2013-05-02 Mitsubishi Electric Corporation Image synthesis device
US20150143237A1 (en) * 2013-11-21 2015-05-21 Canon Kabushiki Kaisha Image processing apparatus, image processing method and non-transitory computer-readable medium
US20160189350A1 (en) * 2014-12-30 2016-06-30 Texas Instruments Incorporated System and method for remapping of image to correct optical distortions
US20160314564A1 (en) * 2015-04-22 2016-10-27 Esight Corp. Methods and devices for optical aberration correction
US20160321787A1 (en) * 2015-04-30 2016-11-03 Beijing Pico Technology Co., Ltd. Head-Mounted Display and Video Data Processing Method Thereof
US20160379335A1 (en) * 2015-06-23 2016-12-29 Samsung Electronics Co., Ltd. Graphics pipeline method and apparatus
US20170192734A1 (en) * 2015-12-31 2017-07-06 Le Holdings (Beijing) Co., Ltd. Multi-interface unified displaying system and method based on virtual reality
US20170316607A1 (en) * 2016-04-28 2017-11-02 Verizon Patent And Licensing Inc. Methods and Systems for Minimizing Pixel Data Transmission in a Network-Based Virtual Reality Media Delivery Configuration
US9824498B2 (en) * 2014-12-30 2017-11-21 Sony Interactive Entertainment Inc. Scanning display system in head-mounted display for virtual reality
US20180088890A1 (en) * 2016-09-23 2018-03-29 Daniel Pohl Outside-facing display for head-mounted displays
US20180365797A1 (en) * 2016-03-28 2018-12-20 Tencent Technology (Shenzhen) Company Limited Image display method, custom method of shaped cambered curtain, and head-mounted display device
US20190289327A1 (en) * 2018-03-13 2019-09-19 Mediatek Inc. Method and Apparatus of Loop Filtering for VR360 Videos

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7496242B2 (en) * 2004-12-16 2009-02-24 Agfa Inc. System and method for image transformation
JP4013989B2 (ja) * 2006-02-20 2007-11-28 Matsushita Electric Works, Ltd. Video signal processing device and virtual reality generation system
CN102236790B (zh) * 2011-05-23 2013-06-05 Hangzhou H3C Technologies Co., Ltd. Image processing method and device
US9280810B2 (en) * 2012-07-03 2016-03-08 Fotonation Limited Method and system for correcting a distorted input image
US8928730B2 (en) * 2012-07-03 2015-01-06 DigitalOptics Corporation Europe Limited Method and system for correcting a distorted input image
CN104822059B (zh) * 2015-04-23 2017-07-28 Southeast University GPU-acceleration-based virtual viewpoint synthesis method
CN106572342A (zh) * 2016-11-10 2017-04-19 Beijing QIYI Century Science and Technology Co., Ltd. Image anti-distortion and anti-dispersion processing method and apparatus, and virtual reality device


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210405356A1 (en) * 2019-05-24 2021-12-30 Beijing Boe Optoelectronics Technology Co., Ltd. Method and apparatus for controlling virtual reality display device
US11513346B2 (en) * 2019-05-24 2022-11-29 Beijing Boe Optoelectronics Technology Co., Ltd. Method and apparatus for controlling virtual reality display device

Also Published As

Publication number Publication date
EP3522104A4 (en) 2020-01-22
CN107220925B (zh) 2018-10-30
EP3522104A1 (en) 2019-08-07
CN107220925A (zh) 2017-09-29
WO2018201652A1 (zh) 2018-11-08
US20190279427A1 (en) 2019-09-12

Similar Documents

Publication Publication Date Title
US10777014B2 (en) Method and apparatus for real-time virtual reality acceleration
KR101785027B1 (ko) Display device capable of screen distortion correction and screen distortion correction method using the same
CN106919360B (zh) Head posture compensation method and apparatus
JP6866297B2 (ja) Electronic display stabilization of a head-mounted display
CN110460831B (zh) Display method, apparatus, and device, and computer-readable storage medium
JP3052681B2 (ja) Three-dimensional moving image generation device
CN113170136A (zh) Motion smoothing of reprojected frames
TWI594018B (zh) Wide-viewing-angle naked-eye stereoscopic image display method, naked-eye stereoscopic display device, and operation method thereof
US10553014B2 (en) Image generating method, device and computer executable non-volatile storage medium
CN108536405A (zh) Data processing system
WO2020170454A1 (ja) Image generation device, head-mounted display, and image generation method
JP7150134B2 (ja) Head-mounted display and image display method
US10692420B2 (en) Data processing systems
US10775629B2 (en) System and method for correcting a rolling display effect
US11218691B1 (en) Upsampling content for head-mounted displays
CN112887646B (зh) Image processing method and apparatus, extended reality system, computer device, and medium
KR20200063614 (ко) Display unit for AR/VR/MR systems
JP3231029B2 (ja) Rendering method and apparatus, game device, and computer-readable recording medium storing a program for rendering a three-dimensional model
US20180247392A1 (en) Information Processing System, Information Processing Apparatus, Output Apparatus, Program, and Recording Medium
JPWO2020170456A1 (ja) Display device and image display method
US11941408B2 (en) Encoding stereo splash screen in static image
US10930185B2 (en) Information processing system, information processing apparatus, output apparatus, program, and recording medium
KR20220128406 (ко) Multi-view style transfer system and method
JP2005165283A (ja) Map display device
US5912657A (en) Image display device

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: ALLWINNER TECHNOLOGY CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TANG, YUPU;ZHANG, JUN;REEL/FRAME:049056/0469

Effective date: 20190424

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4