CN113093903A - Image display method and display equipment - Google Patents


Info

Publication number
CN113093903A
Authority
CN
China
Prior art keywords
fragment
coordinate
color value
image
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110291728.4A
Other languages
Chinese (zh)
Other versions
CN113093903B (en)
Inventor
任子健
刘帅
杨彬
史东平
吴连朋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Hisense Media Network Technology Co Ltd
Original Assignee
Qingdao Hisense Media Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Media Network Technology Co Ltd filed Critical Qingdao Hisense Media Network Technology Co Ltd
Priority to CN202110291728.4A priority Critical patent/CN113093903B/en
Publication of CN113093903A publication Critical patent/CN113093903A/en
Application granted granted Critical
Publication of CN113093903B publication Critical patent/CN113093903B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F 3/011 — Input arrangements for interaction between user and computer; arrangements for interaction with the human body, e.g. for user immersion in virtual reality (G — Physics; G06 — Computing; G06F — Electric digital data processing)
    • G06T 15/005 — 3D [Three Dimensional] image rendering; general purpose rendering architectures (G06T — Image data processing or generation, in general)
    • G06T 15/503 — 3D [Three Dimensional] image rendering; lighting effects; blending, e.g. for anti-aliasing
    • G06T 19/006 — Manipulating 3D models or images for computer graphics; mixed reality

Abstract

The present application relates to the field of VR technologies, and in particular, to an image display method and a display device. For jaggies at the edge of an image, a first color value is acquired from a first image and a second color value from a second image according to the UV coordinate of each fragment; a third color value is determined from the first and second color values of each fragment; transparency blending is performed on the third color value; and the grid to be rendered is rendered according to the blended third color value of each fragment, which reduces the jaggies at the image edge. For jaggies inside the image, a first color value of the corresponding fragment is acquired from the first image according to the UV coordinate of each fragment, the first color value of each fragment in a transition region is adjusted, and the grid to be rendered is rendered according to the first color value of each fragment, where the transition region is the region of the video picture area of the first image that adjoins the non-video picture area.

Description

Image display method and display equipment
Technical Field
The present disclosure relates to the field of Virtual Reality (VR) technologies, and in particular, to an image display method and a display device.
Background
VR technology is currently a research hotspot in the field of computer applications. It is a human-computer interaction technology that integrates multiple advanced technologies, such as real-time three-dimensional computer graphics, human-computer interaction, sensing, multimedia, wide-angle stereoscopic display, and networking, and can vividly simulate people's perceptual behaviors in a natural environment. Through a stereoscopic helmet, data gloves, a three-dimensional mouse, and the like, a user may be immersed in a computer-created virtual environment and engage in various interactive activities with objects in that environment using natural human behavior and perception.
In three-dimensional rendering, all elements of a scene are displayed as images on a two-dimensional screen. The display screen is pixelated, so lines that appear smooth in an image are actually jagged on the display screen, as shown in FIG. 1. However, since the human eye has limited resolving power, a line is perceived as smooth once the jaggies are small enough. In a VR scene, the VR device magnifies the image on the display screen through its lenses, magnifying the jaggies as well, so images on the display screen in a VR scene show obvious jaggies, especially at the edges of a played User Interface (UI) picture or video frame. When the camera moves, the positions of the jaggies change, causing an abnormal image-flicker phenomenon. Solving the problem of image edge flicker in VR scenes is therefore an urgent issue.
Disclosure of Invention
The application provides an image display method and a display device, which are used to reduce the abnormal image-flicker phenomenon caused by jaggies in VR scene images.
In a first aspect, the present application provides a display device comprising: display, memory, and graphics processor:
the display, connected with the graphics processor, configured to display images in a Virtual Reality (VR) scene;
the memory, coupled to the graphics processor, configured to store computer instructions;
the graphics processor configured to perform the following operations in accordance with the computer instructions:
acquiring a first color value of a corresponding fragment from a first image and a second color value of the corresponding fragment from a second image according to the UV coordinate of each fragment, wherein the UV coordinate of each fragment is obtained by interpolation according to the UV coordinate of each grid vertex in a created grid to be rendered, the second image sequentially comprises a full transparent area, a semi-transparent area and an opaque area with preset sizes from outside to inside, and transparency components in the color values of pixels in different areas have different values;
determining a third color value of a corresponding fragment according to the respective first color value and second color value of each fragment, and performing transparency mixing on the third color value;
and rendering the grid to be rendered according to the third color value obtained by mixing the fragments to obtain a rendered image and displaying the rendered image.
Optionally, the graphics processor determines a third color value of the corresponding fragment according to the respective first color value and second color value of each fragment, and is specifically configured to:
taking values of a red R component, a green G component and a blue B component in a first color value of a first fragment as values of the R component, the G component and the B component in a third color value of the first fragment respectively, and taking a value of a transparency A component in a second color value of the first fragment as a value of the A component in the third color value of the first fragment, wherein the first fragment is any one of the fragments.
The display device acquires, according to the UV coordinate of each fragment obtained by interpolating the UV coordinates of the grid vertices in the created grid to be rendered, a first color value of the corresponding fragment from a first image and a second color value of the corresponding fragment from a second image, and determines a third color value of the corresponding fragment from the respective first and second color values of each fragment. Because the second image comprises, from outside to inside, a fully transparent area, a semi-transparent area, and an opaque area of preset sizes, with the transparency component in the color values of pixels taking different values in different areas, transparency blending of the third color value based on its transparency component smooths the third color value of the corresponding fragment using the transparency blending function of the rendering pipeline. The rendered image is then obtained and displayed, reducing the jaggies at the edge of the first image and thereby the abnormal image-flicker phenomenon caused by jaggies in VR scene images.
In a second aspect, the present application provides a display device comprising: display, memory, and graphics processor:
the display, connected with the graphics processor, configured to display images in a Virtual Reality (VR) scene;
the memory, coupled to the graphics processor, configured to store computer instructions;
the graphics processor configured to perform the following operations in accordance with the computer instructions:
acquiring a first color value of a corresponding fragment from a first image according to a UV coordinate of each fragment, wherein the UV coordinate of each fragment is obtained by interpolation according to the UV coordinate of each grid vertex in a created grid to be rendered;
adjusting a first color value of each fragment in a preset transition region, wherein the transition region is a region adjacent to a non-video picture region in a video picture region of the first image;
and rendering the grid to be rendered according to the first color value of each fragment to obtain and display a rendered image.
Optionally, the graphics processor adjusts the first color value of each fragment in the preset transition region, and is specifically configured to:
acquiring UV coordinates of boundary pixel points adjacent to a non-video picture area in a video picture area of the first image;
and adjusting the first color value of the corresponding fragment according to the acquired UV coordinate of the boundary pixel point, the size of the transition region and the UV coordinate of each fragment in the transition region.
Optionally, the graphics processor adjusts the first color value of the corresponding fragment according to the obtained UV coordinate of the boundary pixel point, the size of the transition region, and the UV coordinate of each fragment in the transition region, and is specifically configured to:
if the V coordinate of a second fragment is larger than a first difference value and smaller than the V coordinate of an upper boundary pixel point adjacent to the non-video picture area in the video picture area, adjusting a first color value of the second fragment according to a ratio of a second difference value to the size of the transition area, wherein the first difference value is a difference value between the V coordinate of the upper boundary pixel point and the size of the transition area, and the second difference value is a difference value between the V coordinate of the upper boundary pixel point and the V coordinate of the second fragment; or
If the V coordinate of a second fragment is larger than the V coordinate of a lower boundary pixel point adjacent to the non-video picture area in the video picture area and smaller than the sum of the V coordinate of the lower boundary pixel point and the size of the transition area, adjusting the first color value of the second fragment according to the ratio of the V coordinate of the second fragment to the size of the transition area; or
If the U coordinate of a second fragment is larger than the U coordinate of a left boundary pixel point adjacent to the non-video picture area in the video picture area and smaller than the sum of the U coordinate of the left boundary pixel point and the size of the transition area, adjusting the first color value of the second fragment according to the ratio of the U coordinate of the second fragment to the size of the transition area; or
If the U coordinate of a second fragment is larger than a third difference value and smaller than the U coordinate of a right boundary pixel point adjacent to the non-video picture area in the video picture area, adjusting the first color value of the second fragment according to the ratio of a fourth difference value to the size of the transition area, wherein the third difference value is the difference value between the U coordinate of the right boundary pixel point and the size of the transition area, and the fourth difference value is the difference value between the U coordinate of the right boundary pixel point and the U coordinate of the second fragment;
wherein the second fragment is any one of the fragments in the transition region.
According to the display device, the first color value of the corresponding fragment is acquired from the first image according to the UV coordinate of each fragment, obtained by interpolating the UV coordinates of the grid vertices in the created grid to be rendered; the first color value of each fragment in the preset transition region is then adjusted, and the grid to be rendered is rendered according to the first color value of each fragment, after which the rendered image is obtained and displayed. Because the transition region is the region of the video picture area of the first image that adjoins the non-video picture area, adjusting the first color value of each fragment in the transition region smooths the jaggies at the junction of the video picture area and the non-video picture area, reducing the abnormal image-flicker phenomenon caused by jaggies in VR scene images.
In a third aspect, the present application provides an image display method, comprising:
acquiring a first color value of a corresponding fragment from a first image and a second color value of the corresponding fragment from a second image according to the UV coordinate of each fragment, wherein the UV coordinate of each fragment is obtained by interpolation according to the UV coordinate of each grid vertex in a created grid to be rendered, the second image sequentially comprises a full transparent area, a semi-transparent area and an opaque area with preset sizes from outside to inside, and transparency components in the color values of pixels in different areas have different values;
determining a third color value of a corresponding fragment according to the respective first color value and second color value of each fragment, and performing transparency mixing on the third color value;
and rendering the grid to be rendered according to the third color value of each fragment to obtain and display a rendered image.
In a fourth aspect, the present application provides an image display method, comprising:
acquiring a first color value of a corresponding fragment from a first image according to a UV coordinate of each fragment, wherein the UV coordinate of each fragment is obtained by interpolation according to the UV coordinate of each grid vertex in a created grid to be rendered;
adjusting a first color value of each fragment in a preset transition region, wherein the transition region is a region adjacent to a non-video picture region in a video picture region of the first image;
and rendering the grid to be rendered according to the first color value of each fragment to obtain and display a rendered image.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium, where computer-executable instructions are stored, and the computer-executable instructions are configured to enable a computer to execute an image display method provided in the embodiment of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a diagram illustrating an image jaggy provided by an embodiment of the present application;
fig. 2 schematically illustrates a structure diagram of a VR head mounted display device provided by an embodiment of the present application;
FIG. 3a is a diagram illustrating an image with jagged edges according to an embodiment of the present disclosure;
fig. 3b illustrates an example of a mask image provided by an embodiment of the present application;
FIG. 3c illustrates another mask image provided by embodiments of the present application;
FIG. 3d is a diagram illustrating an effect image for solving the problem of image edge aliasing provided by an embodiment of the present application;
FIG. 4 is a flowchart illustrating an image display method for solving the problem of image edge aliasing according to an embodiment of the present application;
FIG. 5a is a diagram illustrating an image with jaggies inside the image provided by an embodiment of the present application;
fig. 5b is a schematic diagram illustrating a boundary pixel point of an image inner region provided in an embodiment of the present application;
FIG. 5c is a diagram illustrating an effect image for solving the internal aliasing problem provided by an embodiment of the present application;
FIG. 5d illustrates another image with internal jaggies provided by an embodiment of the present application;
FIG. 6 is a flowchart illustrating an image display method for solving intra-image aliasing provided by an embodiment of the present application;
fig. 7 is a block diagram schematically illustrating a display device provided in an embodiment of the present application.
Detailed Description
To make the objects, embodiments and advantages of the present application clearer, the following description of exemplary embodiments of the present application will clearly and completely describe the exemplary embodiments of the present application with reference to the accompanying drawings in the exemplary embodiments of the present application, and it is to be understood that the described exemplary embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
All other embodiments, which can be derived by a person skilled in the art from the exemplary embodiments described herein without inventive step, are intended to be within the scope of the claims appended hereto. In addition, while the disclosure herein has been presented in terms of one or more exemplary examples, it should be appreciated that aspects of the disclosure may be implemented solely as a complete embodiment.
It should be noted that the brief descriptions of the terms in the present application are only for the convenience of understanding the embodiments described below, and are not intended to limit the embodiments of the present application. These terms should be understood in their ordinary and customary meaning unless otherwise indicated.
The terms "first", "second", and the like in the description and claims of this application and in the above-described drawings are used for distinguishing between similar or analogous objects or entities and, unless otherwise indicated, are not necessarily meant to imply a particular order or sequence. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein.
Furthermore, the terms "comprises" and "comprising," as well as any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The term "module" as used herein refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
The embodiment of the application provides an image display method and a display device. Taking a VR head-mounted display device as an example, fig. 2 shows a structure diagram of the VR head-mounted display device provided in the embodiment of the present application. As shown in fig. 2, the VR head-mounted display device includes a lens assembly 201 and a display terminal 202 (corresponding to a display) disposed right in front of the lens assembly 201. The lens assembly 201 is composed of a left display lens 201_1 and a right display lens 201_2 and can magnify the played image. When the user wears the VR display device, human eyes can watch the VR video displayed by the display terminal 202 through the lens assembly 201 and experience the VR effect.
It should be noted that, besides the VR head-mounted display device, the display device in the embodiment of the present application may be any device that displays UI pictures, plays video, and supports interaction in a VR scene, such as a smart phone, a tablet computer, a desktop computer, a notebook computer, or a smart television.
Generally, when an image is enlarged in a VR scene, the jaggies of the image displayed on the display screen are enlarged as well, and the jaggies are most obvious at the edges of the image. When the camera moves, the positions of the jaggies change, causing an abnormal image-flicker phenomenon that degrades the user experience. The conventional approach reduces this flicker by lowering the resolution of the sampled texture image during three-dimensional rendering, but lowering the resolution blurs the image and likewise degrades the user experience.
To solve the above problem, embodiments of the present application provide an image display method and a display device. The method can smooth both the jaggies at the edge of an image and the jaggies at the boundary between the video picture and the non-video picture inside an image, thereby reducing the abnormal image-flicker phenomenon caused by jaggies. For jaggies at the image edge, a mask image is generated in advance; the mask image comprises, from outside to inside, a fully transparent area, a semi-transparent area, and an opaque area of preset sizes, with the transparency component in the color values of pixels taking different values in different areas. During three-dimensional rendering, the transparency blending function is enabled in the pixel shader. According to the UV coordinate of each fragment, obtained by interpolating the UV coordinates of the grid vertices in the created grid to be rendered, a first color value of the corresponding fragment is acquired from the sampled image and a second color value from the mask image; the first and second color values of each fragment are combined into a third color value, which is transparency-blended; the grid to be rendered is then rendered according to the third color value of each fragment, and the rendered image is obtained and displayed. Because the resulting third color values are relatively smooth, the jaggies at the edge of the sampled image are reduced, which in turn reduces the abnormal image-flicker phenomenon caused by jaggies in VR scene images. For jaggies inside the image, the first color value of the corresponding fragment is acquired from the sampled image according to the UV coordinate of each fragment, obtained by interpolating the UV coordinates of the grid vertices in the created grid to be rendered; the first color value of each fragment in a preset transition region is adjusted; and the grid to be rendered is rendered according to the first color value of each fragment, after which the rendered image is obtained and displayed. The transition region is the region of the video picture area of the sampled image that adjoins the non-video picture area, so adjusting the first color value of each fragment in the transition region smooths the jaggies at the junction of the video picture area and the non-video picture area, reducing the abnormal image-flicker phenomenon caused by jaggies in VR scene images.
It should be noted that, in the embodiment of the present application, the sampled image is also referred to as the first image and the mask image as the second image, and a color value includes the values of a red (R) component, a green (G) component, a blue (B) component, and a transparency (A) component.
For the sake of clarity of the description of the embodiments of the present application, the following explains the fragments in the embodiments of the present application.
In a three-dimensional rendering pipeline, geometric vertices are assembled into primitives, including points, line segments, and polygons. After a primitive is rasterized, a sequence of fragments is output. A fragment is not a true pixel but a collection of states used to calculate the final color of each pixel. These states include, but are not limited to, the screen coordinates of the fragment, depth information, and other vertex information output from the geometry stage, such as normals and texture coordinates.
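As a rough illustration (not taken from the patent), this per-fragment state can be pictured as a small structure; the field names below are assumptions for illustration only:

    // Hypothetical GLSL sketch of the per-fragment state described above.
    struct Fragment {
        vec2  screenCoord;   // screen coordinates of the fragment
        float depth;         // depth information
        vec3  normal;        // normal passed down from the geometry stage
        vec2  uv;            // interpolated texture (UV) coordinates
    };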
Embodiments of the present application are described in detail below with reference to the accompanying drawings.
In some embodiments, jaggies are present in the edge region of the first image (the region between the two solid black lines in fig. 3a). When jaggies exist in the edge region of the first image, the final color value of the corresponding fragment may be determined based on a pre-generated mask image (also referred to as the second image).
The second image can be generated in advance according to actual needs. It comprises, from outside to inside, a fully transparent area, a semi-transparent area, and an opaque area; the size of each area can be preset according to the actual situation, and the transparency component in the color values of pixels takes different values in different areas. The value of the transparency component in the color values of pixels in the fully transparent area is smaller than that in the semi-transparent area, which in turn is smaller than that in the opaque area; the lower the value of the transparency component, the higher the transparency. For example, the value of the A component in the color values of pixels in the fully transparent area is 0, indicating complete transparency; in the opaque area it is 1, indicating opacity; and in the semi-transparent area it is greater than 0 and smaller than 1. Optionally, the value of the A component in the color values of pixels in the semi-transparent area may gradually decrease from the inside (near the opaque area) to the outside (near the fully transparent area), meaning the transparency in the semi-transparent area gradually increases from inside to outside.
Fig. 3b is a second image exemplarily shown in the present application. As shown in fig. 3b, the second image is a regular rectangle, and comprises, in order from the outside to the inside, a fully transparent region 301 (filled with a dotted line in fig. 3 b), a translucent region 302 (filled with a solid line in fig. 3 b), and an opaque region 303 (filled with a dotted line in fig. 3 b). The shapes of the semi-transparent area 302 and the non-transparent area 303 are rounded rectangles, and the size and the shape of each area can be set according to actual conditions.
It should be noted that fig. 3b is only an example, and the second image may have a different shape in different scenes, as shown in fig. 3c. In (1) of fig. 3c, the second image is a square and comprises, from outside to inside, a fully transparent region, a semi-transparent region, and an opaque region, where the semi-transparent region and the opaque region are circular; in (2) of fig. 3c, the second image is a square and comprises, from outside to inside, a fully transparent region, a semi-transparent region, and an opaque region, where the semi-transparent region and the opaque region are pentagons. The filling of each region in fig. 3c is the same as in fig. 3b.
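The patent does not say how the second image is produced, but its alpha layout can be sketched procedurally. The following GLSL fragment shader is a hedged example, not part of the patent: it assumes the rounded-rectangle layout of fig. 3b and uses a signed-distance function; all uniform and function names are illustrative.

    precision mediump float;

    varying vec2 vUV;            // UV in [0,1] x [0,1] across the mask image
    uniform vec2  uHalfSize;     // half extents of the opaque rounded rectangle, in UV units
    uniform float uRadius;       // corner radius of the rounded rectangle
    uniform float uBand;         // width of the semi-transparent band

    // Signed distance from point p to a rounded rectangle centered at the origin.
    float roundedRectSDF(vec2 p, vec2 halfSize, float radius) {
        vec2 q = abs(p) - halfSize + vec2(radius);
        return length(max(q, vec2(0.0))) + min(max(q.x, q.y), 0.0) - radius;
    }

    void main() {
        float d = roundedRectSDF(vUV - vec2(0.5), uHalfSize, uRadius);
        // d <= 0: opaque area (A = 1); 0 < d < uBand: semi-transparent band whose
        // A component decreases toward the outside; d >= uBand: fully transparent area (A = 0).
        float alpha = 1.0 - smoothstep(0.0, uBand, d);
        gl_FragColor = vec4(1.0, 1.0, 1.0, alpha);
    }

A mask of a different shape, such as those of fig. 3c, would only change the distance function.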
Based on the second image shown in fig. 3b or fig. 3c, fig. 4 shows a flowchart of an image display method provided by the embodiment of the present application. As shown in fig. 4, the method is executed by a display device to reduce jaggies at the edge of an image and mainly includes the following steps:
s401: and acquiring a first color value of the corresponding fragment from the first image and acquiring a second color value of the corresponding fragment from the second image according to the UV coordinate of each fragment.
In this step, a client-side player is started through a function key of the display device, and a grid to be rendered comprising a plurality of meshes is created in the Graphics Processing Unit (GPU); each mesh is composed of a pair of triangles and contains a plurality of grid vertices. After the vertex shader stage, rasterization is performed and the UV coordinate of each fragment is obtained by interpolating the UV coordinates of the grid vertices; each fragment corresponds to a pixel on the screen to be rendered, and the created grid to be rendered corresponds one-to-one to the first image. For example, for a rectangular screen rendering video in a VR scene, a rectangular grid is generally used as the rendering carrier, so a rectangular grid to be rendered is created in the GPU.
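As a hedged illustration of the interpolation just described (not part of the patent), a minimal GLSL vertex shader shows how per-vertex UV coordinates on the grid to be rendered become per-fragment UV coordinates: the varying below is interpolated across each triangle during rasterization. The attribute and uniform names are assumptions.

    attribute vec3 aPosition;   // grid vertex position
    attribute vec2 aUV;         // grid vertex UV coordinate
    uniform mat4 uMVP;          // model-view-projection matrix
    varying vec2 vUV;           // interpolated by the rasterizer into per-fragment UVs

    void main() {
        vUV = aUV;
        gl_Position = uMVP * vec4(aPosition, 1.0);
    }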
In step S401, taking any one of the fragments (hereinafter referred to as the first fragment) as an example, according to the UV coordinate of the first fragment, a first color value of the first fragment is acquired from the first image and a second color value of the first fragment from the second image. Without affecting the essence of the embodiments of the present application, the color-value sampling manner is not limited and may be, for example, bilinear interpolation sampling or nearest-point sampling.
S402: and determining a third color value of the corresponding fragment according to the respective first color value and second color value of each fragment, and performing transparency mixing on the third color value.
In this step, taking the first fragment as an example, the third color value of the first fragment is determined. Specifically, values of an R component, a G component, and a B component in a first color value of the first fragment are respectively used as values of the R component, the G component, and the B component in a third color value of the first fragment, and a value of a transparency a component in a second color value of the first fragment is used as a value of the a component in the third color value of the first fragment, so as to obtain a third color value of the first fragment.
When executing S402, a transparency blending function is enabled in the pixel shader. Transparency blending achieves the following effects. For pixels whose A component equals the fully transparent value (fully transparent pixels), a fully transparent effect is achieved: their final color value is the color value of the background image in the VR scene. For pixels whose A component is a semi-transparent value (non-fully-transparent pixels), a transparent effect is achieved: their final color value is a new color value produced by blending with the color value of the background image according to the value of the A component. For pixels whose A component equals the opaque value (opaque pixels), no transparency effect is applied and their final color value is kept unchanged. Optionally, the fully transparent value is 0 and the opaque value is 1. Therefore, after the third color value is determined, transparency blending can be performed on it according to the value of its A component. Taking the first fragment as an example: when the value of the A component in the third color value of the first fragment is the fully transparent value, the third color value of the first fragment is replaced with the color value of the background image in the VR scene; when it is the opaque value, the third color value of the first fragment is kept unchanged, i.e., it remains the first color value acquired from the first image; and when it is larger than the fully transparent value and smaller than the opaque value, the third color value of the first fragment is combined with the color value of the background image according to the value of the A component, and the combined color value is taken as the third color value of the first fragment.
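A minimal GLSL fragment shader sketch of S401 and S402 follows, assuming both images are bound as ordinary 2D textures; it is an illustration, not the patent's own code, and the texture and variable names are assumptions.

    precision mediump float;

    varying vec2 vUV;              // per-fragment UV from rasterization
    uniform sampler2D uFirstTex;   // first image (e.g. the video frame)
    uniform sampler2D uMaskTex;    // second image (the pre-generated mask)

    void main() {
        vec4 first = texture2D(uFirstTex, vUV);   // first color value
        vec4 mask  = texture2D(uMaskTex, vUV);    // second color value
        // Third color value: R, G and B from the first image, A from the mask.
        gl_FragColor = vec4(first.rgb, mask.a);
    }

On the application side, the transparency blending described above would typically be enabled with glEnable(GL_BLEND) and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), so that the fixed-function blend stage mixes the third color value with the background according to its A component.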
S403: Rendering the grid to be rendered according to the blended third color value of each fragment, and obtaining and displaying the rendered image.
In this step, the created grid to be rendered is rendered according to the blended third color value of each fragment, and the rendered image is obtained and displayed on the display of the display device. As shown in fig. 3d, because transparency blending is performed, the color value of any fragment whose A component is not the opaque value tends toward the color value of the background image in the scene. Compared with fig. 3a, the color values of the pixels in the edge region of the image are smoothed, reducing the image-flicker phenomenon caused by edge jaggies.
In the above embodiment of the application, for jaggies at the edge of the first image, the first color value of the corresponding fragment is acquired from the first image and the second color value of the corresponding fragment from the second image according to the UV coordinate of each fragment. Because the transparency blending function is enabled, a transparent effect is applied to fragments whose A component is not the opaque value, combining them with the color value of the background image in the VR scene, so the final third color value of the corresponding fragment is smoothed according to the value of the A component in the third color value. The grid to be rendered is then rendered according to the blended third color value of each fragment, and the rendered image is displayed. The jaggies at the edge of the first image are thereby reduced, the abnormal image-flicker phenomenon they cause is reduced, and the rendered image is smoother.
In other embodiments, the first image in some VR scenes contains a non-video picture area (the black area 502 in fig. 5a). In this case, jaggies exist in the region of the video picture area (501 in fig. 5a) of the first image that adjoins the non-video picture area (also referred to as the transition region, 503 in fig. 5a); that is, jaggies exist inside the first image. The size of the transition region can be set according to the actual situation and is measured in the same units as the UV coordinates. When jaggies exist at the boundary between the video picture area and the non-video picture area of the first image, the color values of the fragments in that region can be adjusted based on the size of the transition region set in the first image.
Because the transition region is the region of the video picture area of the first image that adjoins the non-video picture area (503 in fig. 5a), obvious jaggies exist at the boundary between the video picture area and the non-video picture area inside the first image, which may cause an abnormal image-flicker phenomenon. Fig. 6 shows a flowchart of another image display method provided by the embodiment of the present application. As shown in fig. 6, to reduce jaggies inside an image, the process mainly includes the following steps:
s601: and acquiring a first color value of the corresponding fragment from the first image according to the UV coordinate of each fragment.
In this step, a client-side player is started through a function key of the display device, and a mesh to be rendered including a plurality of meshes is created in a Graphics Processing Unit (GPU), where each mesh is composed of a pair of triangles and includes a plurality of mesh vertices. And after the vertex shader, performing rasterization operation and interpolation according to the UV coordinates of each grid vertex to obtain the UV coordinates of each fragment, wherein each fragment corresponds to a pixel point on a screen to be rendered. And the created grid to be rendered corresponds to the first image one by one. For example, taking a rectangular screen for rendering video in a VR scene as an example, the rectangular screen generally uses a rectangular grid as a rendering carrier, so that a rectangular grid to be rendered is created in the GPU.
In S601, a first color value of each fragment is obtained from the first image according to the UV coordinate of the fragment. The color value sampling method includes, but is not limited to, quadratic linear interpolation sampling and nearest point sampling.
S602: Adjusting the first color value of each fragment in the preset transition region.
In this step, the transition region is the region of the video picture area of the first image that adjoins the non-video picture area; its size is measured in the same units as the UV coordinates, can be set according to the actual situation, and is denoted Offset (as shown in fig. 5b). The first color value of each fragment in the transition region is adjusted so as to smooth the jaggies inside the first image.
In a specific implementation, the UV coordinates of the boundary pixels adjacent to the non-video picture area in the video picture area of the first image are acquired first. The pixels in the video picture area of the first image that are adjacent to the non-video picture area (shown by the dotted line in fig. 5a) are called boundary pixels; they include upper boundary pixels (thick dotted line in fig. 5b), lower boundary pixels (thick solid line in fig. 5b), left boundary pixels (thick two-dot chain line in fig. 5b), and right boundary pixels (thick one-dot chain line in fig. 5b). All upper boundary pixels share the same V coordinate and differ only in U, and likewise for the lower boundary pixels, so it suffices to acquire the V coordinate of the upper boundary pixels (denoted V1) and the V coordinate of the lower boundary pixels (denoted V2). Similarly, all left boundary pixels share the same U coordinate and differ only in V, and likewise for the right boundary pixels, so it suffices to acquire the U coordinate of the left boundary pixels (denoted U1) and the U coordinate of the right boundary pixels (denoted U2). The first color value of the corresponding fragment is then adjusted according to the acquired UV coordinates of the boundary pixels, the size of the transition region, and the UV coordinate of each fragment in the transition region. Specifically, taking any one of the fragments (hereinafter referred to as the second fragment) as an example, let the UV coordinate of the second fragment be (U, V):
if the V coordinate of the second fragment is larger than the first difference value and smaller than the V coordinate of the upper boundary pixel point adjacent to the non-video picture area in the video picture area, indicating that the second fragment is located in the transition area, adjusting the first color value of the second fragment according to the ratio of the second difference value to the size of the transition area, wherein the first difference value is the difference value between the V coordinate of the upper boundary pixel point and the size of the transition area, the second difference value is the difference value between the V coordinate of the upper boundary pixel point and the V coordinate of the second fragment, and the adjustment formula is as follows:
Color' = Color × (V1 - V) / Offset    (Equation 1)
where Color is the first color value acquired by the second fragment from the first image; V is the V coordinate of the second fragment, with (V1 - Offset) < V < V1; V1 is the V coordinate of the upper boundary pixels adjacent to the non-video picture area in the video picture area; Offset is the size of the transition region; Color' is the adjusted first color value; and (V1 - V)/Offset is greater than 0 and less than 1.
If the V coordinate V of the second fragment is larger than the V coordinate V2 of a lower boundary pixel point adjacent to the non-video picture area in the video picture area and smaller than the sum of the V coordinate V2 of the lower boundary pixel point and the size Offset of the transition area, indicating that the second fragment is located in the transition area, adjusting the first color value of the second fragment according to the ratio of the V coordinate of the second fragment to the size of the transition area, wherein the adjustment formula is as follows:
Color' = Color × V / Offset    (Equation 2)
Wherein, Color is a first Color value obtained by the second fragment from the first image, V is a V coordinate of the second fragment, V2 < V < (V2+ Offset), V2 is a V coordinate of a lower boundary pixel point adjacent to a non-video picture region in the video picture region, Offset is a size of the transition region, Color' is the adjusted first Color value, and V/Offset is greater than 0 and less than 1.
If the U coordinate of the second fragment is larger than the U coordinate of a left boundary pixel point adjacent to the non-video picture area in the video picture area and smaller than the sum of the U coordinate of the left boundary pixel point and the size of the transition area, indicating that the second fragment is located in the transition area, adjusting the first color value of the second fragment according to the ratio of the U coordinate of the second fragment to the size of the transition area, wherein the adjustment formula is as follows:
Color' = Color × U / Offset    (Equation 3)
Wherein, Color is the first Color value of the second fragment obtained from the first image, U is the U coordinate of the second fragment, U1 < U < (U1+ Offset), U1 is the U coordinate of the left boundary pixel point adjacent to the non-video picture area in the video picture area, Offset is the size of the transition area, Color' is the adjusted first Color value, and U/Offset is greater than 0 and less than 1.
If the U coordinate of the second fragment is larger than the third difference value and smaller than the U coordinate of the right boundary pixel point adjacent to the non-video picture area in the video picture area, indicating that the second fragment is located in the transition area, adjusting the first color value of the second fragment according to the ratio of the fourth difference value to the size of the transition area, wherein the third difference value is the difference value between the U coordinate of the right boundary pixel point and the size of the transition area, the fourth difference value is the difference value between the U coordinate of the right boundary pixel point and the U coordinate of the second fragment, and the adjustment formula is as follows:
Color' = Color × (U2 - U) / Offset    (Equation 4)
where Color is the first color value acquired by the second fragment from the first image; U is the U coordinate of the second fragment, with (U2 - Offset) < U < U2; U2 is the U coordinate of the right boundary pixels adjacent to the non-video picture area in the video picture area; Offset is the size of the transition region; Color' is the adjusted first color value; and (U2 - U)/Offset is greater than 0 and less than 1.
It should be noted that, as shown in Equations 1 to 4, the first color value is multiplied by a coefficient greater than 0 and less than 1, so the adjusted first color value is darker than before the adjustment. That is, during rendering, the closer a pixel is to the boundary between the video picture area and the non-video picture area, the darker its color value, and the color gradually returns to the original color moving from the boundary pixels into the video picture area. The method of fig. 6 is therefore suited to scenes in which the non-video picture area of the first image is dark (such as black or dark gray).
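A minimal GLSL fragment shader sketch of S601 and S602 follows, applying Equations 1 to 4 exactly as written above; it is an illustration, not the patent's own code, and all uniform names are assumptions. Note that Equations 2 and 3 divide the raw V (respectively U) coordinate by Offset, which yields a coefficient between 0 and 1 only when the lower (respectively left) boundary lies near coordinate 0.

    precision mediump float;

    varying vec2 vUV;              // per-fragment UV from rasterization
    uniform sampler2D uFirstTex;   // first image
    uniform float uOffset;         // transition region size Offset, in UV units
    uniform float uV1;             // V coordinate of the upper boundary pixels
    uniform float uV2;             // V coordinate of the lower boundary pixels
    uniform float uU1;             // U coordinate of the left boundary pixels
    uniform float uU2;             // U coordinate of the right boundary pixels

    void main() {
        vec4 color = texture2D(uFirstTex, vUV);   // first color value
        float u = vUV.x;
        float v = vUV.y;

        // Only the R, G and B components are darkened here; leaving the A
        // component unchanged is an assumption of this sketch.
        if (v > uV1 - uOffset && v < uV1) {
            color.rgb *= (uV1 - v) / uOffset;     // Equation 1: near the upper boundary
        } else if (v > uV2 && v < uV2 + uOffset) {
            color.rgb *= v / uOffset;             // Equation 2: near the lower boundary
        } else if (u > uU1 && u < uU1 + uOffset) {
            color.rgb *= u / uOffset;             // Equation 3: near the left boundary
        } else if (u > uU2 - uOffset && u < uU2) {
            color.rgb *= (uU2 - u) / uOffset;     // Equation 4: near the right boundary
        }

        gl_FragColor = color;
    }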
S603: Rendering the grid to be rendered according to the first color value of each fragment, and obtaining and displaying the rendered image.
In this step, the created grid to be rendered is rendered according to the first color value of each fragment, and the rendered image is obtained and displayed on the display of the display device. As shown in fig. 5c, the rendered image gradually brightens from the boundary pixels toward the interior of the video picture region, until its color matches the original color of the pixels in the video picture. The color values of the fragments in the transition region adjoining the non-video picture area are thereby smoothed, and the image-flicker phenomenon caused by jaggies inside the image is reduced.
It should be noted that fig. 5a and fig. 5b are only an example: the video picture area of the first image is not necessarily adjacent to the non-video picture area on all sides. For example, in the first image illustrated in fig. 5d, only upper and lower boundary pixels (shown by dotted lines in fig. 5d) exist between the video area 501 and the non-video picture area 502, and the transition region adjacent to the non-video picture area within the video area is indicated by 503. In that case, only the UV coordinates of the upper and lower boundary pixels need to be acquired in the flow of fig. 6 to adjust the color values of the fragments in the transition region.
In the above embodiment of the present application, when jaggies exist in the region of the video picture area of the first image that adjoins the non-video picture area, the UV coordinates of the boundary pixels between the video area and the non-video area are acquired, and the first color value of each fragment in the transition region is adjusted according to the UV coordinate of the fragment, the UV coordinates of the corresponding boundary pixels, and the size of the set transition region. The color values of the fragments in the transition region adjoining the non-video picture area are thereby smoothed, reducing the abnormal image-flicker phenomenon caused by jaggies inside the rendered image.
In VR scenes where the background image is dark (e.g., black or dark gray), the method for reducing jaggies inside the image, shown in the flow of fig. 6, may also be used to reduce jaggies at the image edge; in that case the transition region is the edge region of the sampled image, as shown in fig. 3a.
It should be noted that the languages usable by the shaders in the embodiments of the present application include, but are not limited to, GLSL (the OpenGL Shading Language), HLSL (the shading language of Microsoft DirectX), Cg (C for Graphics, a shading language jointly developed by Microsoft and NVIDIA), and Unity3D shaders (the shader language of Unity3D).
Based on the same technical concept, embodiments of the present application provide a display device, which can implement the image display methods in fig. 4 and fig. 6 in the foregoing embodiments and achieve the same technical effects, and are not described herein again.
Referring to fig. 7, the display device includes a display 701, a memory 702, and a graphics processor 703. The display 701 is connected to the graphics processor 703 and configured to display images in the VR scene; the memory 702 is connected to the graphics processor 703 and configured to store computer instructions; and the graphics processor 703 is configured to execute the image display method according to the computer instructions stored by the memory 702.
The embodiment of the application also provides a computer-readable storage medium, and the computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions are used for enabling a computer to execute the image display method provided by the embodiment of the application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A display device comprising a display, a memory, and a graphics processor:
the display is connected with the graphics processor and is configured to display images in a Virtual Reality (VR) scene;
the memory, coupled to the graphics processor, configured to store computer instructions;
the graphics processor configured to perform the following operations in accordance with the computer instructions:
acquiring a first color value of a corresponding fragment from a first image and a second color value of the corresponding fragment from a second image according to the UV coordinate of each fragment, wherein the UV coordinate of each fragment is obtained by interpolation according to the UV coordinate of each grid vertex in a created grid to be rendered, the second image sequentially comprises a full transparent area, a semi-transparent area and an opaque area with preset sizes from outside to inside, and transparency components in the color values of pixels in different areas have different values;
determining a third color value of a corresponding fragment according to the respective first color value and second color value of each fragment, and performing transparency mixing on the third color value;
and rendering the grid to be rendered according to the third color value obtained by mixing the fragments to obtain a rendered image and displaying the rendered image.
2. The display device of claim 1, wherein the graphics processor determines the third color value of the corresponding fragment from the respective first and second color values of the respective fragments, and is specifically configured to:
taking the values of a red R component, a green G component and a blue B component in a first color value of a first fragment as the values of the R component, the G component and the B component in a third color value of the first fragment respectively, and taking the value of a transparency A component in a second color value of the first fragment as the value of the A component in the third color value of the first fragment, wherein the first fragment is any one of the fragments.
3. A display device, comprising a display, a memory, and a graphics processor, wherein:
the display, connected to the graphics processor, is configured to display images in a virtual reality (VR) scene;
the memory, coupled to the graphics processor, is configured to store computer instructions;
the graphics processor is configured to perform the following operations in accordance with the computer instructions:
acquiring a first color value of a corresponding fragment from a first image according to a UV coordinate of each fragment, wherein the UV coordinate of each fragment is obtained by interpolation according to the UV coordinate of each grid vertex in a created grid to be rendered;
adjusting a first color value of each fragment in a preset transition region, wherein the transition region is a region adjacent to a non-video picture region in a video picture region of the first image;
and rendering the grid to be rendered according to the first color value of each fragment to obtain and display a rendered image.
4. The display device of claim 3, wherein, when adjusting the first color value of each fragment in the preset transition region, the graphics processor is specifically configured to:
acquiring UV coordinates of boundary pixel points adjacent to a non-video picture area in a video picture area of the first image;
and adjusting the first color value of the corresponding fragment according to the acquired UV coordinate of the boundary pixel point, the size of the transition region and the UV coordinate of each fragment in the transition region.
5. The display device of claim 4, wherein, when adjusting the first color value of the corresponding fragment according to the acquired UV coordinates of the boundary pixel points, the size of the transition region and the UV coordinates of each fragment in the transition region, the graphics processor is specifically configured to:
if the V coordinate of a second fragment is larger than a first difference value and smaller than the V coordinate of an upper boundary pixel point adjacent to the non-video picture area in the video picture area, adjusting the first color value of the second fragment according to the ratio of a second difference value to the size of the transition region, wherein the first difference value is the difference between the V coordinate of the upper boundary pixel point and the size of the transition region, and the second difference value is the difference between the V coordinate of the upper boundary pixel point and the V coordinate of the second fragment; or
if the V coordinate of the second fragment is larger than the V coordinate of a lower boundary pixel point adjacent to the non-video picture area in the video picture area and smaller than the sum of the V coordinate of the lower boundary pixel point and the size of the transition region, adjusting the first color value of the second fragment according to the ratio of the V coordinate of the second fragment to the V coordinate of the lower boundary pixel point; or
if the U coordinate of the second fragment is larger than the U coordinate of a left boundary pixel point adjacent to the non-video picture area in the video picture area and smaller than the sum of the U coordinate of the left boundary pixel point and the size of the transition region, adjusting the first color value of the second fragment according to the ratio of the U coordinate of the second fragment to the size of the transition region; or
if the U coordinate of the second fragment is larger than a third difference value and smaller than the U coordinate of a right boundary pixel point adjacent to the non-video picture area in the video picture area, adjusting the first color value of the second fragment according to the ratio of a fourth difference value to the size of the transition region, wherein the third difference value is the difference between the U coordinate of the right boundary pixel point and the size of the transition region, and the fourth difference value is the difference between the U coordinate of the right boundary pixel point and the U coordinate of the second fragment;
wherein the second fragment is any one of the fragments in the transition region.
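
For claims 4 and 5, an illustrative Python rendition of the four transition-region cases is given below. For symmetry, every branch is written as distance-from-boundary divided by the transition size, which is one reading of the terser ratios in the claim text, and the unspecified "adjusting" is realized here by scaling the sampled color toward black; the identifiers and UV bounds are assumptions.

```python
def fade_factor(u, v, left, right, bottom, top, t):
    """Sketch: a 0..1 attenuation for a fragment whose UV lies in the
    transition region of width `t` just inside the video picture area
    bounded by (left, right, bottom, top) in UV space. The four branches
    mirror the four cases of claim 5."""
    f = 1.0
    if top - t < v < top:          # near the upper boundary pixel row
        f = min(f, (top - v) / t)
    if bottom < v < bottom + t:    # near the lower boundary pixel row
        f = min(f, (v - bottom) / t)
    if left < u < left + t:        # near the left boundary pixel column
        f = min(f, (u - left) / t)
    if right - t < u < right:      # near the right boundary pixel column
        f = min(f, (right - u) / t)
    return f

def adjust_first_color(rgba, u, v, bounds, t):
    # Scale the sampled color by the fade factor; the claims say "adjust"
    # without fixing the exact operation, so this choice is an assumption.
    left, right, bottom, top = bounds
    f = fade_factor(u, v, left, right, bottom, top, t)
    r, g, b, a = rgba
    return (r * f, g * f, b * f, a)

# e.g. a fragment 0.01 above the lower boundary of a full-frame picture
# area, with a 0.02-wide transition, keeps half of its color:
half = adjust_first_color((1.0, 1.0, 1.0, 1.0), 0.5, 0.01,
                          (0.0, 1.0, 0.0, 1.0), 0.02)
```
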
6. An image display method, comprising:
acquiring a first color value of a corresponding fragment from a first image and a second color value of the corresponding fragment from a second image according to the UV coordinate of each fragment, wherein the UV coordinate of each fragment is obtained by interpolation according to the UV coordinates of the grid vertices in a created grid to be rendered, the second image sequentially comprises a fully transparent area, a semi-transparent area and an opaque area with preset sizes from outside to inside, and the transparency components in the color values of pixels in different areas have different values;
determining a third color value of a corresponding fragment according to the respective first color value and second color value of each fragment, and performing transparency mixing on the third color value;
and rendering the grid to be rendered according to the mixed third color value of each fragment to obtain and display a rendered image.
7. The method of claim 6, wherein the determining a third color value of a corresponding fragment according to the respective first color value and second color value of each fragment comprises:
taking values of a red R component, a green G component and a blue B component in a first color value of a first fragment as values of the R component, the G component and the B component in a third color value of the first fragment respectively, and taking a value of a transparency A component in a second color value of the first fragment as a value of the A component in the third color value of the first fragment, wherein the first fragment is any one of the fragments.
8. An image display method, comprising:
acquiring a first color value of a corresponding fragment from a first image according to a UV coordinate of each fragment, wherein the UV coordinate of each fragment is obtained by interpolation according to the UV coordinate of each grid vertex in a created grid to be rendered;
adjusting a first color value of each fragment in a preset transition region, wherein the transition region is a region adjacent to a non-video picture region in a video picture region of the first image;
and rendering the grid to be rendered according to the first color value of each fragment to obtain and display a rendered image.
9. The method of claim 8, wherein the adjusting the first color value of each fragment in the preset transition region comprises:
acquiring UV coordinates of boundary pixel points adjacent to a non-video picture area in a video picture area of the first image;
and adjusting the first color value of the corresponding fragment according to the acquired UV coordinate of the boundary pixel point, the size of the transition region and the UV coordinate of each fragment in the transition region.
10. The method of claim 9, wherein the adjusting the first color value of the corresponding fragment according to the obtained UV coordinates of the boundary pixel points, the size of the transition region, and the UV coordinates of each fragment in the transition region comprises:
if the V coordinate of a second fragment is larger than a first difference value and smaller than the V coordinate of an upper boundary pixel point adjacent to the non-video picture area in the video picture area, adjusting the first color value of the second fragment according to the ratio of a second difference value to the size of the transition region, wherein the first difference value is the difference between the V coordinate of the upper boundary pixel point and the size of the transition region, and the second difference value is the difference between the V coordinate of the upper boundary pixel point and the V coordinate of the second fragment; or
if the V coordinate of the second fragment is larger than the V coordinate of a lower boundary pixel point adjacent to the non-video picture area in the video picture area and smaller than the sum of the V coordinate of the lower boundary pixel point and the size of the transition region, adjusting the first color value of the second fragment according to the ratio of the V coordinate of the second fragment to the V coordinate of the lower boundary pixel point; or
if the U coordinate of the second fragment is larger than the U coordinate of a left boundary pixel point adjacent to the non-video picture area in the video picture area and smaller than the sum of the U coordinate of the left boundary pixel point and the size of the transition region, adjusting the first color value of the second fragment according to the ratio of the U coordinate of the second fragment to the size of the transition region; or
if the U coordinate of the second fragment is larger than a third difference value and smaller than the U coordinate of a right boundary pixel point adjacent to the non-video picture area in the video picture area, adjusting the first color value of the second fragment according to the ratio of a fourth difference value to the size of the transition region, wherein the third difference value is the difference between the U coordinate of the right boundary pixel point and the size of the transition region, and the fourth difference value is the difference between the U coordinate of the right boundary pixel point and the U coordinate of the second fragment;
wherein the second fragment is any one of the fragments in the transition region.
CN202110291728.4A 2021-03-18 2021-03-18 Image display method and display equipment Active CN113093903B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110291728.4A CN113093903B (en) 2021-03-18 2021-03-18 Image display method and display equipment

Publications (2)

Publication Number Publication Date
CN113093903A (en) 2021-07-09
CN113093903B (en) 2023-02-07

Family

ID=76669276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110291728.4A Active CN113093903B (en) 2021-03-18 2021-03-18 Image display method and display equipment

Country Status (1)

Country Link
CN (1) CN113093903B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299266A (en) * 2021-12-27 2022-04-08 贝壳找房(北京)科技有限公司 Color adjustment method and device for model and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140002487A1 (en) * 2012-07-02 2014-01-02 Microsoft Corporation Animated visualization of alpha channel transparency
CN106600544A (en) * 2016-11-10 2017-04-26 北京暴风魔镜科技有限公司 Anti-aliasing method and anti-aliasing system based on texture mapping
CN106934838A (en) * 2017-02-08 2017-07-07 广州阿里巴巴文学信息技术有限公司 Picture display method, equipment and programmable device
US20170243390A1 (en) * 2014-04-05 2017-08-24 Sony Interactive Entertainment America Llc Gradient adjustment for texture mapping for multiple render targets with resolution that varies by screen location
CN107483771A (en) * 2017-06-13 2017-12-15 青岛海信电器股份有限公司 A kind of method and image display device of image generation
CN107545595A (en) * 2017-08-16 2018-01-05 歌尔科技有限公司 A kind of VR scene process method and VR equipment
CN108376417A (en) * 2016-10-21 2018-08-07 腾讯科技(深圳)有限公司 A kind of display adjusting method and relevant apparatus of virtual objects
US20190371013A1 (en) * 2018-06-05 2019-12-05 Kyocera Document Solutions Inc. Radial Gradient Module
CN112218132A (en) * 2020-09-07 2021-01-12 聚好看科技股份有限公司 Panoramic video image display method and display equipment

Similar Documents

Publication Publication Date Title
CN111508052B (en) Rendering method and device of three-dimensional grid body
KR102455696B1 (en) Graphics processing systems
CN103946895B (en) The method for embedding in presentation and equipment based on tiling block
US7742060B2 (en) Sampling methods suited for graphics hardware acceleration
JP5531093B2 (en) How to add shadows to objects in computer graphics
Lee et al. Real-time depth-of-field rendering using anisotropically filtered mipmap interpolation
CN111161392B (en) Video generation method and device and computer system
JP2016018560A (en) Device and method to display object with visual effect
TW201539374A (en) Method for efficient construction of high resolution display buffers
EP2705501A2 (en) Texturing in graphics hardware
US20120256906A1 (en) System and method to render 3d images from a 2d source
CA3045133C (en) Systems and methods for augmented reality applications
GB2546720B (en) Method of and apparatus for graphics processing
TW201828255A (en) Apparatus and method for generating a light intensity image
CN114758051A (en) Image rendering method and related equipment thereof
CN113093903B (en) Image display method and display equipment
US11263805B2 (en) Method of real-time image processing based on rendering engine and a display apparatus
JP7460641B2 (en) Apparatus and method for generating a light intensity image - Patents.com
CN111882498A (en) Image processing method, image processing device, electronic equipment and storage medium
JP2005346417A (en) Method for controlling display of object image by virtual three-dimensional coordinate polygon and image display device using the method
CN115049776A (en) Video rendering method and device, storage medium and electronic equipment
Smit et al. The design and implementation of a vr-architecture for smooth motion
CN116450017B (en) Display method and device for display object, electronic equipment and medium
CN113115018A (en) Self-adaptive display method and display equipment for image
ZEHNER Landscape visualization in high resolution stereoscopic visualization environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant